AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409

11 Oct 2023 08:45h - 09:45h UTC


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The expanded summary examines the impact of artificial intelligence (AI) on global security from various perspectives. One viewpoint raises concerns about the potential for AI to make the world more insecure, particularly in the context of warfare. It points to a shift away from the massive retaliation strategy of the past towards consideration of preemptive strikes: if one side perceives the other as holding superior AI capacities on the battlefield, a preemptive strike may appear advantageous. Overall, the sentiment towards the effect of AI on world security is negative.

Furthermore, the development of deep learning has raised worries about biological warfare: with AI and deep learning, generating bioweapons has become easier and more accessible, posing a significant threat, particularly from non-state actors. This argument emphasises the need to ensure biosecurity and peace. The sentiment surrounding this issue is also negative.

In addition to the concerns about AI in warfare and biological warfare, ethical considerations play a crucial role in the development and deployment of autonomous weapon systems. It is recognized that there is a need for ethical principles to guide the use of AI in armed conflicts. The sentiment regarding this perspective is neutral, but it highlights the importance of addressing ethical issues in this domain.

On the other hand, AI can potentially be used to reduce collateral damage and civilian casualties in conflict situations. This observation suggests a potential positive impact of AI on global security, as it can aid in minimizing harm during armed conflicts. The sentiment towards this notion is also neutral.

In conclusion, the analysis reveals mixed perspectives on the impact of AI on global security. While there are concerns regarding its potential to make the world more insecure, particularly in warfare and biological warfare, there is also recognition of the potential benefits of AI in reducing collateral damage and civilian casualties. It is crucial to ensure that ethical principles are followed in the development and deployment of AI in armed conflict situations. Additionally, the maintenance of biosecurity and peace is of utmost importance. These factors should be considered to navigate the complex landscape of AI and global security.

Fernando Giancotti

A recent research study conducted on the ethical use of artificial intelligence (AI) in Italian defence highlights the importance of establishing clear guidelines for its deployment in warfare. The study emphasises that commanders require explicit instructions to ensure the ethical and effective use of AI tools.

Ethical concerns in the implementation of AI in defence are rooted in the inherent accountability that comes with the monopoly on violence held by defence forces. Commanders worry that failure to strike the right balance between value criteria and effectiveness could put them at a disadvantage in combat. Additionally, they express concerns about the opposition’s adherence to the same ethical principles, further complicating the ethical landscape of military AI usage.

To address these ethical concerns and ensure responsible deployment of AI in warfare, the study argues for the development of a comprehensive ethical framework on a global scale. It suggests that the United Nations (UN) should take the lead in spearheading a multi-stakeholder approach to establishing this framework. Currently, different nations have their own frameworks for the ethical use of AI in defence, but the study highlights the need for a unified approach to tackle ethical challenges at an international level.

However, the study acknowledges the complexity and contradictions involved in the process of addressing ethical issues related to military AI usage. It notes that a perfect, mutually agreed-upon ethical framework may never be reached. Despite this, it stresses the necessity of pushing for compliance through intergovernmental processes, although the prioritisation of national interests by some countries further complicates the establishment of universally agreed policies.

The study brings attention to the potential consequences of the mass use and abuse of AI, noting that it could prove either stabilising or destabilising for the world. It recognises that AI has the capacity to bring augmented cognition, which can help prevent strategic mistakes and improve decision-making in warfare. For example, historical wars have often been the result of strategic miscalculations, and the deployment of AI can help mitigate such errors.

While different nations have developed ethical principles related to AI use, the study points out the lack of a more general framework for AI ethics. It highlights that these principles vary across countries and organisations, including the UK, the USA, Canada, Australia, and NATO. Therefore, there is a need for a broader ethical framework that can guide the responsible use of AI technology.

The study cautions against completely relinquishing the final decision-making power to AI systems. It emphasises the importance of human oversight and responsibility, asserting that the ultimate decision for actions should not be handed over to machines.

Furthermore, the study highlights the issue of collateral damage in current defence systems and notes that specific processes and procedures are in place to evaluate compliance and authorise engagement. It mentions the use of drones for observation to minimise the risk of unintended harm before any decision to engage is made.

In conclusion, the research on ethical AI in Italian defence underscores the need for clear guidelines and comprehensive ethical frameworks to ensure the responsible and effective use of AI in warfare. It emphasises the importance of international cooperation, spearheaded by the UN, to address ethical challenges related to military AI usage. The study acknowledges the complexities and contradictions involved in this process and stresses the significance of augmenting human decision-making with AI capabilities while maintaining human control.

Paula Gurtler

The discussion surrounding the role of Artificial Intelligence (AI) in the military extends beyond lethal autonomous weapon systems (LAWS). It includes a broader conversation about the importance of explainable and responsible AI. One key argument is the need for ethical principles to be established at an international level. This suggests that ethical considerations should not be limited to individual countries but should be collectively agreed upon to ensure responsible AI usage.

Another significant aspect often overlooked when focusing solely on lethal autonomous weapon systems is the impact of AI on gender and racial biases. By disregarding these factors, we fail to address the potential biases embedded within AI algorithms. Therefore, it is crucial to consider the wider implications of AI and its contribution to societal biases, ensuring fairness and equality.

Geopolitics and power dynamics further complicate the utilization of AI in the military. With nations vying for supremacy, AI becomes entangled in strategic calculations and considerations. The use of AI in military operations can potentially affect global power balances and lead to unintended consequences. This highlights the intricate relationship between AI, politics, and international relations, which must be navigated with care.

Although various ethical guidelines already exist for AI deployment, one question arises: do we require separate guidelines specifically designed for the military? The military context often presents unique challenges and ethical dilemmas, differing from other domains where AI is utilized. Therefore, there is a debate over whether existing guidelines adequately address the ethical considerations surrounding AI in military applications or if specific guidelines tailored to the military context are necessary.

In conclusion, the debate regarding AI in the military extends beyond lethal autonomous weapon systems. It encompasses discussions about explainable and responsible AI, the need for international ethical principles, the examination of gender and racial biases, the influence of geopolitics, and the necessity of specific ethical guidelines for military applications. These considerations highlight the complex nature of implementing AI in the military and emphasize the importance of thoughtful and deliberate decision-making.

Rosanna Fanni

During the discussion, the speakers explored the potential dual use of artificial intelligence (AI) in both civilian and military applications. They acknowledged that AI systems originally developed for civilian purposes could also have valuable uses in defense. The availability of data, machine learning techniques, and coding assistance makes it feasible for AI to be applied in both contexts.

A major concern raised during the discussion was the lack of ethical guidelines and regulations in the defense realm. While there are numerous ethical guidelines, regulations, and laws in place for the civilian use of AI, the defense sector lacks similar principles. This highlights a disconnect between the development and use of AI in civilian and defense contexts. Developing ethical guidelines and regulations specific to AI in defense applications is crucial to ensure responsible and accountable use.

The European Union’s approach to AI was criticized, particularly the exclusion of defense applications from the AI Act. The Act is billed as risk-based, yet it excludes precisely the applications whose risks may be highest, which contradicts that approach and raises questions regarding the consistency and fairness of the regulatory framework. The speakers argued that defense applications should not be overlooked and should be subject to appropriate regulations and guidelines.

Another important issue discussed was the need for international institutions to take on more responsibility in terms of pandemic preparedness. The COVID-19 pandemic has demonstrated the necessity of being prepared to tackle challenges and risks arising from the rapid spread of bio-technology. The speakers emphasized that institutions should be better prepared to ensure the protection of public health and well-being. Moreover, they stressed that equal distribution of resources is crucial to prevent global South nations from being left behind in terms of bio-risk preparedness. The speakers highlighted the importance of avoiding a race between countries in preparedness and ensuring that global South countries, which often lack resources, are provided with the necessary support.

In conclusion, the discussion revolved around the need to address the potential dual use of AI, establish ethical guidelines and regulations for defense applications, critique the exclusion of defense applications in the European Union’s AI Act, and emphasize the role of international institutions in pandemic preparedness and equal distribution of resources. These insights shed light on the ethical and regulatory challenges associated with AI, as well as the importance of global collaboration in addressing emerging risks.

Pete Furlong

The discussion revolves around the impact of artificial intelligence (AI) and emerging technologies on warfare. It is argued that AI and other technologies can be leveraged in conflicts, accelerating the pace of war. These dual-use technologies are not specifically designed for warfare but can still be used in military operations. For example, AI systems that were not initially intended for the battlefield can be repurposed for military use.

The military use of AI and other technologies has the potential to significantly escalate the pace of war. The intent is to accelerate the speed and effectiveness of military operations. However, this raises concerns about the consequences of such escalated conflicts.

One of the challenges in implementing AI principles is the broad interpretation of these principles, as different countries may interpret them differently. This poses challenges in creating unified approaches to AI regulations and ethical considerations. While broad AI principles can address a variety of applications, there is a need for more targeted principles that specifically address the issues related to warfare and the military use of AI.

Discussions about the use of AI and emerging technologies in warfare are increasing in various summits and conferences, the UK’s AI Safety Summit being one example. Additionally, concern about the use of biological weapons is growing: it is noted that a biological pathogen used as a weapon only needs to work once, unlike a drug, which needs to work every time. This raises significant ethical and safety concerns.

AI’s capabilities depend on the strength of its sensors: an AI system’s cognition is only as good as its sensing, so the value and effectiveness of AI in warfare depend on the quality and capabilities of the sensors that feed it.

One potential use of AI in warfare is to better target strikes and reduce the likelihood of civilian casualties. The aim is to enhance precision and accuracy in military operations to minimize collateral damage. However, the increased ability to conduct targeted strikes might also lead to an increase in the frequency of such actions.

One of the main concerns regarding the use of AI in warfare is the lack of concrete ethical principles for autonomous weapons. The REAIM (Responsible AI in the Military Domain) Summit aims to establish such principles, but for now it amounts to a call to action, and the gap in concrete ethical guidelines remains. The UN Convention on Certain Conventional Weapons has also so far been unsuccessful in effectively addressing this issue.

In conclusion, the discussions surrounding AI and emerging technologies in warfare highlight the potential benefits and concerns associated with their use. While these technologies can be leveraged to enhance military capabilities, there are ethical, safety, and interpretational challenges that need to be addressed. Targeted and specific principles related to the military use of AI are necessary, and conferences and summits play a crucial role in driving these discussions forward. The impact of AI on targeting precision and civilian protection is significant, but it also raises concerns about the escalation of conflicts. Ultimately, finding a balance between innovation, ethics, and regulation is essential to harness the potential of AI in warfare while minimizing risks.

Shimona Mohan

The discussions highlight the significance of ethical and responsible AI methodologies in military applications. Countries such as the United States, United Kingdom, and France have already begun implementing these strategies within their military architectures. However, India chose not to sign the REAIM summit’s global call for responsible AI in the military domain, prioritising national security over international security mechanisms and regulations.

The absence of national policy prioritisation of military AI poses challenges in forming intergovernmental actions and collaborations. Without a clear policy framework, it becomes difficult for countries to establish unified approaches in addressing the ethical and responsible deployment of AI in the military domain.

Gender and racial biases in military AI are also raised as important areas of concern. Studies have shown significant biases in AI systems: a Stanford study of 133 biased AI systems across sectors found that 44% exhibited gender biases and 26% exhibited both gender and racial biases. The MIT Media Lab’s Gender Shades study found that widely deployed facial recognition software failed to recognise darker-skinned female faces up to 34% of the time. Such biases undermine the fairness and inclusivity of AI systems and can have serious implications in military operations.

The balance between automation and ethics in military AI is emphasised as a crucial consideration. While performance in military operations is vital, it is equally important to incorporate ethical considerations into AI systems. The idea is to ensure that weapon systems maintain their level of performance while also incorporating ethical, responsible, and explainable AI systems.

The use of civilian AI systems in conflict spaces is identified as a noteworthy observation. Dual-use technologies like facial recognition systems have been employed in the Russia-Ukraine conflict, where soldiers were identified through these systems. This highlights the potential overlap between civilian and military AI applications and the need for effective regulations and ethical considerations in both domains.

Additionally, the potential of AI to exacerbate bio-safety and bio-security risks is mentioned. A Netflix documentary titled “Unknown: Killer Robots” showcased the risk potential of AI in generating poisons and biotoxins. However, with the right policies and regulations in place, researchers and policymakers remain optimistic about preventing bio-security risks through responsible and ethical AI practices.

In conclusion, ethical and responsible AI methodologies are crucial in military applications. The implementation of these strategies by countries like the US, UK, and France demonstrates the growing recognition of the importance of ethical considerations in AI deployment. However, the absence of national policy prioritisation and India’s refusal to sign the global call for Responsible AI highlight the complex challenges in achieving a global consensus on ethical AI practices in the military domain. Addressing gender and racial biases, finding a balance between automation and ethics, and regulating the use of civilian AI systems in conflict spaces are key areas that require attention. Ultimately, the responsible and ethical use of AI in military contexts is essential for ensuring transparency, fairness, and safety in military operations.

Session transcript

Rosanna Fanni:
which is here at least in Japanese time, quite late in the afternoon on the third day of the IGF. My name is Rosanna Fanni. I have been actually working for the Centre for European Policy Studies, in short CEPS, until last week, and this session is a very special one because it’s a topic that I’m personally very passionate about, that I’ve been working on for quite some time now, and it is also a special session because I think the topic that we are going to address today is normally not really at the center of the IGF discussions, which is of course the use of AI and emerging technologies in the broader defense context. Why is that topic relevant for the IGF? Well, let’s just consider for a moment that almost all AI systems that we are currently speaking about, the models that are currently being developed, they are of course used for civilian purposes but at the same time they could also be used for defense purposes. So that means they are dual use, and as we also know, today literally anyone has access to data and can easily set up machine learning techniques or algorithmic models and can use coding assistants such as ChatGPT, and so this means that basically almost all the technology, the computing power, programming languages, code, encryption information, big data algorithms and so on has become dual use, and of course the military, not only the civilian sector but also the military, has high stakes in understanding and using these technological tools to their own advantage. Of course we know that these developments are not really new, and things started with DARPA, which some of you may be familiar with, the US defense in-house R&D think-tank, and DARPA was already at the time central to developing the internet, software and also AI that we all use today. But as we now see with the conflict in Ukraine, AI is already in full-scale use by defense actors and also has the big potential to change power dynamics considerably, and our panelists will speak to that. While we have seen numerous developments and the use of AI already in those contexts, we see that there’s quite a disconnect to the civilian developments in AI, which include a large number of ethical principles, ethical guidelines, regulation, soft law, hard law and so on. However, we don’t really see that happening yet in the defense realm, which for me is quite concerning because the stakes and the risks in the defense context may even be higher than in civilian ones. And this is also, I think, a great example, or a surprising example, when we look at the current European Union approach to AI. So the much applauded AI Act, which is risk-based, actually excludes virtually all AI applications that are used in a defense context. So defense AI is completely excluded from the scope of the AI Act, which is funny because it’s called a risk-based approach, right? So this just as a means of introduction, and we have a lot of urgent questions and very few answers so far, so I hope that our panelists will enlighten us. 
I will introduce the panelists in the order that they are speaking, and before they are speaking, so I will now introduce the first speaker. I also first want to introduce Paula, my now former colleague who’s based in Brussels and joins us today as our online moderator, and we have foreseen half an hour, or yeah, maybe the second half of the session, so to say, where we want to hear from you and answer any questions that you have, with the panelists obviously, so get your questions ready. And I think now it’s time to dive right into the discussion, and to do that I will introduce the first speaker, who’s joining us here in person. I will introduce Fernando Giancotti, and he’s a lieutenant general of the Italian Air Force, retired now, and he’s also a former president of the Centro Alti Studi per la Difesa, which is in short the Italian Defense University. Fernando, the floor is yours.

Fernando Giancotti:
Thank you very much Rosanna. I’m very honored to be here to share some thoughts about this topic, which in the great debate about the ethical use of emerging and disruptive technology kind of lags behind. We have heard of so many organizations involved, and rightly so, in ensuring ethical behaviors in many of the domains of our lives, and we don’t see as many taking care of one of the most dangerous and relevant threats to our security and to the lives of our fellow people. So this panel, which I think is the only one here about the topic, is meant to give, let’s say, a call on this. Wars and conflicts are on the rise, unfortunately. I don’t need to expand on that; I think we have enough from the media, and in the field a lot of violence is going on, and while this is in the very forefront of attention, not so the implications of what is already being used in the field. Yeah, closer, yeah. So I argue that this is important both for ethical and functional reasons. According to a research study recently published about ethical AI in the Italian defense, a case study, commanders need clear guidance about these new tools, because first, the ethical awareness is ingrained both in education and in the system, which implies also swift penalties if you fail, and this is due, in democracies, to the assignment of the monopoly of violence to the armed forces. So ethical awareness is high, and also, on very practical grounds, there are accountability issues: commanders are afraid that without clear guidelines they will have to decide and then they will be held accountable for that. And furthermore, and this is another major point that came out from the research, which by the way was authored by the moderator and co-authored by me, you can find it on LinkedIn, there is what I call the bad guy dilemma, which is a very functional problem about ethics in AI and EDTs in general applied to warfare, which is: if we do not carefully balance value criteria with effectiveness, and so we don’t do a good job in finding that balance, and the bad guys do not, let’s say, follow the same principles, then we will be at a disadvantage. This is another worry that came out from the research. So now let’s go very quickly, in a few words, through what’s going on about this on the battlefield, in the industry and in the policy realm. On the battlefield we can see three main timelines: before Ukraine, during Ukraine, and we can imagine what’s going to happen after Ukraine, given many indicators. Before Ukraine, AI was not much used in warfare at all, but for experiments and a few isolated cases, but with the breakout of the Ukraine war things have changed massively, which means that there has been a strive to employ all the means available. There is a very recent report, from a few weeks ago, from the Center for Naval Analyses, which shows that there is no evidence of extensive use of AI in the Ukraine war, except for decision-making support, which is of course critical. Now, still, there are several systems that can employ AI and maybe they have in cases, and there is for sure a big investment in trying to increase the capabilities of artificial intelligence in warfare. What we can expect, given also the big, huge programs that are being developed, which already by design include artificial intelligence, is that there will be a huge increase in that. And the industry, as a matter of fact, also because it’s largely a dual-use industry, is working much on this, and we cannot expand on the systems that are being developed, but really there would be a major change in the nature of 
warfare due to AI. So this is briefly what happens now on the battlefield, what happens in the industry with governments’ commissions, and now what happens in the policy realm. In the policy realm, the EU does not regulate defense because it’s outside the treaties, but the EU is doing many things that are outside the treaties regarding defense, especially for the Ukraine war, so it’s kind of a fig leaf, let’s say, I think, at this point. The UN as a major international stakeholder has focused on the highly polarized lethal autonomous weapons initiative, which doesn’t move forward, but there is no comprehensive approach to tackle this with a more general framework. Single nations have developed ethical frameworks for AI in defense, but by definition, and remember the bad guy dilemma, these kinds of frameworks are relevant if they can be generalized at the largest possible level. So we should, I think, according also to the multi-stakeholder approach that is typical for example of this forum, have the UN join and lead the way for a comprehensive ethical framework, kind of a digital jus in bello, in a multi-stakeholder approach. The UN was born out of a terrible war; its core business is to prevent and mitigate conflicts. And there are some good news: as Peggy Hicks of the Office of the High Commissioner for Human Rights said on Monday, we don’t need to invent from scratch an approach to digital rights because we have decades of experience in the application of the human rights framework. I can say that we don’t need to invent from scratch a way to implement and operationalize ethical principles in operations, because we have a decades-long approach in the application of international humanitarian law, with procedures and structures dedicated to that. The bad news is that we don’t have those general principles to operationalize at the strategic, operational and tactical level. Before coming here, in my previous job, I have been the president of the Center for Higher Studies, which is our National Defense University, but also, before that, the operational commander of the efforts, and I can guarantee you that every operation has a very tight, rigorous approach for compliance with international humanitarian law, which goes down to specific rules for the soldier on the battlefield, rules of engagement and things like that. So my thought, which can of course be discussed, that’s very simplistic put in this way and maybe in the question and answer we can expand, is that we should really get a general effort, because I think there is evidence that these ugly things that are wars and conflicts are not going away; we better try to do our best to mitigate them. Thank you

Rosanna Fanni:
thank you, thank you Fernando for your contributions, and I guess we will, I have already some questions prepared, we will come back to you when we speak about the question and answer session, so yes, exactly, I will hand over to the next speaker, who’s also here with us in person, Pete Furlong, and he’s a senior policy analyst at the Internet Policy Unit at the Tony Blair Institute, a think tank, and yeah, the floor is yours Pete. Sure, yeah, and thank

Pete Furlong:
you for having me here I think you know it’s important when we talk about these issues that you know we kind of ground it in you know specific technologies and think about what technologies we’re talking about I think you know like Fernando mentioned that you know we can often get caught in these conversations about lethal autonomous weapons that can be you know pretty fraught but you know there’s a lot of other technologies that are important to talk about and you know especially when you think about the emerging and disruptive technology beyond just AI and I think you know when you look at the Ukraine war like things like satellite internet are a very good example of that but also kind of the broader use of drones in the warfare and you know I think it’s important to realize that extends beyond just traditional military drones but also through to like consumer commercial and hobbyist drones as well and I think that you know when we talk about things like that it’s important to realize that you know these systems weren’t designed for the battlefield and I think that’s often the case for a lot of dual-use AI systems as well and they weren’t designed you know with the maybe the reliability and the performance expectations that you know a war you know brings and you know the reality is that when you’re fighting for your life you’re not necessarily thinking about these issues and so it’s important that you know in these forums that we start thinking about and talking about these issues because you know this technology has a really transformative effect on these conflicts and I think you know the use of consumer drones in Ukraine is a great example of you know an area where Ukraine’s been able to leverage you know you know US and Turkish sophisticated attack drones but also you know simple like custom-built even like DJI which is like a you know consumer commercial drone provider drones from from different companies as well and I think that you know you’re really blurring the lines between these different types of technologies which have different governance mechanisms and different rules in place so I think that’s important for us to think about and I think the one other thing that I would bring up is that you know again moving beyond just the discussion about AI and weaponry but also by the military more broadly you really have the potential to escalate the pace of war significantly and I think that’s something for us to really consider when we talk about things like you know ensuring there’s space for diplomacy ensuring there’s space for for other interventions as well and and again really the intent is to accelerate the pace of war and we need to to really think about the consequences of that as well so thank you

Rosanna Fanni:
thank you thank you very much and yeah also good that you came back to this aspect that I already mentioned in the beginning that the war so to say are now almost as you say like a community you know because everyone can build a drone can develop a model and and kind of be an own actor almost and and that of course has manifold implications yeah thanks a lot and when I hand it over to the third speaker who joins us online from New Delhi and I hope we are also able to to see her on screen soon and introduce in the meantime and Shimona Mohan is a junior fellow at the Center for Security Strategy and Technology at the Observer Research Foundation and also think tank and based in India and New Delhi. Shimona the floor is yours. Thank you Rosanna I just wanted to check if you can see and hear me well before I start off. Excellent we can see and hear you.

Shimona Mohan:
Fantastic okay thank you so much for for having me on this panel. It’s the perpetual blessing and curse of having talented panelists that my job is simultaneously easier and harder but I hope the the issues that I will be speaking about will be of value as well. So since Fernando and Pete sort of spoke about why ethics are important already I will just probably take the conversation further and into the domain of two separate methodologies around AI in defense applications that we have seen being employed recently, and how they’ve sort of come about in the defense space. So the first one of which, which I’d like to sort of give a characterization around is explainable AI. And while there is no consolidated definition or characterization of what explainable AI is, it’s usually understood as computing models or best practices or a mix of both technical and non technical issue areas, which are used to make the black box of the AI system a little bit more transparent, so that you can sort of delve in and see if there are any issues or if there are any blocks that you’re facing with your AI systems in, in both civilian and military applications, you can sort of go in and fix them. So that’s definitely something that we’re seeing coming up a lot. And as Rosanna mentioned earlier, DARPA was actually at the forefront of this research a few years ago. And now we’re seeing a lot more players sort of come into this and sort of adopt XAI systems, or at least put in resources into the research and application side of them. So for example, Sweden, the US and the UK have already started research activities around using XAI in their military AI architectures. And then we also have a lot of civilian applications, which are being explored by the EU, as well as market standards by industry leaders like, like Google, IBM, Microsoft, and numerous other smaller entities which have much more niche sectoral applications around this. So that’s one. Another thing that a lot of us are sort of noticing in the defense AI space now is something called Responsible AI. And Responsible AI is sort of understood as this broad based umbrella terminology that, that encompasses within it stuff like trust, trustworthy AI, reliable AI, even explainable AI to a degree. And it’s mostly just the practice of sort of designing and development and also deploying AI, which sort of impacts society in a fair manner. So countries like the US, the UK, Switzerland, France, Australia, and a number of countries under NATO have also sort of started to talk about and implement ethical and responsible AI strategies within their military architectures. And for those who work around this area, they may also be aware about the Responsible AI in the military summit in the Netherlands, which was convened earlier this year, as sort of a global call to ensure that Responsible AI is part of military development strategies for about 80 countries that were there at this at this particular meeting. But the interesting thing, and this is where I’d like to bring in a geopolitical angle to this, is also the fact that out of those 80 governments that were present at this meeting, only about 60 of them actually signed this global call. And it’s interesting to note that the country where I come from, India was one of the 20 that did not sign this call. 
So the analysis for this ranges from considerations around national security, and a prioritization of national security over international security mechanisms, which is something that countries like India have pursued before as well. So India is actually one of the four or five countries which have not signed the nuclear non-proliferation treaty either. And that was on the same sort of principles of ensuring its national security over aligning itself with international security rules and regulations, and softer laws. So that’s an interesting dilemma here. And another dilemma that I’d like to sort of put my finger into is something that Fernando mentioned earlier, which is the bad guy dilemma. And of course, there’s no clear answers to sort of solve this bad guy dilemma. But something that’s been brought about by the responsible AI in the military domains, discourse around military AI, is the fact that AI based weapon systems like lethal autonomous weapons and other defense aids, which have not been screened for responsible AI considerations, carry a lot of tangible risks of exhibiting biased or error prone information processing for the operational environment in which they are deployed. So systems which actually don’t have responsible AI or ethical AI frameworks around them also pose unintentional exclusive harms, not only towards adversaries against which these military AI systems are employed, but also possibly for the entity employing them itself, which makes their use unnecessarily high risk, despite their other benefits which they give to the employing entity. And while we’re on the subject of ethics and AI, I’d also like to just spotlight another sort of aspect of this ethics debate, which is gender and racial biases in military AI. So we already know that there’s a ton of biases that AI brings to the fore, not only in civilian applications, but also in defense applications. And something that’s given a little bit more, a little bit less emphasis on is gender and racial biases. So gender is sort of seen as a soft security issue in policy considerations, as opposed to hard security deliberations, which are given a lot more focus. And the issue of gender in tech, whether it’s in terms of female workforce participation, is also characterized as sort of an ethical concern rather than a core tech one. So this characterization of gender as an add on, essentially makes it sort of a non issue in security and tech agendas. And if at all it is present, it’s it’s usually put down as a checkbox to performatively satisfy policy or compliance related compulsions. But we’ve seen that gender and race biases in AI systems can have a lot of a lot of devastating effects on on the applications where they are employed. So there was actually a Stanford study a few years ago, on publicly available information on 133 biased AI systems. And this was across different sectors. So it’s not just limited to military, but across the ambit of dual use AI systems. And about 44% of these actually exhibited gender biases, amongst which 26 included both gender and racial biases. 
So similar results have also been obtained by the MIT Media Lab, which conducted the gender shade study for AI biases, where we’re seeing that the the softwares, the facial recognition softwares, which are which are popularly employed in a lot of places now, recognize, say, for example, white male faces quite accurately, but they don’t recognize darker female faces up to 34% of the time, which means that if your particular AI system that you employ in your military architecture, has this kind of biased facial recognition system, 34% of the time, when it looks at a human, it doesn’t recognize her as a human at all, which is, of course, a huge ethical issue, as well as an operational issue. So going back to the argument, given by Fernando, that ethics are not only just, just just a soft, soft issue, they also have a lot of operational risks attached to them. And my last point here would then be also about how we are seeing these sort of blanks emerge in how military AI is, is is developing in terms of both gender and races. So the first and these blanks are sort of threefold. So the first blank here would be the technologically blank itself, which means when you have and are developing these AI systems that you have skewed data sets, or you have uncorrected biased algorithms, which are sort of producing these biases in the first place. The second blank then would be your legal systems, your weapons review processes, which don’t have gender reviews, gender sensitive reviews, or, or race specific reviews, or any other particular aspect of your military system, which could be biased. And then the third set of blanks would be a normative blanks, which would be in terms of a lack of policy discourse around AI biases in military systems, and how they affect the populations which they affect most. So the idea for us now is to sort of take forward these conversations about ethics, about biases, about geopolitical specificities in military AI policy conversations, and sort of put them wherever we can so that these don’t get left behind. And we are not sort of only looking at military systems as killing machines. And not as systems that need to be regulated according to a certain set of rules and regulations. Thank you so much. And I look forward to all the questions.

Rosanna Fanni:
Thank you. Thank you, Simona. That was also super insightful. Also, thanks a lot for raising the issue of gender and race, which I think is already a big issue in the civilian context. But again, this is replicated in a defense context, and definitely not sufficient attention at the moment, at least as paid to this issue. Okay, so that concludes the first round of the interventions. Thanks a lot to all the speakers. I will hand it now over to our online moderator, Paula, to give us a short summary, so to say, of the points that we’ve just heard. And then maybe also already start with a question and answer session. So I’m taking some online questions first. And I also, I will invite you, the three of you, once you answer, you can also refer to the points that you made, as we don’t have a, so to say, a circle of points or reactions from your side. But feel free to include them in the question and answer session. Okay, over to you, Paula.

Paula Gurtler:
Yes, thank you. Greetings from Brussels, where it is still the morning. So thank you for an interesting start into my day, so to speak. For me, there are so many interesting points that you’ve raised that it’s really difficult to just settle on three takeaways. For me, the first one would be, though, that we need ethical principles at international levels, so that we need to find some kind of agreement so that we can move forward with ensuring more ethical practices in military AI applications. That also relates to the accountability issues that were raised by the commanders in the Italian military defence. The second one, for me probably the main takeaway of the entire session, is that the conversation is much bigger than LAWS. And by just focusing on lethal autonomous weapon systems, we really miss out on much of the conversation on explainable AI, responsible AI, and also what you mentioned, Shimona, in the last intervention, that we really miss out on gender and racial biases if we just focus on LAWS and these extreme use cases. So I think, really, that the conversation is bigger than LAWS is one of the main takeaways. And another one that complicates the whole use of AI in the military is, of course, geopolitics and the power plays that are pitting stakeholders against each other. So I think this is already so many interesting points. And I would love to give our online audience a chance to raise their questions. Please feel free to raise your hands, type in the chat if you have any questions. But if there aren’t any, I have my own questions, which I’m really excited to ask. So I will just start off with my own question and then come back to the online participants. Please don’t hesitate to be in touch via the chat. So what I’m wondering is, on the ethical principles that we need for AI and military use, I’m wondering, do we need different ones than those that we already have? We know how many ethics guidelines are floating around and about. And I’m wondering, do we need different ones for the use of AI in military contexts? I also heard bias plays a role, responsible AI, explainable AI. Do we need ethical principles that look different to those that we have right now to cover the military domain? So thank you so much. I’m really looking forward to the continuation of the discussion.

Rosanna Fanni:
Okay. I don’t know who wants to address this question. And then we also, of course, go to an in-person round of questions. You will not be forgotten, but maybe we can first address this one. I don’t know who wants to go first. Okay. We do first this round, and then we’ll have another round of questions with the in-person one. Yeah. Go.

Pete Furlong:
Yeah. I mean, I think it’s a great question. And I think, you know, in an ideal world, you know, these principles would be the same. And I think, you know, that would be great. But I also just think there’s an element of maybe not necessarily do we need different principles, but do we need maybe more targeted principles that address some of the issues, you know, that we’re seeing more specifically? Because I think, you know, again, most of these AI principles are very useful and important, but they’re, you know, intentionally broad, because they’re meant for a wide variety of applications. And I think that, you know, that poses a challenge when we talk about how do we implement them? And, you know, you can end up in a situation where different countries interpret these things very differently. And I think that’s maybe the risk in having, you know, pretty broad, you know, interpretation here.

Rosanna Fanni:
Shimona and then Fernando, you also want to say something? Yeah, maybe we have Shimona first and then you. We will have Shimona first and then you can go. Yeah. Please.

Shimona Mohan:
Thank you, Rosanna. Just to add on to Pete’s already very substantive point, I would also like to highlight the fact that in the absence of national policy prioritization of military AI, it’s very hard for countries to actually go ahead and form intergovernmental actions around military AI. So while we speak of ethical principles, since AI itself is not really a tangible entity that you can control via borders, the most effective sort of ethical principles might only emerge from intergovernmental processes around this. But to get to that step where we are discussing substantive ethical principles in substantive intergovernmental processes, I think the first step is to have a good national AI policy for all the countries who are currently developing military AI systems or any other systems around AI which might have military offshoots. So that would be sort of my two cents on this.

Fernando Giancotti:
Very quickly, I think that the quality of the process does not change from what has always happened. Also, for all the other ethical issues that have been raised and tackled, for example, after World War II with the constitution of the UN and then the implementation of the agreed guidelines, there have always been a very dialectic and contradictory process, and we will never get a perfect framework everybody is going to comply with. But striving for the best possible balance, I mean, I think it has no alternative because the alternative is to let things go, you know, possibly in the worst possible way. So we have no certainty, according also to what we see for the other big agreements, agreements about the nuclear and also, you know, unconventional weapons and many other frameworks, and Simona mentioned exactly that some countries prefer the national interest in specific cases, and so this is going to happen. But this doesn’t mean that we shouldn’t strive to push forward the compliance as much as possible through the, as it has been said, the intergovernmental process and especially the organizations that are the responsibility to promote this.

Rosanna Fanni:
Thank you. Fantastic. We’re already in the midst of the debate. We will now take the in-person questions, maybe also one after each other, and then I also hear from Paula that we have another online question, so we will take that afterwards. But first, if you would like to ask questions, also maybe briefly introduce your name, your affiliation. I see you don’t have a microphone. Maybe you would like to use this one. It’s a bit far away, but if you have this one already.

Audience:
Yes, my name is Julius Endert from DW Academy from Germany, also from Deutsche Welle Broadcaster, so I would like to ask Fernando, from your perspective as a military leader, so does AI make our world safer or not? Because we are coming from the massive retaliation strategy from 25 years ago, and if I see now that we are living in a situation where we may think from a perspective of states or NATO that a pre-emptive strike is better when the other side has massive AI capacities, and also in tactics, when we compare our own capacities on the battlefield, then we also might think, okay, let’s go for a pre-emptive strike, and so that in the end would mean that our world will be more unsecure than it was before because of AI. So, what do you think?

Rosanna Fanni:
Thank you. We’ll just take the other question first, and then we’ll answer together. If you would like to ask a question also now. I see you have a microphone accident. You have to switch it on.

Audience:
So, thank you. I’m Rafael Delis. I’m a scientist in infection biology, and I am concerned about an invisible battlefield, that is biological warfare and non-state actors. Now with AI and deep learning, generating bioweapons has never been easier. So, I’d like to use this forum to ask what should we do to ensure biosecurity and peace.

Rosanna Fanni:
Thank you. Also, a very pressing question for sure, especially in international context. Over to the speakers for their replies. Maybe Fernando, you’d like to go first this time, and then I’ll let Shimona and Pete fill in.

Fernando Giancotti:
The question is very interesting. By the way, this paper I just mentioned, the one of the CNA, talks about this also, whether mass abuse of AI will, let’s say, make things more stable or more unstable. Now, there are good grounds to say that could be either way, and which is like things have always been. It could have been the other way, one way or another. What I think, and I’m very interested in the augmented cognition that AI can bring, what I think that many strategic mistakes that led to wars, if we really get to an excellent degree of cognition, augmented cognition, could be avoided. For example, if you study wars, you see that most of the time, was a strategic miscalculation that led decision makers to start wars that for which they paid a very high price, much more than expected. Had they had lesser fog of war, most likely they would not have done that. The Ukraine case is a perfect case of that. So I think that if we can, and now we cannot, use AI for an actual quantum leap in strategic decision making, then this should be a stabilizing factor for most of the cases. There will be anyway cases, I think, in which this augmented cognition will prompt intervention. And so again, either or. But better to go toward augmented cognition are judging from the blood that has been shed for miscalculation so far.

Rosanna Fanni:
Thank you. Okay, Shimona, Pete, I don’t know who wants to also add something, maybe also to the second question. Shimona, you want to go next? Sure.

Shimona Mohan:
I can add just another point to Fernando’s already very well done answer, and I’ll take the second question. On the question of whether military AI makes the world more unsecure or safer, I think all weapon systems are developed with the singular focus of giving yourself an edge over your adversary, as a result of which, in like a systemic format, it definitely makes the world a lot less safe. But then we also have this idea of what kind of Cobra effect will come about from this. What kind of opposite effect can we see emerging from this? And I think Fernando highlights that very well when he says that this augmented factor might lead to a higher threshold of war, which might eventually then make it safer. But again, these are just optimistic viewpoints at this point, and it remains to be seen how this plays out in the global scenario. On the second question of bio-safety and security as well, it’s very correct to say that AI is something that will contribute a lot to this domain as well. And in fact, it’s already a risk factor that a lot of issue domains and experts are already aware of. So there’s this documentary on Netflix, it’s called Unknown: Killer Robots. And it was chilling in the sense that it showcased a lot of these military application potentials, which we haven’t really explored a lot in the lethal autonomous weapons debate at the intergovernmental level. And one of these risk potential factors was how AI can be used to make a lot more poisons and biotoxins and generate them at an alarming speed, which we as humans at this point are not capable of. And this is even more exacerbated by generative AI applications now. So it’s very right to have the assumption that AI will lead to a lot more of these risk potentials around biosecurity also coming up. But at the same time, anything that is a genius for the wrong things can also be a genius for the good things. So let’s hope that while we have malicious actors or nefarious entities sort of taking over the biosecurity domain from the negative side, there are also scientists and policy researchers and normative actors working on the regulation side to help prevent that from happening, or at least having punitive measures in place before and when it happens. And that’s unfortunately the best I can say for now.

Pete Furlong:
Yeah, and just to add to what my colleagues have said sort of quickly here, I think on the biological weapons side of things, I think one of the concerns that I have is that when you talk about for these types of use cases, if I’m using a generative AI system to develop some sort of drug to help people, that needs to work every time. If I’m developing a biological pathogen for some sort of attack vector, it only needs to work once. And so I think there’s a gap in terms of capabilities that when we talk about trying to address at this stage is very important for us to recognize. And I think that it poses a significant challenge. The good thing I will say is I think that on this issue of like biological weapons is something that people are starting to talk about a little bit more. I know with like the UK Summit for AI Safety, that’s been one of the topics that is gonna be addressed at that. And then just actually to build on what Fernando said earlier, I think when we talk about this idea of improved cognition, I think one of the potential fears that I have with that is that cognition is only as good as your sensing. And so actually my background’s in robotics. And so one of the things in robotics that’s very challenging, right, is that you can have a very good robotics software system, but if your sensors aren’t strong enough and your sensors aren’t able to perceive the information, then that doesn’t really buy you anything. And so I think it’s important for us to consider that these AI software systems exist in a broader system and in a broader ecosystem, and it’s important to consider all those factors as well.

Rosanna Fanni:
Absolutely, thanks a lot. And if I maybe just abuse my moderator role a bit, and also add one tiny point about bio-risks, bio-ethics, bio-tech, so to say, is that I think with COVID, of course, we have seen a complete shift of mindset when we look at institutions, and I can just speak about the EU because that has been the focus of my study, but I think with a lesson learned, so to say, of the COVID pandemic, institutions have, I think, at least woken up and have seen that they need to be prepared much better to tackle those challenges, and those risks also emerging from the rapid spread and also the cross-fertilization between technology and bio. And as you may know also, the commission itself has established an entire new directorate general, so a new DG, which is called HERA, which just deals with pandemic preparedness, but not only pandemic preparedness, but also the future of indeed protecting civilian, yeah, civil people from those risks, also bio-risks, and I have friends that work also in this department, so it’s always very insightful to hear that actually institutions are already thinking about this issue, but I think still there needs to be done so much more, and I think especially also when you look at international institutions, much more foresight, I think, will need to happen, and foresight, as we know, is a tool. It’s not to foresee the future. It’s not to be a storyteller of what actually happens, but to be prepared and to know certain scenarios and to know certain risks, and I think there needs to be much more investment in research and development into foresight, into methodologies, into actually training also civil servants, capacity building, what is also mentioned here a lot in this context, so that eventually institutions themselves can be prepared, and hopefully also then the world as such, so that also especially global South nations are not left behind, because of course if you have more capacities to set up your institutions accordingly, then you will be better prepared, hopefully, but this should not mean that there should be, again, a race between global North and global South countries who arrives there first, and of course often global South countries do not have the appropriate resources to work on those topics, so I think it’s really important that especially international institutions, such as the United Nations, take over more responsibility in this point. Okay, now I talked a lot, as my moderator role is abused, I’ll hand it over to Paula for the online question, I hope the person is still there and also interested and following, so yeah, over to you, Paula.

Paula Gurtler:
Yes, I can confirm that the person who asked the question is still here, interested and engaged, because they have asked a second question. Lloyd, I would like to offer you the floor to ask it yourself if that is possible; otherwise I am happy to read your question aloud.

Rosanna Fanni:
I think it should be possible, if the technical team can enable it; the person should be able to unmute themselves and ask the question out loud.

Audience:
A very good morning to you all, and thank you so much for the session, first and foremost; it is a great pleasure to be part of conversations that will clearly shape the way the world looks at these things. My name is Lloyd, and I am calling in all the way from South Africa. Given the great work everybody on the panel is doing, my first question is: what are the ethical considerations when developing and deploying autonomous weapon systems, and how do we strike a balance between human control and automation? How does the body of CSIJF look at that? Should I just quickly ask the second question as well, Paula? Okay, awesome. The second question is: how can AI be leveraged to reduce civilian casualties and minimize collateral damage in armed conflicts, and what ethical principles should guide this use, if any thought has been put into that as well? So those are my two, well, three main questions from my side. Thank you.

Rosanna Fanni:
Thank you. Thank you, Lloyd, for asking the question and joining us all the way from South Africa. Greetings from Kyoto. I don’t know who wants to answer this question. Pete, do you wanna go first this time?

Pete Furlong:
Yeah, thanks a lot for some great questions. Maybe to take your second question first: there has been a lot of talk about using AI to better target strikes and reduce the likelihood of civilian casualties, so that has been the main way people talk about using AI to reduce those harms. But it is worth raising the flip side: if you can conduct more targeted strikes, we might simply see more strikes, and when you look at the use of drone strikes over the past 20 to 30 years, that may be the reality. In terms of ethical principles for autonomous weapons, getting to those is the goal of the REAIM summit, but for now it is more of a call to action, and I don't think we have anything concrete yet. The UN Convention on Certain Conventional Weapons has tried, and so far largely failed, to address this as well.

Rosanna Fanni:
Thank you. Maybe over to you, Shimona.

Shimona Mohan:
Thank you so much for those questions. These are the cardinal questions we have to ask ourselves when we research military AI and ethics. On your first question, about the balance between automation and ethics, that is a very pertinent question, because it is also something the explainable AI domain is struggling to contend with. The performance-explainability trade-off is well established within the AI and machine learning space: the idea that the more explainable, or in this case the more ethical, your system is, the less performant it will be. So there is this established notion that pits the two values against each other. My personal take is that it is probably a false dichotomy. There is a lot of work going into making sure we do not compromise one aspect of a weapon system in order to fulfill another. In an ideal scenario this would not even be a question: you would always choose the ethical option over the performance factor. But because this is a realistic question, the idea is more about ensuring these systems retain their level of performance while also having ethical, responsible or explainable AI mechanisms attached to them. How well that is ensured is something only a country's military knows, because this kind of information is usually classified or sits behind a number of barriers around weapons testing, and so on. But the aim is definitely not to compromise one for the other, and policy conversations are going in that direction too: we are not policing your capacity to build your weapon systems to their fullest capability, but we would like to make sure these systems are ethical enough to send out into the world without causing undue harm. That is where the conversation stands at this point; as we advance in this field, we will have much more nuanced ideas about where this balance lies. On your second question, Pete summarized it perfectly and I have very little to add, except that in terms of casualties we still see civilian AI systems being employed more than military AI systems. That line is blurred in many places: facial recognition systems are a good example of a dual-use technology, and they have been employed in, for example, the Russia-Ukraine conflict, where soldiers were identified through facial recognition so that their remains could be transferred to either side. So there are many of these, so to speak, civilian AI applications being employed in conflict spaces. Whether or not they minimize civilian casualties is still a larger question we are contending with.
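As a minimal sketch of how the performance-explainability trade-off is usually demonstrated, and not something presented in the session, one can compare a small, auditable model against a larger black-box model on the same data. The dataset, model choices and parameters below are assumptions for illustration only; whether a gap appears at all depends on the data, which is one reason the trade-off can indeed be a false dichotomy in practice.

```python
# Illustrative comparison (assumption, not from the session): a shallow,
# human-readable decision tree versus a harder-to-explain ensemble model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic data standing in for any classification task.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

interpretable = DecisionTreeClassifier(max_depth=3, random_state=0)  # easy to audit
black_box = GradientBoostingClassifier(random_state=0)               # harder to explain

for name, model in [("shallow tree", interpretable), ("gradient boosting", black_box)]:
    model.fit(X_train, y_train)
    print(f"{name:17s} test accuracy: {model.score(X_test, y_test):.3f}")
```

On many datasets the black-box model scores higher, which is the trade-off being described; on others the gap vanishes, which supports the point that the dichotomy is not a law.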

Rosanna Fanni:
Thank you. And the last word to Fernando.

Fernando Giancotti:
Thank you, Lloyd, for your great questions. Very quickly: the research I mentioned before, and by the way I want to thank the Center for Defense Higher Studies for having sponsored it, has a table, so if you go to Rosanna's LinkedIn profile or mine you will find a table with five examples of ethical principles developed by the UK, the USA, Canada, Australia and NATO. They talk basically about human moral responsibility, meaningful human control, justified and overridable uses, just and transparent systems and processes, and reliable AI systems. These are, as we said, principles developed by single nations, and the table is only a kind of summary, because they differ; they are not the same. Now the problem is to get to a more general framework, as we said, which will have to be negotiated, and that will not be easy. On collateral damage, I can speak with direct knowledge, because when I talked about operationalizing international humanitarian law, there is a process, with specific procedures, specific rules and specialized legal advisors, which evaluates compliance and, let's say, clears the commander's decision to engage. In some cases, and this is not classified information, we had drones over an area for 48 hours to observe movements before deciding to engage. So in today's systems this issue is already a high priority. That does not mean there are never mistakes, unfortunately. AI, if it is used with the human in the loop, can help us do better. And I can tell you that at this stage of the game I have heard nobody say they would relinquish the final decision to the machine. I do not think we can. We cannot trust AI to drive a car, which is a comparatively simple task; can we trust it to do much more consequential things?
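As a minimal sketch of the human-in-the-loop arrangement described above, assuming a simplified pipeline in which an AI recommendation can only proceed after a legal compliance review and an explicit commander decision: all names, fields and thresholds here are hypothetical and for illustration only, not a description of any real system.

```python
# Hypothetical sketch of a human-in-the-loop engagement gate (assumption):
# the AI output alone is never sufficient; both human checks must pass.
from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str
    ai_confidence: float          # produced by the AI system
    legal_review_passed: bool     # set by a specialized legal advisor
    commander_approved: bool      # final, non-delegable human decision

def may_engage(rec: Recommendation) -> bool:
    """Returns True only if both human checks have cleared the recommendation."""
    return rec.legal_review_passed and rec.commander_approved

example = Recommendation("obj-42", ai_confidence=0.97,
                         legal_review_passed=True, commander_approved=False)
print(may_engage(example))  # False: high AI confidence does not replace the human decision
```

The design point is simply that the machine's confidence score never appears in the final gate; only the human judgments do.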

Rosanna Fanni:
Yes, okay, thank you very much. Being mindful of the time, we are already three minutes over, so I will conclude the session here. I think we answered some questions, but we have probably added many more during the conversation. So feel free to reach out to the three speakers; you can find them all, I think, on LinkedIn, and they are always more than happy to engage on these topics. Feel free to connect. My former colleague Paula has also already put the link to the study in the chat, the case study that Fernando and I co-authored, so you can retrieve it and read it on your own. And with that, I wish everyone a great rest of your day or evening, wherever you are, and thank you again for your attention.

Speaker statistics

Audience: speech speed 166 words per minute; speech length 436 words; speech time 158 secs
Fernando Giancotti: speech speed 120 words per minute; speech length 2087 words; speech time 1046 secs
Paula Gurtler: speech speed 187 words per minute; speech length 516 words; speech time 166 secs
Pete Furlong: speech speed 167 words per minute; speech length 1261 words; speech time 454 secs
Rosanna Fanni: speech speed 171 words per minute; speech length 2326 words; speech time 817 secs
Shimona Mohan: speech speed 169 words per minute; speech length 2911 words; speech time 1032 secs