WS #184 AI in Warfare – Role of AI in upholding International Law

18 Dec 2024 12:15h - 13:15h

Session at a Glance

Summary

This discussion focused on the role of AI in warfare and its implications for international law and ethics. Experts from various fields explored the challenges and responsibilities associated with AI in military applications.

The speakers emphasized the importance of compliance with international humanitarian law and human rights law in the development and use of AI in warfare. They highlighted the need for a comprehensive global governance framework for AI that addresses both civilian and military applications due to the dual-use nature of the technology.

Key issues discussed included the principles of distinction, proportionality, and necessity in warfare, and how AI systems might struggle to adhere to these principles. The question of liability and accountability for AI actions in conflict situations was raised, with concerns about who bears responsibility when AI systems make mistakes or cause harm.

Ethical considerations such as data bias, privacy concerns, and the need for human oversight in AI decision-making were explored. The speakers stressed the importance of incorporating international law considerations from the early stages of AI development, promoting a “compliance by design” approach.

The discussion also touched on the need for multi-stakeholder engagement, including input from industry, civil society, and academia, in shaping AI governance in the military domain. The speakers called for increased awareness of the current use of AI in conflict situations and the urgent need for effective regulation and oversight.

Overall, the discussion underscored the complex challenges of balancing technological advancement with ethical and legal considerations in the use of AI in warfare, emphasizing the critical importance of maintaining human control and accountability in life-and-death decisions.

Keypoints

Major discussion points:

– The role of AI in warfare and its implications for international law

– Challenges in ensuring AI systems comply with principles of international humanitarian law

– Issues of accountability and liability for AI-enabled weapons systems

– The need for human oversight and control in AI-powered military applications

– Ethical considerations and potential biases in AI systems used in conflict

Overall purpose/goal:

The purpose of this discussion was to explore the complex issues surrounding the use of AI in warfare and military applications, with a focus on how to ensure compliance with international law and ethical principles. The speakers aimed to raise awareness of current challenges and discuss potential governance frameworks and solutions.

Tone:

The tone was primarily serious and academic, reflecting the gravity of the topic. Speakers approached the issues analytically, drawing on their expertise in law, ethics, and technology. There was an underlying sense of urgency about addressing these challenges, but the tone remained measured and constructive throughout. Towards the end, there were some more optimistic notes about the potential for responsible development and use of AI in this domain.

Speakers

– Bea Guevarra: Moderator/Organizer, Netmission.Asia

– Qurra Tul Ain Nisar (Annie): Online moderator, senior year law student, governance and policy analyst, Netmission.Asia

– Yasmin Afina: Representative from United Nations Institute for Disarmament Research (UNIDIR)

– Jimena Sofia Viveros Alvarez: Commissioner at Global Commission on the Responsible Use of AI in the Military Domain

– Anoosha Shaigan: Technology lawyer, human rights expert

– Mohamed Sheikh-Ali: Representative from International Committee of the Red Cross (ICRC)

Additional speaker:

– Abeer Nisar: Civil Society, Asia-Pacific Group

Full session report

The Role of AI in Warfare: Legal, Ethical, and Governance Challenges

This discussion brought together experts from various fields to explore the complex issues surrounding the use of artificial intelligence (AI) in warfare and its implications for international law and ethics. The speakers, including representatives from the United Nations Institute for Disarmament Research (UNIDIR), the Global Commission on the Responsible Use of AI in the Military Domain, and the International Committee of the Red Cross (ICRC), addressed the challenges and responsibilities associated with AI in military applications.

International Law and AI Governance

Yasmin Afina from UNIDIR emphasized that international law should be a core component of AI governance in the military domain. She introduced UNIDIR’s RAISE program (Roundtable for AI, Security and Ethics) and mentioned an upcoming global conference on AI security and ethics. Afina stressed the importance of translating legal requirements into technical specifications for AI systems and advocated for a “compliance by design” approach.
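
To make the “compliance by design” idea concrete, a minimal sketch follows. It is illustrative only, not UNIDIR’s methodology: the class names, thresholds, and numbers are hypothetical, and it shows just one way a legal requirement (here, the principle of distinction) might be expressed as an automated pre-deployment gate.

```python
# Illustrative sketch only: translating a legal requirement (the IHL
# principle of distinction) into an automated pre-deployment test.
# All names, thresholds, and numbers are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class EvaluationResult:
    false_combatant_rate: float  # share of civilians wrongly labelled combatants
    abstention_rate: float       # share of uncertain inputs deferred to a human

# Hypothetical requirements derived from legal review, not a real standard:
MAX_FALSE_COMBATANT_RATE = 0.001
MIN_ABSTENTION_UNDER_UNCERTAINTY = 0.95

def passes_distinction_gate(result: EvaluationResult) -> bool:
    """Return True only if the model meets the assumed distinction requirement."""
    return (result.false_combatant_rate <= MAX_FALSE_COMBATANT_RATE
            and result.abstention_rate >= MIN_ABSTENTION_UNDER_UNCERTAINTY)

if __name__ == "__main__":
    # Made-up evaluation numbers, purely for demonstration.
    result = EvaluationResult(false_combatant_rate=0.004, abstention_rate=0.90)
    if not passes_distinction_gate(result):
        print("Deployment blocked: distinction requirement not met.")
```

The point of such a gate is procedural rather than technical: the system cannot move past testing and evaluation unless the legally derived thresholds are met, making compliance a precondition rather than an afterthought.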

Jimena Sofia Viveros Alvarez, Commissioner at the Global Commission on the Responsible Use of AI in the Military Domain, argued for a broader, coherent global AI governance framework addressing both civilian and military applications. She highlighted the transfer of discussions from Group of Governmental Experts (GGEs) to the UN General Assembly and called for binding treaties aligned with international law to govern AI use in warfare.

Anoosha Shaigan, a technology lawyer with a background in human rights law, discussed specific legal issues such as liability, command responsibility, and developer liability in the context of AI in warfare. She emphasized the importance of international humanitarian law principles like distinction, proportionality, and necessity. Shaigan also mentioned the Outer Space Treaty in relation to AI-guided satellites and discussed proposals for an international military AI tribunal, while questioning whether additional forums are needed alongside existing international courts.

Ethical Considerations and Challenges

The discussion delved into several ethical challenges posed by AI in warfare. Anoosha Shaigan raised concerns about data bias and model drift in AI systems, using the example of potentially discriminatory targeting based on appearance. She also addressed the challenges posed by generative AI, deep fakes, and disinformation in military contexts.
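
To illustrate the kind of auditing described above, the sketch below checks a hypothetical labelled dataset for skew across demographic groups and tests whether live inputs have drifted away from the training distribution. The column names, synthetic data, and significance threshold are assumptions for demonstration, not drawn from any real system.

```python
# Illustrative sketch: auditing a hypothetical dataset for label bias and
# detecting distribution drift. Data, columns, and thresholds are assumed.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

def audit_label_bias(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Rate of positive labels per group; large disparities flag biased data."""
    return df.groupby(group_col)[label_col].mean()

def detect_drift(train_feature: pd.Series, live_feature: pd.Series,
                 alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test: have live inputs shifted away
    from the distribution the model was trained on?"""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha  # True means statistically significant drift

if __name__ == "__main__":
    # Synthetic example data, purely for demonstration.
    df = pd.DataFrame({"group": ["a", "a", "b", "b", "b", "a"],
                       "label": [1, 1, 0, 0, 1, 1]})
    print(audit_label_bias(df, "group", "label"))

    rng = np.random.default_rng(0)
    train = pd.Series(rng.normal(0.0, 1.0, 500))
    live = pd.Series(rng.normal(0.5, 1.0, 500))  # shifted mean: drift expected
    print("drift detected:", detect_drift(train, live))
```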

Privacy concerns in conflict zones were addressed, with speakers noting the challenge of balancing military needs with civilian privacy rights when deploying AI technologies. The concept of explainable AI for autonomous weapons systems was introduced, emphasizing the importance of human understanding and oversight of AI decision-making processes in warfare.
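
The explainability concept can be illustrated with a generic post-hoc technique such as permutation importance, sketched below on toy data with scikit-learn. It only shows how the inputs of an otherwise opaque model can be ranked by their influence on its outputs; it does not describe any fielded system.

```python
# Illustrative sketch: ranking the inputs of a black-box model by influence,
# using permutation importance on synthetic toy data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy degrades;
# the features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Attribution methods like this support, but do not replace, the human understanding and oversight the speakers called for: they indicate what a model relied on, not whether that reliance was lawful.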

Accountability and Human Control

A significant point of agreement among the speakers was the necessity of maintaining human control and accountability in AI-powered warfare systems. Mohamed Sheikh-Ali from the ICRC stressed that human oversight and control are essential for weapons systems, particularly for life-and-death decisions. This view was strongly supported by other speakers, who emphasized the need for human responsibility and accountability in the use of AI in military contexts.
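
What human oversight and control can mean at the software level is sketched schematically below: the system may produce a recommendation, but any consequential action is gated behind an explicit, logged human decision. This is a conceptual toy under assumed names, not a representation of any real weapon-system interface.

```python
# Conceptual toy only: an action pipeline that cannot proceed without an
# explicit, logged human decision. All names here are assumptions.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oversight")

@dataclass
class Recommendation:
    target_id: str
    confidence: float
    rationale: str  # explanation surfaced to the human operator

def request_human_authorization(rec: Recommendation) -> bool:
    """The system may recommend; only a human may authorize."""
    log.info("Recommendation for %s (confidence %.2f): %s",
             rec.target_id, rec.confidence, rec.rationale)
    answer = input("Authorize? Type 'YES' to confirm, anything else aborts: ")
    authorized = answer == "YES"
    log.info("Human decision for %s: %s", rec.target_id,
             "AUTHORIZED" if authorized else "ABORTED")
    return authorized
```

The logging matters as much as the prompt: a persistent record of who authorized what, and on the basis of which machine rationale, is what makes the accountability discussed below traceable after the fact.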

The discussion touched on the complex issue of liability for AI actions in warfare. Anoosha Shaigan highlighted the need to clarify who bears responsibility when AI systems make mistakes or cause harm, whether it be the operator, commander, developer, or the state itself.

Multi-stakeholder Engagement and Corporate Responsibility

Yasmin Afina introduced the importance of multi-stakeholder engagement in shaping AI governance in the military domain. This approach calls for input from industry, civil society, and academia, in addition to government actors.

The role of private sector companies developing AI technologies for military use was emphasized by both Anoosha Shaigan and Mohamed Sheikh-Ali. They agreed on the need to engage tech companies from the design stage and ensure corporate accountability for military AI suppliers. Sheikh-Ali specifically mentioned the ICRC’s engagement with technology companies in Silicon Valley and China.

Future Developments and Recommendations

Looking towards the future, the speakers offered several recommendations:

1. Develop binding treaties aligned with international law to govern AI use in warfare (Jimena Sofia Viveros Alvarez)

2. Create specific standards for military AI that incorporate legal and ethical considerations (Anoosha Shaigan)

3. Engage technology companies from the early stages of AI development for military applications (Mohamed Sheikh-Ali)

4. Implement a “compliance by design” approach, incorporating international law considerations from the outset of AI system development (Yasmin Afina)

5. Examine proposals for an international military AI tribunal to address legal issues arising from AI use in warfare (Anoosha Shaigan)

Conclusion

The discussion underscored the complex challenges of balancing technological advancement with ethical and legal considerations in the use of AI in warfare. While there was a high level of consensus on core principles, such as the importance of international law and human control, the speakers differed in their specific approaches and areas of emphasis. This reflects the multifaceted nature of the issue and highlights the need for continued dialogue and collaboration among various stakeholders to develop comprehensive and effective governance frameworks for AI in warfare.

The urgency of addressing these challenges was evident throughout the discussion, as speakers called for increased awareness of the current use of AI in conflict situations and the pressing need for effective regulation and oversight. As AI technologies continue to advance, the international community faces the critical task of ensuring that their use in warfare remains within the bounds of law, ethics, and human control.

Session Transcript

Bea Guevarra: First of all, thank you for joining the session, AI in Warfare – Role of AI in Upholding International Law, workshop number 184. In this session we are going to explore the sensitive nature of the AI in warfare domain in a format that fosters an open, frank discussion, a 60-minute roundtable, but let’s see how we can manage the time anyway. So I will not consume too much time on that. We now have our online moderator, Annie, and also one of the organizers, Abeer, with us today. So I will pass the floor to Annie to introduce the organizers. Just a quick introduction. Welcome, everyone. We can’t hear you, Annie. Is there any technical issue? We can’t hear you. You can hear me? Yeah, I can hear you right now.

Qurra Tul Ain Nisar: Amazing. Then, yeah, so I was saying that it’s an immense pleasure to have you all with us today. I am Qurra Tul Ain Nisar, you can call me Annie. I am a senior year law student, as well as a governance and policy analyst. I have with me the other organizers, Bea and Abeer Nisar. So, firstly, I want to quickly thank them for their constant support and immense help in finding all the speakers and experts on this topic. I’m very grateful for your insight and to have you on board. We are all aware that AI has reshaped how we are living in this world, and warfare is no separate aspect of that. So, I would love it if we can quickly start the session because, apologies for that, we have already lost a lot of time. So, back to you, Bea, for introducing our speakers.

Bea Guevarra: Thank you, Annie. In this session, we are going to have three speakers: Ms. Anoosha from civil society, Yasmin Afina from an intergovernmental organization, and also Ms. Jimena Sofia, who is joining us on site here. So, as an opening of this session, I would like to ask the speakers about their own perspectives on the topic of AI in warfare. I will ask Yasmin: how do you see the future of AI and warfare?

Yasmin Afina: Yeah, perfect. Hi, thank you, everyone. It’s nice to meet you. My name is Yasmin Afina from the United Nations Institute for Disarmament Research, or UNIDIR. Thank you so much to the organizers of this panel for inviting me today, and I’m so sorry for not joining you in Riyadh in person; due to personal circumstances I could not travel in time for the workshop. So I know that you wanted me to speak a little bit about the future of AI in warfare, but if you would allow me, I might just share a few slides, if I may. Is it correct? Is it okay? So, let me just… Yeah, I hope that you can see my screen. Perfect. So I know that you wanted me to speak about the role of AI in warfare and its role in upholding international law, specifically from a responsible AI perspective. But please allow me to twist the framing a little bit and instead look at international law as a key and central facet of responsible AI in the military domain. So in the first half of 2024, UNIDIR took part in regional consultations with states and experts in Asia Pacific, the Middle East and Central Asia, Africa, Europe, Latin America, and the Caribbean. And based on these consultations, we have identified and established a number of facets of responsible AI in the military domain, based on what states have shared during these consultations. One of them relates to compliance with national and international law, as you can see in the top right of the diagram. And in fact, the overwhelming majority of states across regions place compliance with international law as a central component of their governance approaches to AI in the military domain and wider security domains. And there is this shared sentiment that international law is an important framework that must be upheld throughout the lifecycle of AI technologies meant for deployment and use in defense and security, and thus including in the context of warfare. So international law considerations must be considered from the earliest stages, from the design, development, testing, and evaluation, which would require efforts to translate international law obligations into technical requirements in order to frame and shape the pre-deployment stages of these technologies in such a way that they will somewhat be compliant by design. And I’ll get back to that later in my concluding remarks. In addition, international law, and in particular international humanitarian law and international human rights law, must inform or even shape and frame procurement processes, as many states are increasingly considering purchasing AI-enabled capabilities. So this concerns not only states that are developing AI, but also those that are purchasing it. And from a policy standpoint, however, it’s also important to note that while there is this overall shared sentiment that international law is important, it does not mean that states approach it in a uniform way. There are nuances across regions in states’ approaches to AI in the military domain and the applicability of international law. So for example, states in Latin America and the Caribbean generally dedicate more attention and efforts to foster compliance with and uphold international human rights law. And this approach is somewhat reflective of the regional security landscape, where transnational efforts at combating organized crime prevail, and in light of international human rights law’s applicability both in and outside of conflict.
And while of course states in all the regions acknowledge the importance of international human rights law, international humanitarian law tends to overwhelmingly dominate the policies and discourse of states in other regions. Although our findings were also such that the African region would dedicate more attention to international human rights law, particularly within the framework of the African Charter on Human and Peoples’ Rights. And there’s more of these findings in the report that I launched back in September, which I invite you to download and read from UNIDIR’s website by following the QR code on the slide or by going to unidir.org slash kaleidoscope AI. So now that we’ve established that states around the world see international law and compliance as an important component of responsible AI in the military domain, I wanted to add another layer to our discussions: the role of the multi-stakeholder community. And in fact, in the report that I previously mentioned, one of the other key areas of nuanced convergence that we have identified is the importance of multi-stakeholder engagement. States, in fact, generally recognize the value of multi-stakeholder and cross-sectoral engagement to promote responsible AI in the military domain, but states generally disagree on how such engagement should be conducted. And so UNIDIR, in our capacity as an independent research institute within the UN ecosystem, and with a mandate of informing member states, launched earlier this year in March a program of work called the Roundtable for AI, Security and Ethics, or RAISE, in partnership with Microsoft. And we’ve been engaging very closely with a group of industry representatives, including big tech and startups, and consultancy organizations, civil society, and academics. And we basically asked them: what are the main themes that should be prioritized in the context of AI governance in security and defense? And as a small parenthesis, I just wanted to note that as part of the RAISE program of work, UNIDIR will be holding the inaugural global conference on AI security and ethics on the 27th and 28th of March in Geneva. It’s open to all. We’ll soon be issuing a call for abstracts for you to present your insights to the international diplomatic community in Geneva at the UN. So please do mark your calendars and let me know if you’d like to be kept in the loop. So coming back to RAISE, the group has identified six themes that must be prioritized for the governance of AI in security and defense. And across all of the six themes, international law came across as a recurrent pattern. So for example, the second priority theme was trust building. And one of the key recommendations put forward was that in order to enable this trust building, there’s a need to clarify the interpretation of applicable laws. And so for this group, states should develop clear national positions on how to interpret and apply international law in the context of AI applications in the military domain, thus ultimately contributing to building this trust between states. And another example is the third priority theme, which pertains to unpacking the human element in the development, testing, deployment, and use of AI systems in the military domain. And so clarifying how international law applies can help clarify what level of human element is required at each stage of the lifecycle of AI technologies in the military domain, and on what basis in international law.
So again, all of this can be found in the report that we’ve published on UNIDIR’s website at unidir.org slash governance AI, or you can do so by scanning the QR code on the slide. And so finally, to conclude, I wanted to circle back to something that I mentioned earlier, on how all of these initiatives can contribute to efforts towards compliance by design in the development, testing, and evaluation of AI technologies in the military domain, while also acknowledging, addressing, and mitigating some of the risks that these technologies can present with regards to international law. So anecdotally, last month I submitted my PhD manuscript specifically looking at how international humanitarian law considerations should frame the development, testing, and evaluation of AI technologies for military targeting. So anything that is going on before the deployment in the battlefield. And the thesis has been drafted with the assumption and acceptance, personally, not on UNIDIR’s side, that AI in the military domain is happening already, and without, of course, prejudice to possible instruments in the future that may prohibit and outlaw some applications. But at this stage, it’s important to dedicate efforts and research towards ensuring that whatever technology will come out of the lab for warfare has been developed with compliance in mind instead of as an afterthought. So earlier I mentioned the need to translate legal requirements into technical requirements. One example that I looked at in my thesis is the use of proxy data for training and testing AI technologies. And I argued that while proxy data can to a certain extent be necessary, by virtue of the rule of precautions, due to the messy and uncertain nature of warfare, it cannot be separated from direct indicators, which instead should be seen as a natural part of the ecosystem of intelligence needed for military decision-making. So all of this is to say that with the right efforts, dedicated resources, and political will, compliance with international law should in principle be at the heart of the development of AI technologies for the military domain. And this is not about coding international law into algorithms, but rather about identifying and prioritizing practical measures for the implementation of international law, and ensuring that the deployment and use of AI in warfare upholds international law from the outset and does not jeopardize it by remaining an afterthought. Because at some point you just lose the right to say oops. And on that note, thank you very much. And I look forward to our Q&A.

Bea Guevarra: Thank you, Yasmin. And I would like to ask another speaker, who is joining us on site: what do you think of international law and the future of AI in warfare? Could you also please give an insight based on your experience as well? Hello, thank you. Can you all hear me?

Jimena Sofia Viveros Alvarez: Perfect. Well, first of all, thank you to the organizers for inviting me. I don’t like to circumscribe this conversation just to warfare, because these technologies are being used to attack civilians also during peacetime. So I like to call it the peace and security spectrum of things. And also because they’re not only used by military actors, but also civilian actors, both state and non-state. So state actors that are civilian law enforcement or border controls. Whereas for non-state actors, it depends on the context. As Yasmin very well pointed out, for example, in Latin America, where I’m from, organized crime is a big threat. In other regions, it’s terrorism. In other regions, it’s mercenaries. So all of these actors are using the same technology. So it’s important to acknowledge the different implications and the different treatment under international law of each one of these. Because when we’re talking about AI in the peace and security domains, we are talking about many different sets of rules, right? So we have obviously IHL, international humanitarian law. We have international human rights law, which applies to both wartime and peacetime, and also to civilian actors, and it also involves state responsibility. So it also comes to public international law, which deals with this, but which also deals with jus ad bellum. So that’s the use of force, the right to the use of force or self-defense type of considerations that can stem from the use of these technologies. We also have international criminal law. We also have national and regional regulations and laws around different types of liability modes and compliance and procurement and all the different mechanisms that apply to the entire life chain of these technologies. So it is quite a broad spectrum to talk about the future of international law, because we’re also seeing it in the present. It’s not just a future situation, especially right now when we’re living in a world where international law is blatantly violated, and with complete impunity, unfortunately. So we’re living in a world of voluntarism where compliance seems to be optional, and that’s really not how it should be, because we’re seeing very dire consequences for civilians in different types of contexts around the world. So what we need to do is, everyone, advocate, promote, and foster a coherent global AI governance framework. And I’m saying AI in general because, by its dual-use nature, we cannot really divide it into civilian and military, precisely because of the distinction that I made at the beginning: the convergence of actors, the convergence of moments of use and types of use, and so on. So we all really need to strive for this global AI governance framework to materialize, to be binding, and to have the correct mechanisms for implementation, because that will be crucial. And this obviously requires enforcement mechanisms, which, you know, is going to be even harder, but we need to be ambitious, because this is a very ambitious goal: to preserve international peace and security at this time. So in the current governance landscape, in this particular domain, we obviously have the GGEs, the Groups of Governmental Experts in Geneva, under the Convention on Certain Conventional Weapons, which I think is a little bit ironic, because these are the least conventional weapons, autonomous weapons.
We also have REAIM, with the Global Commission on the Responsible Use of AI in the Military Domain, where I’m a commissioner. We also have RAISE, as Yasmin mentioned, and we’re now seeing the transfer from the GGEs to the General Assembly, with resolutions that are coming out at the initiative of different states, for example the Netherlands and Korea, and Austria, which are leading this conversation, among others of course. So these are very welcome steps that we are building towards, but we still need to create a lot more awareness about the fact that these are situations that are going on right now; they’re not future, eventual possibilities. And we also need to be very mindful, because there is a tendency to try to separate out the alleged pros of these technologies, like, okay, they will be more precise, they will be more accurate, there will be less bias, but, you know, we know better. That’s why it needs to be all comprehensive within the same global governance framework for AI, because we all know the problems with AI itself, right? The bias, the brittleness, the hallucinations, the misalignment, et cetera. So those two cannot be dissociated when we’re looking at what the actual consequences and effects of the use of these technologies in this space will be. And also the differentiation between offense and defense capabilities is completely illusory, because the same technology is just interchangeably used. So any type of defense is an offense in itself. That’s something we should be mindful of when we’re having this conversation. And I will leave it there for now. And again, also looking forward to the Q&A. Thank you.

Bea Guevarra: Thank you, Ms. Jimena Sofia. It is very insightful to understand the dynamics of the emerging technology and the challenges that we are facing. So I would also like to ask Ms. Anoosha, who is joining us online. From the civil society perspective, how do you see AI in warfare, and how could we facilitate that collaboration and make sure there is responsibility in the conversation among the community? So Anoosha, are you going to share the screen? Is there any PowerPoint? No? Yeah, go ahead, please. Ms. Anoosha, can you share your screen and have the slides on the screen, if that’s possible? Sure, sure. Thank you, thank you. Is that, does that work? Yes, okay.

Anoosha Shaigan: So thank you everyone for organizing this, and thank you for having me; it’s such a privilege to be here and talk about such an important issue. I’m quickly going to go over some of the points, and the slides are just bullet points of some of the issues that I would like to touch upon, so you can follow along. So thank you to the speakers for setting the stage for international collaboration; that is of course the first and foremost thing that we need to do. But let’s also look at some of the very specific issues. I am a technology lawyer by profession. I started my career in human rights, in international human rights law, working on treaties, and I was part of the team responsible for bringing the first seven core human rights treaties to Pakistan. So we got the government to sign these treaties and then we started working on them. So my association goes a long way back, to when the SDGs were called MDGs. We’ve come a long way since then, and I’d like to touch upon some very specific legal issues. The aim is not to give you more anxiety about these issues but maybe to help you form an opinion, because as civil society experts, as lawyers, as development professionals, your opinion matters as well. This is a very new area, and as we go into the future, digital technologies become more and more decentralised, which means that governments have to rely on civil society, academia, the development sector, and the private sector, and not just technology companies, to be able to start forming these principles and guidelines moving forward. So let me just move on. I’m going to touch upon AI and international humanitarian law, some of the very specific issues, and then I’ll go into some of the ethical considerations as well. So when we talk about the key principles of international humanitarian law, they can be found in UN principles, the Geneva Conventions, the ICRC’s handbook. If you’re a person of faith, they could be part of your religion as well. And they just make common sense, right? So there are the principles of distinction, proportionality, and necessity. I’m going to talk about proportionality and necessity first, since we might be more familiar with those. So specifically talking about Gaza: are the military responses towards the civilian population proportional? Are they excessive? These are some of the things that we’ve already been talking about for the past year, so you might be more familiar with this. Do you think autonomous AI systems, or weapons, or autonomous drones, or any other kind of robots, would be able to make these kinds of proportional responses? That is something to consider. As we have seen in the past year, they have not been doing that. Then there’s the principle of necessity. It obviously asks whether a military response is necessary. When it comes to AI, there have been calls to simulate certain situations first and then see and verify whether they warrant an actual military response. So these are some of the principles that you might be familiar with. As far as distinction is concerned, there are laws available at the international level, perhaps not at the state or domestic level, where states are supposed to distinguish between civilian and military targets, or civilian and military figures or entities, and that has somehow translated into applying to AI targeting as well.
But do you think AI will be able to make that distinction? Let’s hold that thought, and we’ll come back to it when we discuss ethical issues. So liability is of course a very important issue that we’ve seen with autonomous weapons. If an AI shoots somebody down, or strikes what was perhaps a civilian target, perhaps a hospital or a school, who is going to bear responsibility for that? Will that be the person who was operating the AI? Will that be the AI itself? Will that be the commander of the person, or the agent representing a certain team, or will that be an entire state? So command responsibility: there are rules around that, but they have to be applied in the context of AI. State responsibility, of course, means that the state can be held responsible for the actions of its agents. I believe the principles are laid down under ARSIWA, the Articles on Responsibility of States for Internationally Wrongful Acts. Then there’s developer liability: somebody who developed an AI system that did not work and is now being reviewed. It will go back to whether they followed all the protocols, whether they followed government guidelines or international humanitarian guidelines, whether they tested these systems, whether they removed glitches, and whether they made sure that all the laws were followed. And for those more familiar with how legal compliance works in highly regulated industries like nuclear power plants, or especially those working in the energy sector or in climate, this will be familiar: trainers can also be held responsible if the training was inadequate. If you didn’t document things properly or if you did not impart adequate training, your trainers could be held responsible as well. If somebody did not know how to use an AI system, their trainers could be held responsible too. By responsible and accountable and liable, we also mean that this would include monetary compensation towards the victims and their families. Now, there have been calls for developing an international military AI tribunal in particular. Coming from Pakistan, we do not believe in military trials in principle, but just to let you know that this is a form of accountability. But do we need additional forums when we already have international courts and these other international tribunals? How would they impact states individually? Would states have to sign treaties? Would they have to incorporate them into their domestic laws? These are some of the considerations. Then, of course, there is an area that I specialised in, so I really wanted to touch upon that as well. There are laws around the Outer Space Treaty as well, which mandate peaceful use of space and sharing resources
So these kinds of stereotypes, if AI picks up on these elements, it can be very indiscriminate in the actions that these autonomous or AI-based weapons take, especially during military action. So the data sets need to be checked for bias. They need to be audited. There are algorithmic checks as well that you can fix those as well. But constant and regular oversight is very necessary. Then, of course, there’s the issue of model drift. Model drift is when you overstuff and overfit your AI so much with data that it starts behaving unpredictably. So when people say, you know, I like a child or a person and you keep feeding it information and training it and one day it will start you know making better decisions and wider decisions. Personally I don’t think that’s quite accurate because at the end of the day it’s still a machine, it’s still you know it’s something technical or technological and if you look at you know for example the language of some of the AIs, coding language, there could be you know zeros and ones which means they’re very black and white, they’re very exact and very specific so over stuffing it with data can actually lead to unpredictable outcomes where it just becomes so confusing that you don’t know how it’s going to act and then who takes responsibility of course you know that’s the issue that we’ve been discussing so audits and monitoring are important. Then of course there’s the issue of privacy especially in conflict zones, you know surveillance is an issue, surveillance of civilian population is an issue, you know facial recognition software is that you know allowed during a conflict especially when you’re trying to you know target civilians or pinpoint somebody’s exact identity or you know trying to for example you know we’ve seen in the Gaza conflict that very very specific people individually have also been targeted like doctors, journalists, so you know the privacy becomes very very crucial in such scenarios so of course we need like a solution for this perhaps on the lines of the GDPR as in Europe but do we need another international regulation or can we come up with like a general framework or some specific standards that all countries that are perhaps part of the United Nations must follow without having to sign additional treaties or pass additional laws within their countries. This is another issue. Then, of course, autonomy versus human side is a concern as well. Human side or additional or added oversight is, of course, important when AI is being used in conflict zones. I think one of the areas that we could be following for development around autonomous robots or technologies basically comes from the autonomous vehicle economy. When we look at some of the laws that have been, some of the cases that have been going on around autonomous vehicles, those cases are going to help us determine some very minute details and very specific issues, particular to autonomous vehicles and robots. They could apply to drones as well, for example, or other autonomous weapons that have been used in conflict zones. Again, human in the loop is another solution where you always make sure that there’s human oversight present when an autonomous system is being used. Then, of course, you can create ethics committees as well, which will constantly monitor these developments. Corporate accountability is, of course, important just like we require corporations to submit transparency reports on how they’re doing on climate change. 
We could ask the same, sorry, not from different militaries of different states, but from military suppliers or military contractors who are part of the corporate world: to submit these kinds of transparency reports. And then of course, if they lack training, if they didn’t follow certain laws, or if they didn’t follow certain standards, they could be held liable for those as well. There is, of course, a proposition to develop certain standards for military AI, which I think is still a very nascent area, still developing, so it could be interesting to follow these developments. And at the end, I would like to touch upon generative AI as well. We’ve seen a lot of issues surface related to deep fakes and disinformation. With generative AI, again, we should be able to use detection and prevention tools to spot it, especially on social media, for example, or especially for people operating AI-based weapons or AI systems within conflict zones. So developing these tools is very important to counter disinformation and deep fakes, because at the end of the day our ultimate goal is to save human lives. I think this is what we’re all here for. We’re not here to talk about how we’re going to make profits or make money. The ultimate goal is to prevent civilian casualties and have this dignified regard for human life. So with that, I would like to end the presentation, and I’m open to questions. I look forward to them. Thank you.

Bea Guevarra: Thank you, Ms. Anoosha. Your presentation is very informative, talking about data bias and model drift, privacy, autonomy versus human oversight, and also corporate accountability, as well as the adequate use of AI; very informative and insightful. I note that we have to end this session very soon, since we are running a bit late due to the technical error, but before moving to the open floor, I see that Mr. Mohamed is here in the room with us, so I would also like to invite him to give any comment if he has one.

Mohamed Sheikh-Ali: Thank you very much to the organisers and the Kingdom for hosting this. Being from the ICRC, and acknowledging that we are running late, I will just focus on a few things and will not duplicate what has been said by the other colleagues. AI and autonomous weapons should comply with and respect international humanitarian law: proportionality, distinction, and precaution. Can an AI-controlled or autonomous weapon that has been tasked to execute an operation abort the operation autonomously if it sees a child, or a civilian, or a fighter who is no longer capable of participating in the conflict? Because a soldier who was in a frontline operation, who was participating in the conflict, once they are injured and no longer part of the conflict, they are protected under international humanitarian law. So will these autonomous systems comply with those basic principles of international humanitarian law? It is a huge concern that we have. And therefore, at the ICRC today, we have a specialist in Silicon Valley and a delegation in China that are discussing with the technology companies that are contributing to the development of these systems and having this kind of conversation. I absolutely agree with the notion of engaging with the tech companies, and those who are developing these technologies, from the design stage. That is quite key. We are also calling for human oversight and control over any kind of weapon. A decision to kill, a life-and-death decision, should not be made by a tool or by an algorithm. At least the final engagement, or the discharge of the munition, should be controlled by a human being. That’s quite important. We are also convening, as some of my colleagues have already mentioned, discussions and dialogue on how to incorporate and integrate international humanitarian law into the development of autonomous weapons and artificial-intelligence-controlled warfare. And regulations are needed, but international humanitarian law applies in any kind of warfare, whether it is carried out by a human or by an autonomous weapon. So that’s very clear. Where we have to seek clarity is who assumes responsibility: is it the developer? Is it the commander? That has to be clarified, and therefore we are convening discussions. Just last year, at the end of 2023, there were two workshops in Geneva with experts discussing these kinds of issues, and the recommendations and reports are out there on our website. Treaties that are binding and that are aligned with international humanitarian law are actually necessary. And as was mentioned by one of the colleagues, I don’t remember which, ethics, dignity, and the preservation of human life is the ultimate goal, and that’s what international humanitarian law is eventually about. Thank you very much.

Bea Guevarra: Thank you, Mr. Mohamed. For the next segment, I want to pass to our online moderator, Annie. Annie, the floor is yours. Just a reminder that we only have about nine minutes left.

Qurra Tul AIn Nisar: Oh that’s okay right thank you so much Mr. Mohamed Sheikh Ali and I completely don’t want to forget Mr. Neem’s efforts in bringing Mohamed Ali on board with us. Thank you so much. So I love how this session is not only about identifying problems but also about practical solutions and exploring how exactly AI can be used as a force of good. So I would really quickly want to you know move our discussion towards our last policy question because you guys have effectively answered all other policy questions in your presentations. So just just quickly bring and bringing the discussion towards it. Can explainable AI technologies be effectively applied to autonomous weapon system to ensure human oversight and understanding of how targeting decisions are made. So I understand how there was compliance by design discussed and also biases in AI systems discussed by Anusha and also Ms. Yasmine. I would really appreciate if any of the speakers present on-site or online would like to take this question up.

Yasmin Afina: Sure, I mean, I’m happy to have a first stab at it, and colleagues, please feel free to complement. Just off the top of my head: first of all, thank you for the very interesting question. I think it’s very pertinent; it’s something that diplomats in Geneva are grappling with every day. Based on what I’ve seen and what I’ve heard engaging with stakeholders, ranging from state representatives to civil society and industry, I think explainable AI generally is an issue that the AI community is trying to grapple with, but in the military domain there are quite a few implications. Especially with regard to IHL, you have the legal duty to investigate violations of international law, and then when you look at machine-learning-based systems you have a black box: you basically know what the input is, you know the output, but you don’t know how it went from the input to the output. So, for example, why the system specifically recommended targeting this particular person. Then there may be issues as to how, if this output led to a potential violation of international humanitarian law, you could effectively conduct an investigation when the system has a black box. But at the same time, there are growing research efforts trying to circumvent that problem. In the military domain, you also have to understand that when the commander authorizes the use of force, it is held to such a high standard, or supposed to be held to a high standard, that the commander will always maintain some level of responsibility over their decision, even when the commander decides to use and follow the recommendation of a weapon system or of an AI-based decision support system, and even when the commander does not know how the system came up with the output. And then there are also recommendations related to best practices with regard to documentation in the military domain. I think that’s something that, even when not using AI, is increasingly being looked at. So there is growing research into this, but I don’t think there is any silver bullet for this question. There is research that is ongoing, but again, it depends on how many resources are being dedicated to it, and how much political willingness there is to dedicate those resources. Thank you.

Qurra Tul Ain Nisar: Thank you so much, Ms. Yasmin. And I would love it if the on-site speakers could also add their insights on this very question.

Mohamed Sheikh-Ali: All I can add is that, for now, until the technology is advanced enough, and in our opinion, from the international humanitarian law perspective, we will never get there, human responsibility and accountability are necessary. I think there’s a technical issue, but all I’m saying is that human control, oversight, and accountability are ultimately necessary, even if the technology is very advanced. So that’s our position at the moment, and there’s an expert here.

Jimena Sofia Viveros Alvarez: Well, I agree entirely. I believe that unless one can completely understand and control the entirety of the effects of a technology, one should not be using it, especially when human lives are at stake. And I also don’t like the term LAWS, lethal autonomous weapons systems, because it’s not only about the lethal. Other types of harm, physical harm or harm to integrity, or targeting for detention or for other purposes, are also quite harmful and should also be encompassed in the regulation of these technologies and their uses. And I would like to end with a quote from Amina Mohammed, Deputy Secretary-General of the United Nations. She said this at the Arab Forum that was held in March in Beirut: there can be no sustainable development without peace. So that’s something we should keep in mind, because without peace, we really have nothing. Thank you.

Qurra Tul Ain Nisar: Amazing. Thank you so much to the on-site speakers as well as the online ones. I understand we still have Ms. Anoosha left to answer this question, but we quickly need to wrap up. I love how, in a very short time, we covered a lot of topics: accountability, global coordination and collaboration; also, the compliance by design part has got to be my favorite. So, we have had some great key takeaways from this session, and I really hope it inspires further action and discussion, because we need it at this time. The report of this session will be shared right away. And I would quickly request all the speakers present on site to maybe come closer to the screen, so we can have a group photo together. Feel free to come, and we can pin Ms. Yasmin and Ms. Anoosha on the screen. How about a quick group photo?

Bea Guevarra: Next, please pin Abeer, who is also one of our organizing members. Unfortunately, we have to end this session; it is time now, so thank you, everyone. I know that you have comments and questions; maybe you can approach the speakers who are on site later. Sorry about that. It has also been Abeer’s hour; she’s also one of the organizers. Could you please have Abeer pinned on the screen? Thank you. Okay. Hi, speakers, feel free to join if you would like to. Thank you. There will be no one in the picture. I guess, Annie, you can take a quick screenshot of us, and then we’re good to leave the session today. I think you’re on mute, Annie. So I only have the online participants, including Phil, with me, so I’m taking the screenshot, guys. Hold your best poses. Three, two, and one. Got it, got it. Thank you so much. Thank you. Have a good day. Thank you so much. Have a good day. Bye.

Y

Yasmin Afina

Speech speed

170 words per minute

Speech length

2017 words

Speech time

711 seconds

International law as central component of AI governance

Explanation

Yasmin Afina argues that international law is a crucial element in governing AI in the military domain. She emphasizes that states across regions view compliance with international law as a central component of their approaches to AI governance in defense and security.

Evidence

Findings from regional consultations with states and experts in Asia Pacific, the Middle East and Central Asia, Africa, Europe, Latin America, and the Caribbean.

Major Discussion Point

International Law and AI in Warfare

Agreed with

Jimena Sofia Viveros Alvarez

Anoosha Shaigan

Mohamed Sheikh-Ali

Agreed on

Importance of international law in AI governance

Differed with

Jimena Sofia Viveros Alvarez

Differed on

Scope of AI governance in warfare

Translating legal requirements into technical requirements

Explanation

Afina suggests that international law considerations should be translated into technical requirements for AI technologies in the military domain. This approach aims to ensure compliance with international law from the early stages of development and testing.

Evidence

Reference to her PhD research on framing the development, testing, and evaluation of AI technologies for military targeting based on international humanitarian law considerations.

Major Discussion Point

Future Developments and Recommendations

Explainable AI for autonomous weapons systems

Explanation

Afina discusses the challenges of explainable AI in the military domain, particularly for machine learning-based systems with a ‘black box’ nature. She highlights the implications for investigating potential violations of international law when the decision-making process of AI systems is not transparent.

Evidence

Mention of ongoing research efforts to address the ‘black box’ problem in AI systems used in the military domain.

Major Discussion Point

Ethical Considerations and Challenges

J

Jimena Sofia Viveros Alvarez


Need for coherent global AI governance framework

Explanation

Jimena Sofia Viveros Alvarez emphasizes the importance of developing a comprehensive global AI governance framework. She argues that this framework should be binding and include proper implementation mechanisms to address the challenges posed by AI in peace and security domains.

Evidence

Reference to current governance initiatives such as GGEs, RE-AIM, RAISE, and UN General Assembly resolutions.

Major Discussion Point

International Law and AI in Warfare

Agreed with

Yasmin Afina

Anoosha Shaigan

Mohamed Sheikh-Ali

Agreed on

Importance of international law in AI governance

Differed with

Yasmin Afina

Differed on

Scope of AI governance in warfare

Need for binding treaties aligned with international law

Explanation

Alvarez stresses the importance of creating binding treaties that align with international humanitarian law. She argues that these treaties are necessary and ethical to ensure the preservation of human life and dignity in the context of AI and warfare.

Major Discussion Point

Future Developments and Recommendations

Agreed with

Mohamed Sheikh-Ali

Agreed on

Human control and accountability in AI warfare systems

A

Anoosha Shaigan

Speech speed

137 words per minute

Speech length

2080 words

Speech time

909 seconds

Compliance with international humanitarian law principles

Explanation

Anoosha Shaigan emphasizes the importance of AI systems in warfare complying with key principles of international humanitarian law. She specifically mentions the principles of distinction, proportionality, and necessity as crucial considerations for AI in conflict situations.

Evidence

Reference to UN principles, Geneva Conventions, and the ICRC’s handbook as sources for these principles.

Major Discussion Point

International Law and AI in Warfare

Agreed with

Yasmin Afina

Jimena Sofia Viveros Alvarez

Mohamed Sheikh-Ali

Agreed on

Importance of international law in AI governance

Data bias and model drift in AI systems

Explanation

Shaigan highlights the ethical concerns of data bias and model drift in AI systems used in warfare. She explains that biased training data can lead to discriminatory actions by AI-based weapons, while model drift can cause unpredictable behavior in AI systems.

Evidence

Examples of potential biases in AI training data, such as discriminatory targeting based on appearance.

Major Discussion Point

Ethical Considerations and Challenges

Privacy concerns in conflict zones

Explanation

Shaigan raises concerns about privacy issues in conflict zones, particularly related to surveillance and the use of facial recognition technology. She emphasizes the need for privacy protections, especially when targeting specific individuals like doctors or journalists.

Evidence

Reference to the Gaza conflict where specific individuals have been targeted.

Major Discussion Point

Ethical Considerations and Challenges

Clarifying liability for AI actions in warfare

Explanation

Shaigan discusses the complex issue of liability for actions taken by AI systems in warfare. She outlines various potential responsible parties, including operators, commanders, states, and developers, and emphasizes the need for clear accountability frameworks.

Evidence

Reference to existing legal principles such as command responsibility and state responsibility.

Major Discussion Point

Accountability and Responsibility

Corporate accountability for military AI suppliers

Explanation

Shaigan proposes that military AI suppliers and contractors should be held accountable for their products. She suggests implementing transparency reports similar to those used for climate change compliance in the corporate world.

Major Discussion Point

Accountability and Responsibility

Developing standards for military AI

Explanation

Shaigan mentions the proposition to develop specific standards for military AI. She acknowledges that this is a nascent area but suggests it could be an important development to follow in the future.

Major Discussion Point

Future Developments and Recommendations

M

Mohamed Sheikh-Ali

Speech speed

107 words per minute

Speech length

569 words

Speech time

318 seconds

Human oversight and control necessary for weapons systems

Explanation

Mohamed Sheikh-Ali emphasizes the necessity of human oversight and control in AI-powered weapons systems. He argues that life-and-death decisions should not be made solely by algorithms or tools, but must involve human judgment.

Evidence

Reference to ICRC’s position on the need for human control in autonomous weapons systems.

Major Discussion Point

International Law and AI in Warfare

Agreed with

Yasmin Afina

Jimena Sofia Viveros Alvarez

Anoosha Shaigan

Agreed on

Importance of international law in AI governance

Human control needed for life-and-death decisions

Explanation

Sheikh-Ali reiterates the ICRC’s position that decisions to kill or use lethal force should not be made by AI alone. He stresses that the final engagement or discharge of munitions should always be controlled by a human being.

Major Discussion Point

Ethical Considerations and Challenges

Agreed with

Jimena Sofia Viveros Alvarez

Agreed on

Human control and accountability in AI warfare systems

Human responsibility and accountability necessary

Explanation

Sheikh-Ali maintains that human responsibility and accountability are ultimately necessary in the use of AI in warfare. He argues that even with advanced technology, human control and oversight remain essential from an international humanitarian law perspective.

Major Discussion Point

Accountability and Responsibility

Agreed with

Jimena Sofia Viveros Alvarez

Agreed on

Human control and accountability in AI warfare systems

Engaging tech companies from design stage

Explanation

Sheikh-Ali mentions ICRC’s efforts to engage with technology companies developing AI systems for military use. He emphasizes the importance of having these conversations from the design stage of the technologies.

Evidence

Reference to the ICRC's specialist based in Silicon Valley and its delegation in China, both of which hold discussions with technology companies.

Major Discussion Point

Future Developments and Recommendations

Agreements

Agreement Points

Importance of international law in AI governance

Yasmin Afina

Jimena Sofia Viveros Alvarez

Anoosha Shaigan

Mohamed Sheikh-Ali

International law as central component of AI governance

Need for coherent global AI governance framework

Compliance with international humanitarian law principles

Human oversight and control necessary for weapons systems

All speakers emphasized the crucial role of international law in governing AI in warfare and military applications, stressing the need for compliance with existing legal frameworks and principles.

Human control and accountability in AI warfare systems

Jimena Sofia Viveros Alvarez

Mohamed Sheikh-Ali

Need for binding treaties aligned with international law

Human control needed for life-and-death decisions

Human responsibility and accountability necessary

Both speakers strongly advocated for maintaining human control and accountability in AI-powered warfare systems, particularly for critical decisions involving the use of lethal force.

Similar Viewpoints

Both speakers highlighted the need to develop specific technical standards or requirements for military AI that align with legal and ethical considerations.

Yasmin Afina

Anoosha Shaigan

Translating legal requirements into technical requirements

Developing standards for military AI

Both speakers emphasized the importance of involving and holding accountable the private sector companies developing AI technologies for military use.

Anoosha Shaigan

Mohamed Sheikh-Ali

Corporate accountability for military AI suppliers

Engaging tech companies from design stage

Unexpected Consensus

Comprehensive approach to AI governance beyond warfare

Jimena Sofia Viveros Alvarez

Anoosha Shaigan

Need for coherent global AI governance framework

Privacy concerns in conflict zones

Both speakers unexpectedly broadened the discussion beyond just warfare, emphasizing the need for a comprehensive AI governance approach that addresses various contexts including peacetime and civilian applications.

Overall Assessment

Summary

The speakers generally agreed on the importance of international law in AI governance, the need for human control and accountability in AI warfare systems, and the necessity of developing specific standards for military AI. There was also consensus on involving and regulating private sector companies in the development of military AI technologies.

Consensus level

High level of consensus on core principles, with some variations in specific focus areas. This strong agreement suggests a solid foundation for developing international norms and regulations for AI in warfare, but also highlights the complexity of implementing these principles across different contexts and stakeholders.

Differences

Different Viewpoints

Scope of AI governance in warfare

Yasmin Afina

Jimena Sofia Viveros Alvarez

International law as central component of AI governance

Need for coherent global AI governance framework

While Afina focuses on international law as a central component of AI governance in the military domain, Alvarez argues for a broader, coherent global AI governance framework that encompasses both military and civilian uses due to the dual-use nature of AI technologies.

Overall Assessment

Summary

The main areas of disagreement revolve around the scope and approach to AI governance in warfare, the extent of human control required, and the specific mechanisms for ensuring accountability and compliance with international law.

Difference level

The level of disagreement among the speakers is moderate. While they share common concerns about the ethical and legal implications of AI in warfare, they differ in their proposed solutions and areas of emphasis. These differences reflect the complex and multifaceted nature of the issue, highlighting the need for continued dialogue and collaboration among various stakeholders to develop comprehensive and effective governance frameworks for AI in warfare.

Partial Agreements

All three speakers agree on the need for human oversight and accountability in AI-powered weapons systems but differ in approach: Afina suggests translating legal requirements into technical ones, Shaigan focuses on clarifying liability frameworks, and Sheikh-Ali emphasizes maintaining human control over life-and-death decisions.

Yasmin Afina

Anoosha Shaigan

Mohamed Sheikh-Ali

Translating legal requirements into technical requirements

Clarifying liability for AI actions in warfare

Human oversight and control necessary for weapons systems

Thought Provoking Comments

I wanted to add another layer to our discussions, the role of the multi-stakeholder community. And in fact, in the report that I previously mentioned, one of the other key areas of nuance convergence that we have identified is the importance of multi-stakeholder engagement.

speaker

Yasmin Afina

reason

This comment broadened the scope of the discussion beyond just governments to include other stakeholders, highlighting the complexity of AI governance.

impact

It shifted the conversation to consider a more holistic approach to AI governance, leading to discussion of various stakeholder perspectives and initiatives.

I think I don’t like to just circumscribe this conversation to warfare. Because these technologies are being used to attack civilians also during peacetime. So I like to call it the peace and security spectrum of things.

speaker

Jimena Sofia Viveros Alvarez

reason

This reframing challenged the narrow focus on warfare and expanded the discussion to consider broader implications of AI in security.

impact

It prompted consideration of AI’s impact across different contexts and actors, leading to a more comprehensive examination of legal and ethical issues.

Data bias and model drift are the main concerns with AI models. Data bias is, of course, if you train your AI with biased data. For example, if you train it with skewed data or discriminatory patterns like kill all the dark-looking people or kill all the brown people or kill anybody who doesn’t look white or Caucasian.

speaker

Anoosha Shaigan

reason

This comment brought attention to specific technical challenges in AI systems that have serious ethical implications, especially in conflict situations.

impact

It deepened the discussion on the ethical considerations of AI in warfare, leading to further exploration of oversight and accountability measures.

Can an AI-controlled or autonomous weapon that has been tasked to execute an operation abort autonomously the operation if they see a child or a civilian or a fighter who is no longer capable of participating in the conflict?

speaker

Mohamed Sheikh-Ali

reason

This question highlighted a crucial ethical and practical challenge in implementing AI in warfare while adhering to international humanitarian law.

impact

It focused the discussion on the specific challenges of ensuring AI systems can comply with the nuanced requirements of international law, leading to consideration of human oversight and control.

Overall Assessment

These key comments shaped the discussion by expanding its scope beyond just warfare to consider broader security implications, highlighting the importance of multi-stakeholder engagement, addressing specific technical and ethical challenges of AI systems, and emphasizing the need for human control and oversight. The discussion evolved from a general overview of international law and AI to a more nuanced exploration of practical challenges, ethical considerations, and governance frameworks across various contexts and stakeholders.

Follow-up Questions

How can international law obligations be effectively translated into technical requirements for AI systems in military applications?

speaker

Yasmin Afina

explanation

This is crucial for ensuring AI technologies used in warfare are compliant with international law from the design stage.
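
One hedged reading of what such a translation could look like at the software level is to encode IHL-inspired preconditions as hard gates that must all pass before an engagement is even possible. Everything in the sketch below (the class labels, the 0.99 confidence threshold, the crude proportionality proxy, and the function names) is a hypothetical illustration, not a real targeting standard and not something endorsed by the speakers.

```python
# Hypothetical "compliance by design" sketch: legal principles expressed
# as non-bypassable software preconditions. Illustrative only.
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    classification: str              # e.g. "combatant", "civilian", "hors_de_combat"
    confidence: float                # model confidence in that classification, 0..1
    expected_civilian_harm: int      # assumed comparable units, a strong simplification
    expected_military_advantage: int

def engagement_permitted(assessment: TargetAssessment,
                         human_confirmed: bool,
                         min_confidence: float = 0.99) -> bool:
    if not human_confirmed:                        # human control: a person must authorize
        return False
    if assessment.classification != "combatant":   # distinction: only combatants
        return False
    if assessment.confidence < min_confidence:     # uncertainty forces an abort
        return False
    if assessment.expected_civilian_harm > assessment.expected_military_advantage:
        return False                               # crude proportionality proxy
    return True

# A fighter hors de combat is never a lawful target, however confident the model:
print(engagement_permitted(
    TargetAssessment("hors_de_combat", 0.999, 0, 5), human_confirmed=True))  # False
```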

How can a coherent global AI governance framework be developed and implemented?

speaker

Jimena Sofia Viveros Alvarez

explanation

A comprehensive framework is needed to address the dual-use nature of AI and its applications across civilian and military domains.

How can liability be determined when AI systems are involved in military actions that violate international law?

speaker

Anoosha Shaigan

explanation

Clarifying responsibility (e.g., operator, commander, developer, or state) is essential for accountability in AI-enabled warfare.

How can data bias and model drift in AI systems used in military contexts be effectively monitored and mitigated?

speaker

Anoosha Shaigan

explanation

Addressing these issues is critical to prevent discriminatory or unpredictable actions by AI in warfare.
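
As a hedged illustration of what such monitoring could mean in practice, the sketch below flags drift by comparing the distribution of live model scores against validation-time scores using the population stability index (PSI), a common drift metric; the sample scores, bin count, and the conventional ~0.25 alert threshold are illustrative assumptions.

```python
# Minimal PSI-based drift check over model scores in [0, 1]. Illustrative only.
import math

def psi(reference, live, bins=10):
    """Population stability index between two score samples."""
    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int(x * bins), bins - 1)] += 1
        # the small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]
    p, q = proportions(reference), proportions(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

reference_scores = [0.10, 0.15, 0.20, 0.22, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55]
live_scores      = [0.60, 0.65, 0.70, 0.72, 0.80, 0.85, 0.90, 0.92, 0.95, 0.99]
print(f"PSI = {psi(reference_scores, live_scores):.2f}")
# Readings above ~0.25 are conventionally treated as major drift,
# i.e. grounds to pull the model for re-validation rather than keep using it.
```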

How can privacy and surveillance concerns be addressed when using AI technologies in conflict zones?

speaker

Anoosha Shaigan

explanation

Balancing military needs with civilian privacy rights is a key challenge in AI-enabled warfare.

Can AI-controlled or autonomous weapons reliably comply with core principles of international humanitarian law, such as distinction and proportionality?

speaker

Mohamed Sheikh-Ali

explanation

This is fundamental to ensuring AI weapons can be used in accordance with international law.

How can explainable AI technologies be applied to autonomous weapon systems to ensure human oversight and understanding of targeting decisions?

speaker

Qurra Tul AIn Nisar

explanation

This is important for maintaining human control and accountability in AI-enabled warfare.
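
As one hedged illustration of what "explainable" can mean at its very simplest, the sketch below decomposes a linear decision score into per-feature contributions so a reviewer can see which inputs drove a flag; the features, weights, and threshold are invented for illustration and bear no relation to any real system.

```python
# Toy linear attribution: for a linear score, each feature's contribution
# is just weight * value, which makes the model's rationale inspectable.
FEATURES = ["carries_weapon", "near_military_site", "in_uniform", "holding_phone"]
WEIGHTS = [0.70, 0.40, 0.50, 0.05]   # assumed weights, not from any real model
THRESHOLD = 0.80

def explain(x):
    contributions = [w * xi for w, xi in zip(WEIGHTS, x)]
    score = sum(contributions)
    ranked = sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1]))
    return score, ranked

score, ranked = explain([0, 1, 1, 1])
print(f"score = {score:.2f} (flag threshold {THRESHOLD})")
for name, c in ranked:
    print(f"  {name}: {c:+.2f}")
# The flag rests on uniform and location, not weapon possession: the
# kind of rationale a human overseer can actually interrogate and veto.
```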

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.