AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409

11 Oct 2023 08:45h - 09:45h UTC

Event report

Speakers and Moderators

Speakers:
  • Rosanna Fanni, Civil Society, Western European and Others Group (WEOG)
  • Fernando Giancotti, Government, Western European and Others Group (WEOG)
  • Pete Furlong, Civil Society, Western European and Others Group (WEOG)
  • Shimona Mohan, Civil Society, Asia-Pacific Group
Moderators:
  • Rosanna Fanni, Civil Society, Western European and Others Group (WEOG)

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Audience

The audience discussion examined the impact of artificial intelligence (AI) on global security from several perspectives. One viewpoint raised the concern that AI could make the world more insecure, particularly in warfare. On this view, the strategy of massive retaliation is evolving to include preemptive strikes: when each side weighs the other's AI capacities on the battlefield, the comparison may favor preemptive action. Overall, the sentiment towards the effect of AI on world security is negative.

Furthermore, advances in deep learning have raised worries that bioweapons are becoming easier to generate, fueling concerns about biological warfare. With AI and deep learning lowering the barrier to creating such agents, the threat is significant, and this argument emphasizes the need to ensure biosecurity and peace. The sentiment surrounding this issue is also negative.

In addition to the concerns about AI in warfare and biological warfare, ethical considerations play a crucial role in the development and deployment of autonomous weapon systems. It is recognized that there is a need for ethical principles to guide the use of AI in armed conflicts. The sentiment regarding this perspective is neutral, but it highlights the importance of addressing ethical issues in this domain.

On the other hand, AI can potentially be used to reduce collateral damage and civilian casualties in conflict situations. This observation suggests a potential positive impact of AI on global security, as it can aid in minimizing harm during armed conflicts. The sentiment towards this notion is also neutral.

In conclusion, the analysis reveals mixed perspectives on the impact of AI on global security. While there are concerns regarding its potential to make the world more insecure, particularly in warfare and biological warfare, there is also recognition of the potential benefits of AI in reducing collateral damage and civilian casualties. It is crucial to ensure that ethical principles are followed in the development and deployment of AI in armed conflict situations. Additionally, the maintenance of biosecurity and peace is of utmost importance. These factors should be considered to navigate the complex landscape of AI and global security.

Fernando Giancotti

A recent research study conducted on the ethical use of artificial intelligence (AI) in Italian defence highlights the importance of establishing clear guidelines for its deployment in warfare. The study emphasises that commanders require explicit instructions to ensure the ethical and effective use of AI tools.

Ethical concerns in the implementation of AI in defence are rooted in the inherent accountability that comes with the monopoly on violence held by defence forces. Commanders worry that failure to strike the right balance between value criteria and effectiveness could put them at a disadvantage in combat. Additionally, they express concerns about the opposition’s adherence to the same ethical principles, further complicating the ethical landscape of military AI usage.

To address these ethical concerns and ensure responsible deployment of AI in warfare, the study argues for the development of a comprehensive ethical framework on a global scale. It suggests that the United Nations (UN) should take the lead in spearheading a multi-stakeholder approach to establishing this framework. Currently, different nations have their own frameworks for the ethical use of AI in defence, but the study highlights the need for a unified approach to tackle ethical challenges at an international level.

However, the study acknowledges the complexity and contradictions involved in addressing ethical issues related to military AI. It notes that a mutually agreed, fully satisfactory ethical framework may never be reached. Despite this, it stresses the necessity of pushing for compliance through intergovernmental processes, even though countries' prioritisation of national interests further complicates the establishment of universally agreed policies.

The study brings attention to the potential consequences of the mass abuse of AI, highlighting the delicate balance between stabilising and destabilising the world. It recognises that AI has the capacity to bring augmented cognition, which can help prevent strategic mistakes and improve decision-making in warfare. For example, historical wars have often been the result of strategic miscalculations, and the deployment of AI can help mitigate such errors.

While different nations have developed ethical principles related to AI use, the study points out the lack of a more general framework for AI ethics. It highlights that the principles can vary across countries, including the UK, USA, Canada, Australia, and NATO. Therefore, there is a need for a broader ethical framework that can guide the responsible use of AI technology.

The study cautions against completely relinquishing the final decision-making power to AI systems. It emphasises the importance of human oversight and responsibility, asserting that the ultimate decision for actions should not be handed over to machines.

Furthermore, the study highlights the issue of collateral damage in current defence systems and notes that specific processes and procedures are in place to evaluate compliance and authorise engagement. It mentions the use of drones for observation to minimise the risk of unintended harm before any decision to engage is made.

In conclusion, the research on ethical AI in Italian defence underscores the need for clear guidelines and comprehensive ethical frameworks to ensure the responsible and effective use of AI in warfare. It emphasises the importance of international cooperation, spearheaded by the UN, to address ethical challenges related to military AI usage. The study acknowledges the complexities and contradictions involved in this process and stresses the significance of augmenting human decision-making with AI capabilities while maintaining human control.

Paula Gurtler

The discussion surrounding the role of artificial intelligence (AI) in the military extends beyond lethal autonomous weapon systems. It includes a broader conversation about the importance of explainable and responsible AI. One key argument is that ethical principles need to be established at an international level: ethical considerations should not be limited to individual countries but should be collectively agreed upon to ensure responsible AI usage.

Another significant aspect often overlooked when focusing solely on legal regulations is the impact of AI on gender and racial biases. By disregarding these factors, we fail to address the potential biases embedded within AI algorithms. Therefore, it is crucial to consider the wider implications of AI and its contribution to societal biases, ensuring fairness and equality.

Geopolitics and power dynamics further complicate the utilization of AI in the military. With nations vying for supremacy, AI becomes entangled in strategic calculations and considerations. The use of AI in military operations can potentially affect global power balances and lead to unintended consequences. This highlights the intricate relationship between AI, politics, and international relations, which must be navigated with care.

Although various ethical guidelines already exist for AI deployment, one question arises: do we require separate guidelines specifically designed for the military? The military context often presents unique challenges and ethical dilemmas, differing from other domains where AI is utilized. Therefore, there is a debate over whether existing guidelines adequately address the ethical considerations surrounding AI in military applications or if specific guidelines tailored to the military context are necessary.

In conclusion, the debate regarding AI in the military extends beyond the legality of autonomous weapon systems. It encompasses discussions about explainable and responsible AI, the need for international ethical principles, the examination of gender and racial biases, the influence of geopolitics, and the necessity of specific ethical guidelines for military applications. These considerations highlight the complex nature of implementing AI in the military and emphasize the importance of thoughtful and deliberate decision-making.

Rosanna Fanni

During the discussion, the speakers explored the potential dual use of artificial intelligence (AI) in both civilian and military applications. They acknowledged that AI systems originally developed for civilian purposes could also have valuable uses in defense. The availability of data, machine learning techniques, and coding assistance makes it feasible for AI to be applied in both contexts.

A major concern raised during the discussion was the lack of ethical guidelines and regulations in the defense realm. While there are numerous ethical guidelines, regulations, and laws in place for the civilian use of AI, the defense sector lacks similar principles. This highlights a disconnect between the development and use of AI in civilian and defense contexts. Developing ethical guidelines and regulations specific to AI in defense applications is crucial to ensure responsible and accountable use.

The European Union’s approach to AI, particularly the exclusion of defense applications from the AI Act, was criticized. The AI Act employs a risk-based approach, yet its exclusion of defense applications contradicts this approach. This omission raises questions regarding the consistency and fairness of the regulatory framework. The speakers argued that defense applications should not be overlooked and should be subject to appropriate regulations and guidelines.

Another important issue discussed was the need for international institutions to take on more responsibility in terms of pandemic preparedness. The COVID-19 pandemic has demonstrated the necessity of being prepared to tackle challenges and risks arising from the rapid spread of bio-technology. The speakers emphasized that institutions should be better prepared to ensure the protection of public health and well-being. Moreover, they stressed that equal distribution of resources is crucial to prevent global South nations from being left behind in terms of bio-risk preparedness. The speakers highlighted the importance of avoiding a race between countries in preparedness and ensuring that global South countries, which often lack resources, are provided with the necessary support.

In conclusion, the discussion revolved around the need to address the potential dual use of AI, establish ethical guidelines and regulations for defense applications, critique the exclusion of defense applications in the European Union’s AI Act, and emphasize the role of international institutions in pandemic preparedness and equal distribution of resources. These insights shed light on the ethical and regulatory challenges associated with AI, as well as the importance of global collaboration in addressing emerging risks.

Pete Furlong

The discussion revolves around the impact of artificial intelligence (AI) and emerging technologies on warfare. It is argued that AI and other technologies can be leveraged in conflicts, accelerating the pace of war. These dual-use technologies are not specifically designed for warfare but can still be used in military operations. For example, AI systems that were not initially intended for the battlefield can be repurposed for military use.

The military use of AI and other technologies can significantly accelerate the pace of war; indeed, increasing the speed and effectiveness of military operations is often the intent. This, however, raises concerns about the consequences of faster-moving conflicts.

One of the challenges in implementing AI principles is the broad interpretation of these principles, as different countries may interpret them differently. This poses challenges in creating unified approaches to AI regulations and ethical considerations. While broad AI principles can address a variety of applications, there is a need for more targeted principles that specifically address the issues related to warfare and the military use of AI.

Discussions about the use of AI and emerging technologies in warfare are increasing in various summits and conferences. The UK Summit for AI Safety is an example of such discussions. Additionally, the concern about the use of biological weapons is growing, as it is noted that they only need to work once, unlike drugs that need to work consistently. This raises significant ethical and safety concerns.

AI’s capabilities are dependent on the strength of sensors. The cognition of AI is only as good as its sensing abilities. Therefore, the value and effectiveness of AI in warfare depend on the quality and capabilities of the sensors used.

One potential use of AI in warfare is to better target strikes and reduce the likelihood of civilian casualties. The aim is to enhance precision and accuracy in military operations to minimize collateral damage. However, the increased ability to conduct targeted strikes might also lead to an increase in the frequency of such actions.

One of the main concerns regarding the use of AI in warfare is the lack of concrete ethical principles for autonomous weapons. The REAIM Summit (on Responsible AI in the Military Domain) aims to establish such principles, yet a gap in concrete ethical guidelines remains, and the UN Convention on Certain Conventional Weapons has so far been unable to address the issue effectively.

In conclusion, the discussions surrounding AI and emerging technologies in warfare highlight the potential benefits and concerns associated with their use. While these technologies can be leveraged to enhance military capabilities, there are ethical, safety, and interpretational challenges that need to be addressed. Targeted and specific principles related to the military use of AI are necessary, and conferences and summits play a crucial role in driving these discussions forward. The impact of AI on targeting precision and civilian protection is significant, but it also raises concerns about the escalation of conflicts. Ultimately, finding a balance between innovation, ethics, and regulation is essential to harness the potential of AI in warfare while minimizing risks.

Shimona Mohan

The discussions highlight the significance of ethical and responsible AI methodologies in military applications. Countries such as the United States, United Kingdom, and France have already implemented these strategies within their military architectures. However, India has chosen not to sign the global call for Responsible AI, prioritising national security over international security mechanisms and regulations.

The absence of national policy prioritisation of military AI poses challenges in forming intergovernmental actions and collaborations. Without a clear policy framework, it becomes difficult for countries to establish unified approaches in addressing the ethical and responsible deployment of AI in the military domain.

Gender and racial biases in military AI are also raised as important areas of concern. Studies have shown significant biases in AI systems: a Stanford study found that 44% of the AI systems examined exhibited gender bias, and 26% exhibited both gender and racial bias. Another study, conducted by the MIT Media Lab, found that facial recognition software failed to correctly recognise darker-skinned female faces roughly 34% of the time. Such biases undermine the fairness and inclusivity of AI systems and can have serious implications in military operations.

The balance between automation and ethics in military AI is emphasised as a crucial consideration. While performance in military operations is vital, it is equally important to incorporate ethical considerations into AI systems. The idea is to ensure that weapon systems maintain their level of performance while also incorporating ethical, responsible, and explainable AI systems.

The use of civilian AI systems in conflict spaces is identified as a noteworthy observation. Dual-use technologies like facial recognition systems have been employed in the Russia-Ukraine conflict, where soldiers were identified through these systems. This highlights the potential overlap between civilian and military AI applications and the need for effective regulations and ethical considerations in both domains.

Additionally, the potential relevance of AI to bio-safety and bio-security is mentioned. The Netflix documentary “Unknown: Killer Robots” showcased the risk that AI could be used to generate poisons and biotoxins. However, with the right policies and regulations in place, researchers and policymakers remain optimistic that such bio-security risks can be prevented through responsible and ethical AI practices.

In conclusion, ethical and responsible AI methodologies are crucial in military applications. The implementation of these strategies by countries like the US, UK, and France demonstrates the growing recognition of the importance of ethical considerations in AI deployment. However, the absence of national policy prioritisation and India’s refusal to sign the global call for Responsible AI highlight the complex challenges in achieving a global consensus on ethical AI practices in the military domain. Addressing gender and racial biases, finding a balance between automation and ethics, and regulating the use of civilian AI systems in conflict spaces are key areas that require attention. Ultimately, the responsible and ethical use of AI in military contexts is essential for ensuring transparency, fairness, and safety in military operations.
