WS #123 Responsible AI in Security: Governance, Risks and Innovation

26 Jun 2025 14:45h - 15:45h


Session at a glance

Summary

This discussion, moderated by Yasmin Afina from the United Nations Institute for Disarmament Research (UNIDIR), focused on responsible AI governance in security contexts and the critical role of multi-stakeholder engagement. The session was part of UNIDIR’s roundtable on AI security and ethics (RAISE), established in partnership with Microsoft to bridge global divides and foster cooperation on AI governance issues. Three expert panelists provided opening remarks: Dr. Jingjie He from the Chinese Academy of Social Sciences emphasized the importance of inclusive, multi-stakeholder approaches and highlighted AI’s positive applications in satellite remote sensing for conflict monitoring, while noting challenges like adversarial attacks. Michael Karimian from Microsoft outlined industry’s crucial role in establishing norms and safeguards, emphasizing transparency, accountability, due diligence throughout the AI lifecycle, and proactive collaboration to reduce global capacity disparities. Dr. Alexi Drew from the International Committee of the Red Cross advocated for comprehensive lifecycle management approaches to AI governance, arguing that ethical and legal considerations must be integrated at every stage rather than treated as afterthoughts.


The discussion addressed several critical concerns raised by participants, including the need for AI content authentication to prevent misinformation and violence, the risks of AI misalignment in military contexts where commanders may rely on AI systems under pressure, and questions about responsibility for mitigating AI risks in developing countries with limited technological control. All panelists agreed that responsibility for AI governance is shared among all stakeholders—governments, industry, civil society, and individuals—though each has distinct roles and capabilities. The conversation concluded with optimism that innovation and security can coexist when guided by proper values and governance frameworks, emphasizing that responsible AI development requires collective global effort rather than competitive approaches.


Key points

## Major Discussion Points:


– **Multi-stakeholder governance of AI in security contexts**: The discussion emphasized the critical need for inclusive engagement across various stakeholders (governments, industry, civil society, academia) to effectively govern AI applications in international peace and security, with particular focus on platforms like UNIDIR’s RAISE initiative.


– **Industry responsibility and proactive engagement**: Extensive discussion on how technology companies must take active roles in establishing norms, implementing due diligence processes, ensuring transparency and accountability, and contributing technical expertise throughout the AI lifecycle rather than treating governance as an afterthought.


– **Lifecycle management approach to AI governance**: A central theme focusing on the necessity of integrating ethical, legal, and technical governance considerations at every stage of AI development – from initial design and data selection through validation, deployment, and eventual decommissioning – rather than applying governance as a final checkpoint.


– **AI authenticity and content verification challenges**: Participants raised concerns about the security implications of AI-generated content that cannot be easily distinguished from human-created content, discussing the need for technical solutions like digital signatures to identify AI-generated materials and prevent misuse for disinformation or conflict instigation.


– **Military applications and human-machine interaction risks**: Discussion of specific challenges in military contexts, including the risk of AI systems becoming misaligned during battlefield use, commanders’ over-reliance on AI decision-support systems under pressure, and the importance of maintaining compliance with international humanitarian law in AI-enabled military operations.


## Overall Purpose:


The discussion aimed to explore responsible AI governance frameworks for international peace and security through a multi-stakeholder lens, examining how different actors (UN institutions, industry, civil society, military) can collaborate to ensure AI technologies enhance rather than undermine global stability and security.


## Overall Tone:


The discussion maintained a professional, collaborative, and constructive tone throughout. It began with an informative and academic approach during the introductory presentations, then became more interactive and practically-focused during the Q&A session. Despite addressing serious security concerns and potential risks, the conversation remained optimistic about the possibility of achieving responsible AI governance through collective action. The tone was notably inclusive, with moderators actively encouraging participation from diverse geographic and sectoral perspectives, and speakers consistently emphasizing shared responsibility rather than assigning blame.


Speakers

– **Yasmin Afina** – Researcher from the United Nations Institute for Disarmament Research (UNIDIR), moderator of the session on responsible AI in security, governance, and innovation


– **Jingjie He** – Dr. from the Chinese Academy of Social Sciences, researcher working on AI and satellite remote sensing projects


– **Michael Karimian** – Representative from Microsoft, involved in the roundtable for AI security and ethics (RAISE)


– **Alexi Drew** – Dr. from the International Committee of the Red Cross (ICRC), expert on lifecycle management approach to AI governance in security


– **Bagus Jatmiko** – Commander in the Indonesian Navy, researcher in AI and information warfare within the military domain and defense sector


– **Audience** – Multiple audience members who asked questions and made comments during the session


**Additional speakers:**


– **Francis Alaneme** – Representative from the .ng domain name registry


– **George Aden Maggett** – Judge at the Supreme Court of Egypt and honorary professor of law at Durham University, UK


– **Rowan Wilkinson** – From Chatham House (mentioned in chat/questions but did not speak directly in the transcript)


Full session report

# Comprehensive Report: Responsible AI Governance in Security Contexts – Multi-Stakeholder Perspectives and Collaborative Frameworks


## Executive Summary


This discussion, moderated by Yasmin Afina from the United Nations Institute for Disarmament Research (UNIDIR), examined responsible artificial intelligence governance in international peace and security contexts. The session formed part of UNIDIR’s Roundtable on AI Security and Ethics (RAISE), a collaborative initiative established in partnership with Microsoft to foster international cooperation on AI governance issues.


The discussion brought together perspectives from academia, industry, humanitarian organisations, military institutions, and the judiciary to explore multi-stakeholder approaches to AI governance challenges. Through interactive polling, structured presentations, and Q&A dialogue, participants examined questions about responsibility, accountability, and practical implementation of AI governance frameworks whilst addressing concerns about technical limitations, power imbalances, and real-world consequences of AI deployment in security contexts.


## Session Context and UNIDIR/RAISE Introduction


Yasmin Afina opened by explaining UNIDIR’s role as the UN’s dedicated research institute on disarmament, established during the Cold War to provide neutral space for dialogue on security issues. She positioned the RAISE initiative as continuing this tradition by creating depoliticised forums for AI governance discussions that can overcome competitive dynamics and distrust hindering international cooperation.


The moderator emphasised that whilst AI presents opportunities for enhancing international peace and security, it also introduces complex challenges requiring collaborative approaches across traditional boundaries. She noted the session’s connection to broader international efforts, including ongoing discussions around the Global Digital Compact and other UN-sponsored platforms addressing AI governance.


## Interactive Opening: Stakeholder Perspectives


Using Slido polling (code 179812), Afina engaged participants on two key questions about AI’s role in international peace and security and the multi-stakeholder community’s effectiveness in addressing governance challenges.


Participant responses highlighted diverse concerns including:


– Censorship and surveillance capabilities


– Fake news and misinformation


– Data privacy violations


– Facial recognition at borders


– Autonomous weapons systems


– Cybersecurity threats


These responses established the broad scope of AI governance challenges that would be addressed throughout the session.


## Expert Panel Presentations


### Academic Perspective: Dr. Jingjie He, Chinese Academy of Social Sciences


Dr. He emphasised the critical importance of multi-stakeholder approaches, arguing that technological challenges inherently require interdisciplinary solutions. She highlighted positive applications of AI in peace and security contexts, specifically referencing Amnesty International’s Darfur project as a successful example. This initiative used Element AI technology with 29,000 volunteers to analyse satellite imagery for conflict monitoring, demonstrating AI’s potential as a tool for humanitarian purposes.


However, Dr. He acknowledged significant technical challenges, particularly adversarial attacks that make AI systems fragile and governance discussions complex. She introduced the concept of AI as both a “force multiplier” and “threat multiplier,” noting that poorly designed systems create risks for both civilian populations and military forces themselves.


Regarding transparency, Dr. He expressed scepticism about algorithm openness due to intellectual property concerns and industry practices of protecting core technologies. She concluded by emphasising shared responsibility for AI governance whilst acknowledging the need for better knowledge sharing between technology developers and decision-makers, particularly in military contexts.


### Industry Perspective: Michael Karimian, Microsoft


Karimian outlined industry’s role in establishing norms and safeguards for responsible AI deployment in security contexts. He emphasised that companies are uniquely positioned to identify risks early in development processes and have obligations under UN guiding principles to ensure their products are not used for human rights abuses.


He stressed industry responsibility extends beyond compliance to proactive engagement in norm-setting and standard development. Karimian advocated for clear standards ensuring AI systems used in security applications are transparent about their capabilities and limitations, with robust accountability mechanisms including documentation, monitoring, and auditing capabilities.


Addressing global capacity disparities, Karimian noted the importance of proactive collaboration to reduce inequalities in AI governance capabilities between developed and developing nations. He suggested industry has a role in supporting capacity-building initiatives, particularly where regulatory frameworks are still emerging.


### Humanitarian Perspective: Dr. Alexi Drew, International Committee of the Red Cross


Dr. Drew presented a comprehensive lifecycle management framework for AI governance, arguing that governance must be integrated at every stage from initial design through decommissioning. She identified three critical stages:


1. **Development stage**: Ensuring compliance requirements like International Humanitarian Law are built in from the outset


2. **Validation stage**: Addressing risks of localisation, where systems may not work as intended in different contexts


3. **Deployment stage**: Managing inscrutability risks where users may not understand system limitations


Dr. Drew emphasised that systems should be designed, trained, and tested with compliance requirements integrated rather than retrofitted, preventing governance from becoming a “checkbox exercise.” She highlighted that all stakeholders possess various levers of influence, including participation in standard-setting organisations and procurement strategies to enforce governance requirements.


Addressing innovation concerns, Dr. Drew rejected the notion that responsible AI governance requires trade-offs between innovation and security, characterising this as a design challenge rather than a zero-sum game. She also stressed the importance of training military users to understand AI system capabilities, limitations, and failure modes.


## Audience Q&A and Discussion


### Content Authenticity Challenges


Francis Alaneme from the .ng domain name registry raised concerns about AI-generated content that cannot be distinguished from human-created materials, highlighting security implications of AI-generated video content being used to spread false information and potentially instigate violence.


Dr. Drew responded by mentioning the Content Authenticity Initiative (CAI) as an example of industry efforts to develop technical solutions for content authentication, whilst acknowledging implementation challenges in balancing comprehensive coverage with practical feasibility.


### Military Applications and Human-Machine Interaction


Commander Bagus Jatmiko, an Indonesian Navy officer and researcher in AI and information warfare, raised concerns about AI systems becoming misaligned during battlefield use. He introduced the concept of AI systems becoming “psychopathic” in their tendency to provide answers users want to hear rather than accurate assessments, warning this could be dangerous when commanders are under pressure and may accept AI-generated answers that confirm existing beliefs rather than challenge assumptions.


This highlighted the critical importance of training and education for AI system users in high-stakes environments where consequences of poor decision-making can be severe.


### Global Power Imbalances and Accountability


Judge George Aden Maggett from the Supreme Court of Egypt raised fundamental questions about responsibility and accountability, particularly regarding power imbalances between technology companies in developed countries and affected populations in developing nations. His intervention connected abstract policy considerations to real-world consequences, including civilian casualties in current conflicts involving AI-enabled weapons systems.


### Algorithm Transparency and Openness


Rowan Wilkinson from Chatham House asked about recent policy shifts regarding AI openness, prompting discussion about balancing transparency requirements with security and commercial considerations. This highlighted ongoing tensions between demands for accountability and practical constraints on algorithm disclosure.


## Key Themes and Takeaways


### Shared Responsibility Framework


All speakers agreed that responsibility for AI governance is distributed across stakeholders rather than concentrated in any single entity. This encompasses governments, industry, civil society, international organisations, and individuals, though speakers emphasised different implementation mechanisms.


### Multi-Stakeholder Engagement as Foundation


Strong consensus emerged that effective AI governance requires inclusive participation from diverse stakeholders, bringing together different perspectives and expertise to address complex technological challenges. Platforms like UNIDIR’s RAISE initiative provide valuable neutral spaces for knowledge-sharing that can transcend geopolitical constraints.


### Lifecycle Management Approach


Both industry and humanitarian perspectives converged on integrating governance considerations throughout the entire AI system lifecycle. This approach prevents governance from becoming mere compliance whilst ensuring ethical and legal considerations are substantively integrated into system design and operation.


### Technical Implementation Challenges


Several technical challenges remain unresolved, including practical implementation of content authentication systems, addressing adversarial attacks on AI systems used for peace and security monitoring, and developing effective mechanisms for preventing AI misalignment in operational contexts.


### Sustainability and Resource Concerns


Dr. He specifically noted funding challenges facing platforms like RAISE, emphasising that effective governance requires sustained commitment and resources from all stakeholders. This challenge could significantly impact long-term effectiveness of collaborative governance efforts.


## Conclusion


This discussion demonstrated both the complexity of AI governance challenges in security contexts and the potential for collaborative solutions. The consensus on fundamental principles of shared responsibility, multi-stakeholder engagement, and lifecycle management provides a foundation for developing governance frameworks that can enhance international peace and security whilst ensuring responsible AI development and deployment.


However, unresolved questions about sustainability, implementation, and accountability highlight significant work remaining to translate these principles into effective practice. The session’s combination of technical expertise, practical experience, and moral urgency suggests that effective AI governance will require continued collaboration across diverse stakeholder groups, sustained commitment to addressing global inequalities, and ongoing adaptation to evolving technological capabilities.


The optimistic perspective that innovation and security can coexist when guided by proper governance frameworks provides hope that these challenges can be addressed through collective effort rather than competitive approaches, though significant technical and institutional challenges remain to be resolved.


Session transcript

Yasmin Afina: Good afternoon from Oslo or good morning, wherever you are tuning in from. My name is Yasmin Afina, researcher from the United Nations Institute for Disarmament Research. And I have the pleasure of moderating today’s session on responsible AI in security, governance, and even innovation. For those who are joining us in person, may I please highly encourage you to come to the front to this beautiful, almost roundtable to allow us to have a free-flowing roundtable discussion, because this session forms part of our project related to the roundtable on AI security and ethics, and in the spirit of having a roundtable, I do highly encourage everyone in the room who has just joined us today to join us in the front, because I would like this to be very interactive and highly engaging. And for those who are joining online, thank you very much for joining us online, wherever you are. And as we are using Zoom, I do encourage you to use the raise hand function if you would like to take the floor, as again, this is a very highly interactive discussion. So again, my name is Yasmin Afina and I am very pleased to be joined today by three excellent speakers who, for those who are in the room, we do not see them yet, but for those online, you will see them: Dr. Jingjie He from the Chinese Academy of Social Sciences, Michael Karimian from Microsoft, and Dr. Alexi Drew from the ICRC. And before we get into the kick-off remarks from our excellent panelists, I would like to spend five minutes to introduce you a little bit to the roundtable for AI, security and ethics, and my institute, the United Nations Institute for Disarmament Research. So, at a glance, UNIDIR is an autonomous institution within the United Nations. You can think of us like a think tank within the UN ecosystem. We’re independent from the Secretariat, and we were established in 1980, at the height of the Cold War, to ensure that the deliberations of states are well-informed and evidence-based in the area of disarmament. Of course, today, the landscape of disarmament is much different from what it was in 1980, and so we are conducting evidence-based policy research, we’re conducting multi-stakeholder engagement, and we want to make sure that we also facilitate dialogue where there is none, including on sensitive issues such as AI in security. So, one of the priority areas of UNIDIR and our work is related to AI and autonomy in security and defense, including the military domain, and what we’ve noticed is that in the light of this technology’s highly unique nature, we understood very quickly the importance of multi-stakeholder engagement and perspectives to obtain input on the implications of AI for international peace and security. So, we saw the need to provide a platform for open, inclusive, and meaningful dialogue. We saw this need as well to warrant public trust and legitimacy, and to ensure that these discussions are not just a one-way discussion, but actually are coming both from the bottom-up but also from the top-down approach. We also want to make sure that we improve cross-disciplinary literacy and so on and so forth. You may see on the slides a number of very different incentives as to why the multi-stakeholder perspective is indeed important on this issue. So that is why, in March 2024, UNIDIR joined forces with Microsoft, in partnership with a series of other stakeholders, for the establishment of the roundtable for AI security and ethics, RAISE.
Our idea is to bring together experts and thought leaders from all around the world. So we have, for example, experts from China, from Russia, from the United States and United Kingdom, but also from Namibia, Ecuador, Kenya, India. We really want to make sure that we bridge divides and we bridge the conversation when there is none on these issues of AI and security. We aspire to lay the foundation for robust global AI governance grounded in cooperation, transparency, and mutual learning. With the idea that we should overcome any sense of competitiveness or distrust, and where there is any need for building trust, this is where it would be. We also would like to use RAISE to foster and facilitate compliance with international law and ethical norms in the light of their importance in the age of innovation, warfare, and security, and destabilization. Finally, we would like to complement and reinforce responsible and ethical AI practices in the security and defense domains. Again, in an area where we are hoping to disrupt monopolies in the hands of the few, and to ensure that all voices are heard from all layers of society. Before we hear from our excellent panelists, I would like to provide an opportunity for both participants who are joining online, but also in person, to share their thoughts via Slido. For those who are unfamiliar with Slido, please ask our technicians to share the Slido presentation on screen. Thank you very much. First, before we start, I wanted to get your sense of what you think AI and international peace and security means for you. There is no right or wrong answer. And for that, what I would encourage you to do is to go on slido.com and put in the code 179812. For those who see the screen, use the QR code to join the conversation. And you will see a text box where you’ll be able to provide your input on what you think AI and international peace and security means for you. And again, no right or wrong answer. It really is for us to understand your thoughts and your perspectives and to really set the scene and see where things are at. Because, of course, it is important for us to share with you the work that we’re doing. But it’s also important for us to engage with the incredibly diverse IGF community to see what you think about this issue. So I will leave this poll open for a few minutes while you put in your contributions on what you think AI and international peace and security means for you. And there should be the results showing in. Perhaps if I can ask our technicians to see if there’s any input that has been added. So it is not showing on the screen, but oh, sorry, if you can please come back to Slido. Sorry, I’m bugging the technicians with my. I can see for those who are joining online that there’s quite a few responses already. So we see, for example, censorship, fake news that has been generated using AI. I see that AI could be used for good or for bad, and I really appreciate this balanced approach to looking at AI for international peace and security. I see issues related to data privacy and threats to human intelligence. I also see the use of AI in the military, law enforcement, and how they are used responsibly in their respective fields, facial recognition at borders, countering the proliferation of AI-enabled weapons systems, and I also see automated target selection. So a very wide range of responses, and please keep adding your responses to this question.
May I please ask our colleagues from IT to share again Slido, and this time for the next question. Oh perfect, now we can see them. So I think that this is great. Thank you very much to our IT team, and I’ve heard that there was a connectivity issue, so please bear with us as we navigate the hybrid space of discussions for this session. So now I’m going to get us to the second question. What should be the role of the multi-stakeholder community in the governance of AI and international peace and security? For those who are on Slido, it’s the same link. If you just refresh your page, or it should be refreshing on its own. If you can please add your responses, and they should start appearing. And for those who have just joined us, may I encourage you to open slido.com using your laptop or your phone by scanning the QR code to provide your input on what you think should be the role of the multistakeholder community in the governance of AI and international peace and security. So I see already one response on agreeing on and implementing norms. I do encourage everyone to keep sharing their discussions and their reflections on what they think should be the role of the multistakeholder community, because that will also help us at UNIDIR to inform our work on this and how to better engage with the multistakeholder community. So I see big commitments, and I would love to hear your thoughts when we’ll open the floor on what sort of commitments you think the multistakeholder community could have a role in. I also see industry and AI, and perhaps again when we’ll open the floor for discussions, I would love to hear your thoughts on what they mean. And once again, for those who are joining us in the room physically, may I encourage you highly to come onto the stage to join us in the middle to enable us to have a roundtable discussion so that it is interactive. I see a lot of input into the Slido and I do see, for example, trust building, standards, proposed solutions, technical standards again, actionable legislation, responsible and peace. So I do appreciate you really putting a lot of input into these discussions, and now that we’ve had this little warm-up exercise, I will please ask our IT colleagues to get us back to the PowerPoint for me to introduce once again our speakers for today’s discussions. So the way it will work for the rest of the session, as we have 45 minutes: I will be providing the floor to three speakers who are joining us online for kick-off remarks, which are supposed to be introductory and generate more questions and answers perhaps on very select issues related to AI, international peace and security. And then I will open the floor both for those who are joining us online but also in person for a discussion on perhaps reactions to what you’ve heard, perhaps to elaborate a bit on the questions that you have, on the answers that you have shared with us, and also perhaps if you have any questions for our panellists and speakers who are joining us today. So again for those who have just joined, we are joined virtually by three excellent speakers: Dr Jingjie He from the Chinese Academy of Social Sciences, Dr Alexi Drew from the International Committee of the Red Cross and Michael Karimian from Microsoft. For those who are joining in person, I assure you they are online and they should be appearing on the screen when it is their turn to speak. So now may I please turn online and ask Jingjie to provide us with her opening remarks.
May I please ask the IT colleagues to show Jingjie on the screen. Jingjie, you have the floor. Thank you very much. Thank you Yasmin. Very nice to be meeting you all and thank you for the invitation. Always a pleasure to join the conversation just to see your face on screen.


Jingjie He: So I think the inclusive engagement across stakeholders is essential for the effective global governance of artificial intelligence, and the main reason will be that technological challenges I believe can often be addressed through technological solutions. However, the identification of the true nature of these challenges requires an interdisciplinary and multi-stakeholder approach. Such an inclusive approach ensures that a wide range of knowledge, expertise, and perspectives, often complementary in nature, are taken into account in shaping responsible and equitable understandings, norms, policies for AI development and deployment. So here I want to take the opportunity to really underscore the importance of the UN-sponsored platforms, such as UNIDIR’s RAISE that Yasmin just introduced, and the IGF, and the Global Digital Compact, etc. So these platforms play a critical role in enabling multi-stakeholder engagements. What sets them apart from more state-centric mechanisms is their unique ability to provide neutral, depoliticized, and inclusive spaces. So within those platforms, knowledge-sharing and confidence-building can take place beyond the constraints of geopolitical tensions and national interests, allowing for more constructive, balanced, and therefore more promising outcomes. But of course, one dilemma that I want to point out is that such platforms, especially like RAISE, do face funding issues and questions about how to make the project more sustainable. I remember the first time I attended RAISE, Yasmin was sharing the concern that this project should be more sustainable and stuff like that. I do believe that Yasmin and Michael have done a great job supporting this program, but I do believe that this should be a more collective effort for all of us to bring resources and contribute to this project and these communities. So Yasmin also asked me to provide some concrete examples for how AI fosters international peace and security. So one of my recent projects is about AI and satellite remote sensing. So satellite remote sensing has been increasingly recognized as a critical tool for international peace and security. In recent years there has been a growing interest in applying AI and machine learning to enhance the analytical efficiency of satellite imagery. So one example is Amnesty International, in collaboration with a company called Element AI as well as almost 29,000 volunteers. So they developed tools to automatically analyze satellite imagery for monitoring conflicts in Darfur. So this is just one of the many examples of how AI can empower satellite imagery and benefit international peace, security, and non-proliferation missions, etc. Of course, I always care about the challenges. So one potential challenge is, as my previous research shows, that there’s always a challenge of adversarial attacks in such systems, which will make the system more vulnerable and our discussion more interesting and challenging. So I will stop for now and I will be happy to answer questions. Yasmin?


Yasmin Afina: Thank you very much, Jingjie He, for these very short and crisp but also very provocative introductory remarks. I do appreciate you noting as well the difficulty that the UN is currently facing on fundraising; of course, UNIDIR, as a voluntarily funded institute, we do rely on voluntary contributions, so I do appreciate you noting as well the dire situation that we’re facing today to enable such dialogue to happen. But also, I appreciate you sharing the importance of AI to enhance the ability to analyze and to monitor conflicts, including by civil society organizations. So it does show the potential of AI to enhance international peace and security, while of course being balanced by the risks that may resurface, including with regards to adversarial attacks on these AI technologies. So I think that one key aspect that you also shared with us is the importance of engaging all kinds of stakeholders, and we’re very fortunate today to be joined by Michael Karimian from Microsoft. Michael, may I please ask now that you provide us with your kick-off remarks, particularly to see what you think is the role of industry in supporting responsible AI practices for international peace and security. Michael, over to you.


Michael Karimian: Thank you Yasmin, it’s a pleasure to join you all, and thank you Yasmin not just for facilitating today’s discussion but of course for being an essential partner in the work of the roundtable for AI security and ethics. As we’ve heard and as I think we already know, AI is and will rapidly reshape international security dynamics, and the governance frameworks needed to ensure its responsible use urgently require quite robust multi-stakeholder engagement, just as Jingjie outlined. And industry in particular has a critical role to play, obviously as developers and deployers of AI technology, but also I think as proactive stakeholders in establishing norms and standards and safeguards to mitigate risks associated with AI in security contexts. And the roundtable for AI security and ethics has already quite clearly highlighted that while states and international organizations are vital in setting norms and regulations, industry in particular has quite practical contributions to governance, which I think can’t be overstated. So for example, industry actors often are the first to encounter and understand AI risks and vulnerabilities, in part due to their direct involvement in developing and deploying these technologies. That can put industry players in a unique position to provide expertise on technical feasibility, operational impacts, and risk mitigation strategies, which are of course essential for effective governance. And through RAISE, industry stakeholders, including Microsoft, have already identified several key contributions that can be made. Firstly, transparency and accountability. Industry must develop and adhere to clear standards that ensure AI systems used in security applications are transparent in their capabilities and limitations, with accountability mechanisms clearly articulated. And that involves quite robust documentation practices, as well as continuous monitoring and the capability to audit AI systems, which together I think provide greater predictability and trust. Second and relatedly is the topic of due diligence. The Secretary General’s upcoming report and also ongoing UN General Assembly discussions will likely continue to underscore the importance of due diligence, because industry actors have a responsibility to implement robust due diligence processes across the AI lifecycle, from design and development through to deployment and eventual decommissioning. And this aligns closely with lifecycle management approaches already being emphasized by both UNIDIR and the ICRC in its submission to the Secretary General, and others. Third is the topic of proactive collaboration. Industry should actively contribute technical expertise and capacity-building initiatives, particularly in regions where regulatory frameworks are still emerging. Effective governance, of course, requires global equity in knowledge and resources, and so initiatives such as RAISE, but also RE-AIM, the responsible AI in the military domain process, we see them promoting practical and inclusive governance strategies which serve as a strong foundation. And industry collaboration through those platforms can, of course, further amplify these efforts. I think on the topic of reducing disparities and capacity-building and knowledge transfer, industry really does have significant technical expertise and resources that are needed to support governments, civil society, and international organizations, particularly those from the global south, in understanding and assessing and mitigating AI risks.
So strengthening global capacity is really key to ensuring inclusive governance and avoiding exacerbating already existing inequalities in security capabilities. I guess if we look ahead, industry’s engagement should continue to be structured, it should continue to be sustained, and it should, of course, be substantive. And this means participating in and supporting frameworks established through the United Nations and other multilateral venues, as well as initiatives such as RAISE, to collectively shape responsible AI governance and security. And I think that we can ensure that our collective or collaborative efforts lead not only to innovation but also to enhanced global stability, resilience, and trust. I look forward to the discussion.


Yasmin Afina: Thank you very much, Michael, again, for a very comprehensive overview of what you think should be the role of industries in promoting and enhancing responsible AI practices for international peace and security, both as developers but also as deployers. And I do appreciate your remarks as well, your points on the industry needing to be a proactive actor to mitigate the risks and harms that may emerge from these technologies. I also note from your remarks the importance of implementing feasible and effective risk mitigation mechanisms throughout the life cycle of technologies for AI and for international peace and security. And we’re very fortunate to be joined by Dr. Alexi Drew from the International Committee of the Red Cross, who’s been our expert within RAISE, who’s been relentlessly promoting the importance of a life cycle management approach to the governance of AI and security. So now may I please ask Alexi to come to the floor and also share her remarks on this point. Thank you very much, Alexi.


Alexi Drew: Thank you very much, Yasmin. Thank you, Michael, for setting the stage for me. It makes it a lot easier for me to continue my crusade to make life cycle management a feature that everyone is aware of, and to be more aware of the necessity of why it needs to be approached and understood and actually engaged with rather than as a secondary feature. And that secondariness is actually one of the key reasons why life cycle management is critical, because we’ve been talking about governance quite a bit. We’ve been talking about the need to be responsible and ethical in how we design, develop and deploy these systems. But governance is not something that can be added on after the fact. It’s not an afterthought. It needs to be something which is designed to fit in each stage of the life cycle. Now, for the purposes of this discussion, I’m going to break life cycles down into very simple segments. In this case, we’re going to talk about the development stage, the validation stage and the deployment stage. And I thought it would be helpful if I gave you a particular series of risks with hypothetical context where those things are actually producing risks now, so we can understand why governance at each stage is important. So one of these risks, as I and the ICRC see them, is that the trend that we have tried to engage towards, a localisation of aid and assistance, is reversed through the utilization of systems which are by their default and their design not local. So for example at the development stage you might use data which is taken from the global north, train a model which is designed to be deployed in the global south, and it doesn’t reflect the realities. A predictive model, for example, based on this for delivery of humanitarian aid is going to prioritize the delivery of aid to certain groups as opposed to others, based upon the data that has been selected for it, which is not applicable to the local context, and that’s going to effectively create a compounding problem. At the validation stage, localization could also create problems if it’s not properly taken into account. If you test something outside of the local context in which you intend to deploy it, you’re not actually testing for the scenario and circumstance and the context which the thing is going to be used in. So your ability to be sure that it’s delivering as expected is undermined; you’re ignoring the social, economic, political dynamics of the context in which something is going to be ultimately deployed. So our clean test beds, which might be suitable for some circumstances, are not likely to be suitable for those if you try and use the same system in multiple places. At the deployment stage, for example, we might be using aid algorithms that worked in one context but systematically exclude marginalized communities in another. So we might have a refugee processing system which, trained on one population, works perfectly fine but fails catastrophically when applied to a population with slightly different linguistic characteristics, social characteristics, economic needs and requirements. When you take these localized issues across the development stage, the validation and deployment stages, this is a compounding of problems and risks which you can’t then remove by a set of governance which is attached to the end of a life cycle. It’s something which has to be addressed at each of these stages to ensure that these risks are avoided and not compounded.
There’s also the problem of inscrutability. Now inscrutability is almost the opposite of the transparency and explainability that Michael mentioned earlier. But sometimes inscrutability is a design choice that takes place at a certain point or several points in the life cycle. At the development stage, rather than choosing something which is open-source and understood as a model, you might choose a proprietary algorithm which is more niche, more sophisticated, or a complex neural network selected because it seems more appropriate and more complicated, when actually a simpler, more explainable model could do the job; it’s going to introduce inscrutability into the system at the development stage. Further on, at the stage where you’re actually validating or generating a model, you’re then going to create a system which is so complex that not only can the end users, the subjects of the system, not understand decisions being made, but the users themselves may not be able to, particularly if these users haven’t been the designers, they simply purchased the systems from those who were asked to procure it for them. What’s the real world impact of this? Well, it means that humanitarians or aid suppliers on the ground can’t explain to individuals why the decisions are being made as they are. They can’t explain why aid isn’t being delivered to one group while it is to another. They can’t explain why some resources are available in one place or not another. That undermines trust in both the humanitarian sector, but also in the systems being used, which further means that in the long term, this life cycle of redeployment and redesign is going to have less than an effective impact on the very communities and the very peace building that it’s designed to develop. And the final point I’d raise is that life cycles are often, and we use the term cycle, but what do we mean by cyclical? And what does that actually imply for how things are used? Well, the problem is that if you look at a life cycle as a series of stages that are begun at one end and produce a tool at the other, and then perhaps cycle round again, it seems like a conveyor belt. It could be seen and operated on operationally by the designers, procurers, and the ultimate deployers of these systems as a series of check boxes that you move from one stage to the other once certain things have been completed. But what that means is that we have, rather than a series of checks and balances and means of ensuring that these risks are not compounded, a series of things which is simply checked off as complete, without sufficient evidence to that fact, without the ability to understand whether this system is suitable for what it’s being used for. And when that’s then recycled and the requirements might be changed and this tool is deployed in a different context for a different purpose, then we find ourselves further compounding the issues that we saw before.
So what I would like to say, what we need to be aware of and finally take away from this, is that if we are to ensure that these systems are being used in a manner which is humane, ethical and principled, and adding to our security and building peace rather than creating, or recreating rather, the conditions that have led to insecurity, unethical practice and a risk to both civilians, combatants and other already highly impacted and at-risk individuals, we need to ensure that not only do we have a shared understanding of how these tools are made across the different stages of their life cycle, we need to understand and come up with a means of technical, ethical and humanitarian governance which intersects with all of these stages effectively. And I’ll leave it there and look forward to your questions.


Yasmin Afina: Thank you very much, Alexi, for again this very comprehensive overview of why the life cycle management approach to the governance of AI is indeed important. I particularly like the way that you ended the discussions and your remarks by noting that this is a prerequisite to ensure that these technologies indeed will build peace instead of exacerbating the sources of insecurity and instability. So on that note, we have around 20 minutes I would say for an open discussion. I would highly encourage those who are in the room in Oslo but also those who are joining us virtually to ask questions to our panelists, but also, building on the Slido discussions that we had earlier, where we collected your responses on what AI and international peace and security means, but also the role of the multi-stakeholder community, I would encourage you to also take the floor to elaborate a bit more on these answers. But also if you have anything else to add based on, for example, we heard from Alexi the importance of local contexts. How is AI being deployed and used, for example, in your respective regions or states or your organization to build peace and to enhance international peace and security? So on that note, I would like to open the floor now for those who are joining in person and online. For those who are online, I will keep an eye on the Zoom. But for those who are joining in person, I believe there’s a microphone on the side for those who are joining from the floor, or perhaps for those who are joining on the center table, if they would like to take the floor. I think there are microphones in front of you. So on that note, I’m opening the floor now and perhaps give a few seconds as well for you to collect your thoughts or your questions. The gentleman on my left, I think you have a question. Please introduce yourself and share your name, where you’re coming from. And also if you have a question, if it is directed specifically at a speaker, please do so as well. Thank you very much.


Audience: Okay, thank you very much. My name is Francis Alaneme. I’m from the .ng domain name registry. And so it’s just more of a comment. So I know AI is widely used and AI is something that a lot of persons are jumping into and it’s flying everywhere. A lot of content is generated with AI. And I think, so part of the things that… the AI adoption is driving us, is trying to make imaginary things come real, and I think part of the algorithm should look at ways to actually, you know, make AI-generated content have more of a signature, so that, okay, people can actually easily say or can actually identify what is AI-generated and what humans actually generated. You know, when you look at some video contents, you see that a lot of, you know, there are video contents that you see and you think they are real, and, you know, those kinds of contents can be used to pass some kind of false information, can be used to actually instigate some kind of, em, you know, violence in some places, you know, where you see some kind of contents that are actually not, you know, culture friendly, or something that can actually instigate some kind of thoughts in the minds of people. So I think there should be more of, em, you know, that kind of a signature or that kind of a, you know, thing to identify AI-generated content and human content. Thank you.


Yasmin Afina: Thank you very much, sir, for outlining the importance of ensuring some sort of signature, or at least means to verify what is AI-generated and what is not, and perhaps the security implications of not being able to differentiate between the two. I see that we have a hand raised virtually by Bagus Jatmiko, who I know is joining us very late from Indonesia. Perhaps may I ask our IT technicians to


Bagus Jatmiko: Display him on the screen and Bagus, please. You have the floor now. Thank you very much Bagus, if you can please unmute yourself and turn on your camera if you would like to intervene. Okay, can you hear me now? Yeah, we can hear you. Okay, so I don’t know whether. Thank you. Okay, thank you. Sorry. Oh, there you go. Sorry. Sorry for the connection and also the technical issues. So I see very familiar faces in this conference and also would like to bring some concern. I also would like to maybe bring some question to the panelists, also in the way that I’m working in the defense sector. And AI is being used exponentially. And I also talked about this during the ICRC conference virtually last week, if I’m not mistaken. And I bring concern about how AI is being used in a way that some of the commanders or the users within the military domain are unaware of the possibility that AI might be corrupted during the use. Like what we call the emergent misalignment, or there’s also the misalignment with the system itself. And I also would like to bring the concern about the possibility of, maybe it’s not a possibility, this tendency of AI being psychopathic, in a way that it would provide the answers that the users would like to hear or would like to seek. And being in the battlefield, that kind of tendency would be very, in a way, very risky and maybe dangerous, and how they can actually misalign the user or the commander in the battlefield in taking what you may call the decision that might increase the risk for humanity and also for the civilian population. And this goes to my question: how would you all maybe provide the attention and maybe the focus on how the AI is being used, especially in the military domain? This is for all the panelists. And how would you maybe encourage more the use of AI, responsible AI, within the military domain? Because if I relate it to the humanitarian law, somehow in the fog of war, in the condition of uncertainty, mostly commanders would like to see the quick answers provided by AI-DSS. And maybe they just ignore the possibility or the existence of law or humanitarian law in this case. Thank you, Bagus. Perhaps, may I please ask that you also introduce yourself for those who are not familiar with your work and where you’re coming from? Yeah, sorry for not introducing myself. So my name is Commander Bagus Jatmiko, and I’m actually an Indonesian Navy officer. And I’m also a researcher in AI and information warfare, which brings my attention to the use of AI within the military domain and defense sector. Thank you.


Yasmin Afina: Thank you very much, Bagus. I see a gentleman here would like to ask a question or perhaps share some comments, and then I’ll get back to our panelists for some reactions or answers to the questions. Gentleman, please. Good evening, everybody. Allow me to raise a very short question in the beginning. Who


Audience: is responsible for the mitigation of AI risks? This is a very short question for me. Is it the high-tech big companies who are creating AI and developing AI? Because it is not in the hands of the government, especially in the developing countries right now. So let me raise the big issue here. While I’m following the rapid development and advancement of AI, especially in fields which are related to security, I am terrorized, you know, because we are… I am not going to mention or to name any country now, but we can see how AI is being used in current ongoing wars. And the victims behind the use of AI technology in autonomous weapons, for example, how civilians are being killed without accountability. So for this reason, looking from a developing country’s perspective, which has nothing in its hands right now, it is all in the hands of the big tech companies which exist in the powerful countries. So this is my issue here: how we are going to mitigate this risk ourselves. Thank you. May I please ask that you introduce yourself? Sorry, can you please introduce yourself in the microphone, just so we know who you are and where you’re coming from? My name is George Aden Maggett. I am a judge at the Supreme Court of Egypt and I am also an honorary professor of law at Durham University, the UK.


Yasmin Afina: Thank you. Perfect. Thank you very much, sir. So, in the interest of time, I realize that we have 10 to 15 minutes left, so I just want to make sure and check in the room, virtually or in person, if there’s any further questions or comments or remarks for our panelists or to add to the discussions today. If not, I know that, Alexi, you’ve also put in the chat that there is an ongoing project on adding signatures to AI-generated content, the Content Authenticity Initiative, which you might be interested in, and perhaps, Alexi, you’ll be able to elaborate a bit more. Before I give the floor back to our panelists, I do note a question from Rowan Wilkinson from Chatham House. Hi, Rowan. Many policymakers are discussing the importance of AI openness in civilian contexts, including in meeting safety commitments through OSS and community oversight. Does the panel foresee a policy shift around openness in the AI peace and security domain? So, we do have quite a few questions and remarks and also reflections. So, we had a question surrounding AI authenticity and the implications of not knowing what is generated or not, and the destabilizing effects. We had a question from Bagus on the commander and perhaps the human-machine interaction in the battlefield, and perhaps also how do we make sure that the use of AI remains indeed responsible in the hands of the commander, particularly under situations of pressure, such as in the battlefield. We had a question of who should be responsible for the mitigation of risks of AI, particularly in the light of ongoing conflicts today and the… implications for civilians. And finally, we have a question on openness in the AI peace and security domains. So perhaps may I ask, in the interest of time, Jingjie to start us off with three to four minutes. Please feel free to answer any of the questions as you see fit, or add any other element that you would like based on what you’ve heard today. Jingjie, please, you have the floor. Thank you for your questions. So first, the question is from Bagus. I think the first thing we need to do is knowledge sharing, because I assume that in the military, when you deploy an AI


Jingjie He: system, you developed it first as a project and you deploy it. So many times, based on my experience from the civilian field and industries, many times the one who makes the decision whether to use, deploy or complete the project may not always be the one who understands the technologies. So knowledge sharing is very important. Transparency is important. Those people who make the decisions need to understand the technology perspective. And also, the second point I want to make is the importance of incentives. It is very important for the militaries to understand that AI is not only a force multiplier, but also a threat multiplier. It is not only about the risk to civilians. It’s also about increasing risk to your own combatants when you have a poorly designed, unverified AI system with uncertainties, and you cannot be confident about it, and there’s a whole black box. So this kind of incentive is very important. With this understanding, I believe many militaries will be more incentivized to improve their systems. A quick answer for the second question, who’s responsible for AI governance? I think everyone. Like, I’m sure Michael will talk more about this from the industry point of view, but I do sense that everyone is responsible for, you know, raising a voice, being sensitive about the importance of AI governance, you know, incentivizing or promoting a dialogue about AI risks. And the third question about AI openness, I’m actually not sure what the AI openness is, because if you’re talking about openness about the algorithm, I think it’s very difficult, because when we’re in the industry and we go for due diligence or technological scouting, we ask the company, what is it? What’s your core technology? They are likely to tell us it’s their own IP and they will not be able to reveal it. But look, we have a good system and it works perfectly, just believe our results. This is what happens. So if you’re talking about openness about the AI algorithm, I have a huge question mark about the feasibility and possibility of this kind of solution. Thank you.


Yasmin Afina: Thank you, Jingjie He, for your very sharp response, and also for joining us from very far away when it is very late at home. So thank you very much for this. Michael, would you like to intervene now?


Michael Karimian: Thank you, Yasmin. Happy to do so. One flag to share: Zoom keeps telling me that my internet connection is unstable, so if I pause at any moment, that will be the reason why. In answer to the questions: on Francis's question on AI signatures, I appreciate the question. I think one way of thinking about this is, are there specific use cases where we really need AI signatures, and other use cases where we would be comfortable without them? I suspect that is possibly the direction we will go in, but of course the proliferation of AI solutions means that there will always be solutions or actors who circumvent that anyway. That does not downplay the importance of having AI signatures in the first place. To Bagus' question on emergent misalignment and AI-supported decision support systems: Bagus, your question really points to something we have certainly discussed in the context of the roundtable for AI security and ethics, and that is the current challenge of having access to meaningful and trustworthy use cases in order to understand, in effective ways, how AI is actually being used. The academic community, civil society, industry and governments are at the moment relying on a number of examples which partly come from hearsay, or which perhaps are not that reflective of how AI is being used in security domains. But I am hopeful that as AI is further adopted across security domains, transparency around use cases will improve and we will be better able to understand their implications. To the question from our colleague from Egypt and the University of Durham: Jingjie is right, who has responsibility? Everyone. From a human rights perspective, states have the duty to protect, respect and fulfil human rights; industry has a corporate responsibility to respect human rights; and individuals have a right to remedy when their rights have been harmed. Focusing specifically on the role of industry, this means that all companies, under the UN Guiding Principles on Business and Human Rights, have a responsibility to ensure that their products and services are not used in ways that facilitate or contribute to serious human rights abuses. Any engagement with a government, ministry of defence or armed forces, especially in the context of ongoing armed conflict or where there are credible allegations of international law violations, must be subject to rigorous due diligence and clear red lines on misuse, and where risks cannot be mitigated there should be a refusal to provide or maintain support. That is not new; it has been an established position for a number of years now, but of course what matters is implementation. And lastly, to Rowan's question: yes, I would hope so, that we will see more openness. One example of that is the REAIM process I mentioned earlier, the Responsible AI in the Military Domain process, hosted last year in South Korea and, in the next six to twelve months, to be hosted in Spain. So any stakeholders in the audience who are interested in this should certainly keep an eye on the REAIM process.


Yasmin Afina: Thank you very much, Michael, for this, and for underlining the importance of differentiating between principles and actual implementation, as well as the importance of human rights in providing a framework to ensure that everyone is indeed held accountable and that civilians have a right to remedy, including in the context of AI for international peace and security. Finally, Alexi, would you like to offer any concluding remarks and responses to any of the questions raised?


Alexi Drew: Thank you, I'll run through these nice and quickly in the interest of giving people their time back. I'd like to start with the silver lining on signatures and on demarcating authentic content from AI-generated content. As someone who used to work in arms control, it is worth bearing in mind that every time a new threat arises, or a new innovation creates a threat, a counter to it is developed very quickly, and that is just as true for identifying inauthentic or machine-generated content as it has been for any other risk of this type we have seen before. So I'm encouraged to see that it's not just the CAI that exists in this space. There are a number of initiatives, using both technical and non-technical means, that give us the ability, as Michael says, in critical circumstances to identify when content has been machine-generated as opposed to when it has been created and is authentic. On Bagus's question about compliance and the commander, this is actually part of what I was referencing when I was talking about the need to ensure that governance, ethical, legal and economic, is built in at every stage of the design lifecycle. If we take IHL as part of that governance, a system should be designed, trained, tested, authenticated and verified, with its data selected, with the need to comply with IHL in mind. If it isn't, that is when you introduce the risk that something is designed which is either completely incompatible with IHL or open to being used in a non-compliant manner. If you treat the lifecycle properly and incorporate IHL across all of it, rather than only in one section of it, say the assurance stage, or treating it as a checkbox exercise, then you can actually constrain the risks of that going wrong. That being said, there are other components to this. Any user of a system should be trained to understand what it can and cannot do, what it looks like when it fails, what circumstances led to its failures in testing, and what influences its level of accuracy, so they can make informed decisions about how much, and indeed whether, to trust an AI-based tool, be it a decision-making system, strategic or tactical, or a direct weapon system. And in some cases these tools simply should not be used, because it is understood that, however thoroughly IHL has been baked into each part of their design, in the context in which you are seeking to apply them they simply cannot be compliant with IHL. On the subject of trust in these tools, and the finding that some LLMs in particular are very uncritical of their human users: yes, that is a problem. They are not designed to be critical and to push back on their human users; they are designed to be supportive administrative assistants that say yes a lot, and that should be understood as a potential failing, with implications for how a military designs a tool, creates doctrine for it, and then deploys it. Moving quickly on to who owns this, where the responsibility lies: I agree with both previous speakers, Jingjie and Michael.
Everyone owns the responsibility here, and despite the complex ownership structures between the private sector, the public sector, and the Global North and South, even those with seemingly less control have levers they can use, be it taking part in global standard-setting organizations, technical or non-technical, or procurement strategies and procurement standards. If governance, whether grounded in IHL or in ethical, social and economic considerations, is critical, then it should be a condition of procurement from government to suppliers. Then, even if a government does not own the system or the services required to operate it, say it is AI as a service, the system will have been designed to those standards, because meeting them is a legal requirement of the procurement. Finally, on the point about openness, I am going to try to be positive with a bit of negativity. I think we are at a point where innovation is being posed as a solution to our increasing state of insecurity and risk to peace, and it has been posited as a zero-sum game between innovation and security, or insecurity and constraints on innovation. That is not the case. You can in fact have security and innovation with adherence to values.


Yasmin Afina: Thank you very much indeed, Alexi, for ending us on a positive note, and to Jingjie He for adding the point that Chinese social media also has signatures for AI-generated content, which speaks to the importance of collective responsibility in ensuring responsible AI in international peace and security. I also note the importance of incentivization raised by Jingjie He, the importance of human rights as a framework, and of compliance with IHL. On that hopefully positive note, we are ending this workshop. Thank you very much, everyone, for joining us today, either online or in person. Please join me in giving a round of applause to our speakers online. Thank you very much.



Jingjie He

Speech speed

121 words per minute

Speech length

833 words

Speech time

412 seconds

Inclusive engagement across stakeholders is essential for effective global AI governance because technological challenges require interdisciplinary approaches

Explanation

Jingjie He argues that while technological challenges can often be addressed through technological solutions, identifying the true nature of AI challenges requires an interdisciplinary and multi-stakeholder approach. This inclusive approach ensures that a wide range of knowledge, expertise, and perspectives are taken into account in shaping responsible AI policies.


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Michael Karimian
– Yasmin Afina

Agreed on

Multi-stakeholder engagement is essential for effective AI governance


UN-sponsored platforms provide neutral, depoliticized spaces for knowledge-sharing beyond geopolitical constraints

Explanation

She emphasizes that UN-sponsored platforms like UNIDIR’s RAISE and IGF play a critical role in enabling multi-stakeholder engagement. What sets them apart from state-centric mechanisms is their unique ability to provide neutral, depoliticized, and inclusive spaces where knowledge-sharing and confidence-building can take place beyond geopolitical tensions.


Evidence

References to UNIDIR’s RAISE platform, IGF, and Global Digital Compact as examples of such platforms


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Everyone has responsibility for AI governance and raising awareness about AI risks

Explanation

When asked who is responsible for AI governance, Jingjie He responds that everyone has a role to play. She emphasizes the importance of raising voices, being sensitive about AI governance importance, and promoting dialogue about AI risks.


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Michael Karimian
– Alexi Drew

Agreed on

Universal responsibility for AI governance


AI enhances satellite imagery analysis for conflict monitoring, as demonstrated by Amnesty International’s Darfur project

Explanation

Jingjie He provides a concrete example of how AI can foster international peace and security through satellite remote sensing. She explains that AI and machine learning are being applied to enhance analytical efficiency of satellite imagery for monitoring conflicts.


Evidence

Amnesty International’s collaboration with Element AI and 29,000 volunteers to develop tools for automatically analyzing satellite imagery for monitoring conflicts in Darfur


Major discussion point

AI Applications for Peace and Security


Topics

Cybersecurity | Human rights principles


AI can empower international peace, security, and non-proliferation missions through improved analytical capabilities

Explanation

She argues that AI applications in satellite imagery analysis represent just one example of many ways AI can benefit international peace, security, and non-proliferation missions. However, she also acknowledges the challenges that come with these applications.


Evidence

References her previous research showing challenges of adversarial attacks in such systems


Major discussion point

AI Applications for Peace and Security


Topics

Cybersecurity | Human rights principles


Knowledge sharing between technology developers and decision-makers is crucial in military contexts

Explanation

Jingjie He emphasizes that in military deployments of AI systems, the people making decisions about deployment may not always be those who understand the technology. She stresses the importance of transparency and knowledge sharing so decision-makers can understand the technology perspective.


Evidence

References her experience from civilian field and industries where decision-makers often don’t understand the technologies they’re deploying


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


AI serves as both force multiplier and threat multiplier, increasing risks for combatants with poorly designed systems

Explanation

She argues that militaries need to understand that AI is not only a force multiplier but also a threat multiplier. Poorly designed, unverified AI systems with uncertainties create risks not just for civilians but also for the military’s own combatants when they cannot be confident about the system’s performance.


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


Algorithm openness faces feasibility challenges due to intellectual property concerns

Explanation

When discussing AI openness, Jingjie He expresses skepticism about the feasibility of algorithm transparency. She explains that in industry due diligence, companies typically claim their core technology as intellectual property and refuse to reveal algorithms, instead asking clients to trust their results.


Evidence

Her experience in industry technological scouting where companies refuse to reveal their core algorithms, claiming them as IP


Major discussion point

Technical Challenges and Risks


Topics

Legal and regulatory | Intellectual property rights


Disagreed with

– Michael Karimian

Disagreed on

Feasibility of AI algorithm transparency and openness


Adversarial attacks make AI systems more vulnerable and discussions more challenging

Explanation

She acknowledges that there are potential challenges with AI applications in peace and security contexts, specifically mentioning adversarial attacks as a vulnerability that makes AI systems more susceptible to manipulation and makes governance discussions more complex.


Evidence

References her previous research on adversarial attacks in AI systems


Major discussion point

Technical Challenges and Risks


Topics

Cybersecurity | Network security



Michael Karimian

Speech speed

149 words per minute

Speech length

1198 words

Speech time

480 seconds

Industry has critical role as developers and deployers, plus proactive stakeholders in establishing norms and safeguards

Explanation

Michael Karimian argues that industry has a critical role not just as developers and deployers of AI technology, but also as proactive stakeholders in establishing norms, standards, and safeguards to mitigate risks associated with AI in security contexts. He emphasizes that industry’s practical contributions to governance cannot be overstated.


Evidence

References the roundtable for AI security and ethics (RAISE) which has highlighted industry’s practical contributions to governance


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jingjie He
– Yasmin Afina

Agreed on

Multi-stakeholder engagement is essential for effective AI governance


Industry actors are first to encounter AI risks due to direct involvement in development and deployment

Explanation

He argues that industry actors are often the first to encounter and understand AI risks and vulnerabilities because of their direct involvement in developing and deploying these technologies. This puts industry players in a unique position to provide expertise on technical feasibility, operational impacts, and risk mitigation strategies.


Major discussion point

Industry Responsibility and Due Diligence


Topics

Legal and regulatory | Human rights principles


Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms

Explanation

Karimian emphasizes that industry must develop and adhere to clear standards that ensure AI systems used in security applications are transparent in their capabilities and limitations, with clearly articulated accountability mechanisms. This involves robust documentation practices, continuous monitoring, and the capability to audit AI systems.


Major discussion point

Industry Responsibility and Due Diligence


Topics

Legal and regulatory | Human rights principles


Agreed with

– Alexi Drew

Agreed on

Lifecycle approach is crucial for AI governance


Disagreed with

– Jingjie He

Disagreed on

Feasibility of AI algorithm transparency and openness


Companies have responsibility under UN guiding principles to ensure products aren’t used for human rights abuses

Explanation

He explains that under the UN guiding principles on business and human rights, all companies have a responsibility to ensure their products and services are not used to facilitate or contribute to serious human rights abuses. This means engagement with governments or armed forces, especially in conflict contexts, must be subject to rigorous due diligence and clear red lines on misuse.


Evidence

References UN guiding principles on business and human rights as established framework


Major discussion point

Industry Responsibility and Due Diligence


Topics

Human rights principles | Legal and regulatory


Agreed with

– Jingjie He
– Alexi Drew

Agreed on

Universal responsibility for AI governance


AI signatures may be needed for specific critical use cases rather than universal application

Explanation

In response to questions about AI content signatures, Karimian suggests thinking about whether there are specific use cases where AI signatures are really needed versus other use cases where they might not be necessary. He acknowledges that the proliferation of AI solutions means there will always be actors who would circumvent such measures.


Major discussion point

Content Authenticity and Misinformation


Topics

Legal and regulatory | Content policy


Agreed with

– Alexi Drew
– Francis Alaneme (Audience)

Agreed on

Need for technical solutions to AI content authenticity challenges



Alexi Drew

Speech speed

182 words per minute

Speech length

2065 words

Speech time

680 seconds

All stakeholders have levers they can use, including participation in standard-setting organizations and procurement strategies

Explanation

Alexi Drew argues that despite complex ownership structures between private and public sectors and between global north and south, everyone has levers they can use. These include participating in globalized standard-setting organizations and using procurement strategies as governance tools.


Evidence

Suggests that if governance is critical, it should be a condition of procurement from government to suppliers


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jingjie He
– Michael Karimian

Agreed on

Universal responsibility for AI governance


Governance cannot be added as afterthought but must be designed to fit each stage of the lifecycle

Explanation

Drew emphasizes that governance is not something that can be added after the fact as an afterthought. Instead, it needs to be something designed to fit into each stage of the AI system lifecycle, from development through validation to deployment.


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles


Agreed with

– Michael Karimian

Agreed on

Lifecycle approach is crucial for AI governance


Development, validation, and deployment stages each present unique risks that compound if not properly addressed

Explanation

She provides detailed examples of how localization issues can create compounding problems across the AI lifecycle. For instance, using Global North data to train models for Global South deployment, testing outside local contexts, and deploying systems that systematically exclude marginalized communities.


Evidence

Specific examples include refugee processing systems trained on one population failing when applied to populations with different linguistic or social characteristics, and aid algorithms that exclude marginalized communities


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles | Development


Systems should be designed, trained, and tested with compliance requirements like IHL built in from the start

Explanation

Drew argues that if International Humanitarian Law (IHL) compliance is required, AI systems should be designed, trained, tested, authenticated and verified with IHL compliance in mind from the beginning. This prevents systems from being designed that are incompatible with IHL or open to non-compliant use.


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles


Lifecycle approach prevents treating governance as checkbox exercise rather than integrated process

Explanation

She warns against treating the AI lifecycle as a conveyor belt or series of checkboxes to be completed. Instead, she advocates for understanding lifecycles as requiring checks, balances, and means of ensuring risks are not compounded throughout the process.


Major discussion point

Lifecycle Management Approach


Topics

Legal and regulatory | Human rights principles


Innovation can coexist with security and adherence to values, not a zero-sum game

Explanation

Drew concludes on a positive note, arguing against the false premise that innovation and security are in a zero-sum relationship. She contends that you can have both security and innovation while maintaining adherence to values, rejecting the notion that innovation must come at the expense of security or ethical constraints.


Major discussion point

AI Applications for Peace and Security


Topics

Legal and regulatory | Human rights principles


Military users need training to understand AI system capabilities, limitations, and failure modes

Explanation

Drew emphasizes that any user of an AI system should be trained to understand what the system can and cannot do, what failure looks like, what circumstances have led to failures in testing, and what influences accuracy levels. This enables informed decisions about how much to trust AI-based tools.


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


Counter-innovations quickly develop against new threats, including tools for identifying machine-generated content

Explanation

Drawing from her arms control background, Drew notes that every time a new threat arises or innovation creates a threat, counters are quickly developed. She applies this principle to AI-generated content, expressing encouragement that multiple initiatives exist to identify inauthentic or machine-generated content.


Evidence

References the Content Authenticity Initiative (CAI) and notes there are multiple technical and non-technical initiatives in this space


Major discussion point

Content Authenticity and Misinformation


Topics

Cybersecurity | Content policy


Agreed with

– Michael Karimian
– Francis Alaneme (Audience)

Agreed on

Need for technical solutions to AI content authenticity challenges



Bagus Jatmiko

Speech speed

130 words per minute

Speech length

477 words

Speech time

219 seconds

AI systems in military face risks of emergent misalignment and tendency to provide answers users want to hear

Explanation

Commander Bagus Jatmiko, working in the defense sector, raises concerns about AI being used exponentially in military contexts where commanders may be unaware that AI might be corrupted during use through emergent misalignment. He also notes the tendency of AI to be ‘psychopathic’ in providing the answers users want to hear rather than accurate assessments.


Evidence

His experience working in the defense sector and AI/information warfare research


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Legal and regulatory


Commanders may ignore humanitarian law when seeking quick AI-generated answers in fog of war

Explanation

Jatmiko expresses concern that in battlefield conditions of uncertainty and the ‘fog of war,’ commanders seeking quick answers from AI decision support systems may overlook or disregard humanitarian law. This creates risks for humanity and civilian populations.


Major discussion point

Military AI and Human-Machine Interaction


Topics

Cybersecurity | Human rights principles



Audience

Speech speed

138 words per minute

Speech length

490 words

Speech time

211 seconds

AI-generated content needs signatures for identification to prevent false information and violence instigation

Explanation

Francis Alaneme from the .ng domain registry argues that AI adoption is making imaginary things seem real, and AI-generated content should have signatures so people can easily identify what is AI-generated versus human-generated. He warns that realistic AI-generated video content can be used to pass false information and instigate violence in some places.


Evidence

Examples of video content that appears real but could be culturally inappropriate or violence-instigating


Major discussion point

Content Authenticity and Misinformation


Topics

Content policy | Cybersecurity


Agreed with

– Michael Karimian
– Alexi Drew
– Francis Alaneme (Audience)

Agreed on

Need for technical solutions to AI content authenticity challenges


Big tech companies in powerful countries hold significant control while developing countries have limited influence

Explanation

Judge George Aden Maggett from Egypt’s Supreme Court raises concerns about the power imbalance in AI development and deployment. He argues that big tech companies in powerful countries control AI development while developing countries have nothing in their hands, leading to situations where AI is used in autonomous weapons killing civilians without accountability.


Evidence

References current ongoing wars where AI is being used in autonomous weapons with civilian casualties


Major discussion point

Industry Responsibility and Due Diligence


Topics

Human rights principles | Legal and regulatory | Development



Yasmin Afina

Speech speed

150 words per minute

Speech length

3381 words

Speech time

1344 seconds

Multi-stakeholder engagement is essential for AI governance to bridge divides and overcome competitiveness and distrust

Explanation

Yasmin Afina emphasizes that UNIDIR’s approach brings together experts from diverse countries including China, Russia, US, UK, but also Namibia, Ecuador, Kenya, and India to bridge divides and facilitate conversation where there is none on AI and security issues. The goal is to overcome competitiveness and distrust through inclusive dialogue.


Evidence

UNIDIR’s RAISE initiative bringing together experts from China, Russia, the United States, the United Kingdom, Namibia, Ecuador, Kenya and India


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Agreed with

– Jingjie He
– Michael Karimian

Agreed on

Multi-stakeholder engagement is essential for effective AI governance


AI governance requires both bottom-up and top-down approaches to ensure public trust and legitimacy

Explanation

Afina argues that discussions on AI and security should not be one-way but should incorporate both bottom-up and top-down approaches. This dual approach is necessary to warrant public trust and legitimacy in AI governance processes.


Evidence

UNIDIR’s platform design for open, inclusive, and meaningful dialogue


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Cross-disciplinary literacy improvement is crucial for AI governance in security contexts

Explanation

Afina emphasizes the importance of improving cross-disciplinary literacy as part of multi-stakeholder engagement on AI and security issues. This reflects the complex nature of AI challenges that require understanding across different fields and disciplines.


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Interdisciplinary approaches


AI governance should disrupt monopolies and ensure all voices from all layers of society are heard

Explanation

Afina advocates for using platforms like RAISE to disrupt monopolies in the hands of the few and ensure that all voices are heard from all layers of society. This reflects a commitment to democratizing AI governance rather than leaving it to a select few powerful actors.


Evidence

RAISE platform design and objectives


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Human rights principles


Voluntary funded institutes face dire fundraising situations that threaten dialogue facilitation

Explanation

Afina acknowledges the difficulty that the UN and UNIDIR face in fundraising, noting the dire situation they face today in enabling such dialogue. As a voluntarily funded institute, UNIDIR relies on voluntary contributions, which creates sustainability challenges for important governance initiatives.


Evidence

UNIDIR’s status as voluntary funded institute relying on voluntary contributions


Major discussion point

Multi-stakeholder Engagement in AI Governance


Topics

Legal and regulatory | Development


AI’s unique nature requires multi-stakeholder perspectives for understanding implications on international peace and security

Explanation

Afina argues that due to AI technology’s highly unique nature, UNIDIR quickly understood the importance of multi-stakeholder engagement and perspectives to obtain input on AI’s implications for international peace and security. This recognition led to the establishment of platforms for inclusive dialogue.


Evidence

UNIDIR’s establishment of multi-stakeholder platforms and the RAISE initiative


Major discussion point

AI Applications for Peace and Security


Topics

Legal and regulatory | Human rights principles


Agreements

Agreement points

Universal responsibility for AI governance

Speakers

– Jingjie He
– Michael Karimian
– Alexi Drew

Arguments

Everyone has responsibility for AI governance and raising awareness about AI risks


Companies have responsibility under UN guiding principles to ensure products aren’t used for human rights abuses


All stakeholders have levers they can use, including participation in standard-setting organizations and procurement strategies


Summary

All three main speakers agree that responsibility for AI governance is shared across all stakeholders – governments, industry, civil society, and individuals – rather than being concentrated in any single entity.


Topics

Legal and regulatory | Human rights principles


Multi-stakeholder engagement is essential for effective AI governance

Speakers

– Jingjie He
– Michael Karimian
– Yasmin Afina

Arguments

Inclusive engagement across stakeholders is essential for effective global AI governance because technological challenges require interdisciplinary approaches


Industry has critical role as developers and deployers, plus proactive stakeholders in establishing norms and safeguards


Multi-stakeholder engagement is essential for AI governance to bridge divides and overcome competitiveness and distrust


Summary

There is strong consensus that effective AI governance requires inclusive participation from diverse stakeholders, bringing together different perspectives, expertise, and capabilities to address complex technological challenges.


Topics

Legal and regulatory | Human rights principles


Lifecycle approach is crucial for AI governance

Speakers

– Michael Karimian
– Alexi Drew

Arguments

Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms


Governance cannot be added as afterthought but must be designed to fit each stage of the lifecycle


Summary

Both speakers emphasize that governance considerations must be integrated throughout the entire AI system lifecycle, from development through deployment, rather than being treated as an add-on or afterthought.


Topics

Legal and regulatory | Human rights principles


Need for technical solutions to AI content authenticity challenges

Speakers

– Michael Karimian
– Alexi Drew
– Francis Alaneme (Audience)

Arguments

AI signatures may be needed for specific critical use cases rather than universal application


Counter-innovations quickly develop against new threats, including tools for identifying machine-generated content


AI-generated content needs signatures for identification to prevent false information and violence instigation


Summary

There is agreement that technical solutions are needed to address AI-generated content authenticity, though with recognition that implementation may vary by use case and that counter-measures are rapidly developing.


Topics

Content policy | Cybersecurity


Similar viewpoints

Both speakers emphasize the critical importance of knowledge transfer and transparency between those who develop AI technologies and those who make decisions about their deployment, particularly in security contexts.

Speakers

– Jingjie He
– Michael Karimian

Arguments

Knowledge sharing between technology developers and decision-makers is crucial in military contexts


Industry actors are first to encounter AI risks due to direct involvement in development and deployment


Topics

Legal and regulatory | Cybersecurity


Both speakers highlight the critical need for military personnel to understand AI system limitations and potential failure modes to make informed decisions about trust and deployment in security contexts.

Speakers

– Alexi Drew
– Bagus Jatmiko

Arguments

Military users need training to understand AI system capabilities, limitations, and failure modes


AI systems in military face risks of emergent misalignment and tendency to provide answers users want to hear


Topics

Cybersecurity | Legal and regulatory


Both speakers maintain an optimistic view that AI can be a positive force for peace and security when properly governed, rejecting the notion that innovation must come at the expense of security or ethical considerations.

Speakers

– Jingjie He
– Alexi Drew

Arguments

AI can empower international peace, security, and non-proliferation missions through improved analytical capabilities


Innovation can coexist with security and adherence to values, not a zero-sum game


Topics

Legal and regulatory | Human rights principles


Unexpected consensus

Global South representation and power imbalances

Speakers

– Yasmin Afina
– George Aden Maggett (Audience)
– Alexi Drew

Arguments

AI governance should disrupt monopolies and ensure all voices from all layers of society are heard


Big tech companies in powerful countries hold significant control while developing countries have limited influence


All stakeholders have levers they can use, including participation in standard-setting organizations and procurement strategies


Explanation

Unexpectedly, there was strong consensus across speakers from different sectors (UN, judiciary, ICRC) about the need to address power imbalances between Global North tech companies and Global South stakeholders, with practical suggestions for how developing countries can exercise influence through procurement and standards participation.


Topics

Legal and regulatory | Human rights principles | Development


Limitations of algorithm transparency

Speakers

– Jingjie He
– Michael Karimian

Arguments

Algorithm openness faces feasibility challenges due to intellectual property concerns


AI signatures may be needed for specific critical use cases rather than universal application


Explanation

Both academic and industry perspectives unexpectedly converged on the practical limitations of full algorithmic transparency, acknowledging intellectual property constraints while still supporting targeted transparency measures for critical applications.


Topics

Legal and regulatory | Intellectual property rights


Overall assessment

Summary

The discussion revealed remarkably high consensus among speakers on fundamental principles of AI governance, including shared responsibility, multi-stakeholder engagement, lifecycle management, and the need for technical solutions to content authenticity. There was also unexpected agreement on addressing Global South representation and practical limitations of algorithmic transparency.


Consensus level

High level of consensus with significant implications for AI governance frameworks. The agreement across diverse stakeholders (academic, industry, humanitarian, military, judicial) suggests these principles have broad legitimacy and could form the foundation for effective global AI governance mechanisms. The consensus on shared responsibility and multi-stakeholder approaches particularly validates current UN and multilateral efforts in this space.


Differences

Different viewpoints

Feasibility of AI algorithm transparency and openness

Speakers

– Jingjie He
– Michael Karimian

Arguments

Algorithm openness faces feasibility challenges due to intellectual property concerns


Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms


Summary

Jingjie He expresses strong skepticism about algorithm transparency due to IP concerns and industry practices of protecting core technology, while Michael Karimian advocates for transparency standards and accountability mechanisms in AI systems used in security applications.


Topics

Legal and regulatory | Intellectual property rights


Unexpected differences

Practical implementation of AI transparency in security contexts

Speakers

– Jingjie He
– Michael Karimian

Arguments

Algorithm openness faces feasibility challenges due to intellectual property concerns


Industry must develop clear standards ensuring AI systems are transparent with accountability mechanisms


Explanation

This disagreement is unexpected because both speakers are advocates for responsible AI governance, yet they have fundamentally different views on whether transparency is achievable. Jingjie He’s practical industry experience leads her to question feasibility, while Michael Karimian’s industry perspective emphasizes the necessity and possibility of transparency standards.


Topics

Legal and regulatory | Intellectual property rights


Overall assessment

Summary

The discussion shows remarkably high consensus among speakers on fundamental principles of AI governance, with only one significant disagreement on algorithm transparency feasibility. Most differences are about emphasis and approach rather than fundamental disagreement.


Disagreement level

Low level of disagreement with high implications – the transparency debate touches on core tensions between security, commercial interests, and accountability that are central to AI governance in security contexts. The consensus on multi-stakeholder responsibility suggests strong foundation for collaborative approaches, but the transparency disagreement highlights practical implementation challenges that could impede progress.




Takeaways

Key takeaways

Multi-stakeholder engagement is essential for effective AI governance in security contexts, requiring inclusive participation from governments, industry, civil society, and international organizations


Industry has a critical responsibility as both developers and deployers of AI technology, with obligations under UN guiding principles to prevent human rights abuses


Lifecycle management approach is crucial – governance must be integrated at development, validation, and deployment stages rather than added as an afterthought


AI serves as both a force multiplier and threat multiplier in military contexts, requiring careful consideration of risks to both civilians and combatants


Everyone shares responsibility for AI governance, though different stakeholders have different levers of influence including procurement standards and participation in standard-setting organizations


AI has positive applications for peace and security, such as enhancing satellite imagery analysis for conflict monitoring and humanitarian purposes


Content authenticity and AI signature identification are important for preventing misinformation and violence instigation


Knowledge sharing between technology developers and decision-makers is crucial, especially in military contexts where commanders may not fully understand AI system limitations


Resolutions and action items

Continue supporting and participating in platforms such as UNIDIR’s RAISE and the REAIM process for responsible AI in the military domain


Implement robust due diligence processes across the AI lifecycle from design through deployment and decommissioning


Develop clear standards ensuring AI systems used in security applications are transparent with accountability mechanisms


Provide training for military users to understand AI system capabilities, limitations, and failure modes


Integrate compliance requirements like International Humanitarian Law (IHL) into each stage of AI system development rather than treating it as a checkbox exercise


Support capacity-building initiatives particularly in regions where regulatory frameworks are still emerging


Unresolved issues

Funding sustainability for UN-sponsored AI governance platforms and multi-stakeholder initiatives


Technical feasibility of requiring algorithm openness due to intellectual property concerns


Power imbalance between big tech companies in developed countries and developing nations with limited influence over AI governance


Lack of meaningful and trustworthy use cases to understand how AI is actually being used in security domains


How to effectively implement AI signatures universally versus only for specific critical use cases


Addressing emergent misalignment and AI systems’ tendency to provide answers users want to hear rather than critical assessment


Ensuring compliance with humanitarian law in high-pressure battlefield situations where commanders seek quick AI-generated answers


Suggested compromises

Focus AI signature requirements on specific critical use cases rather than universal application across all AI-generated content


Balance innovation with security through integrated governance approaches rather than viewing them as zero-sum trade-offs


Combine technical and non-technical means for identifying machine-generated content rather than relying solely on one approach


Use procurement standards as leverage for governance compliance even when governments don’t own the AI systems or services


Develop counter-innovations and defensive measures alongside AI advancement to address emerging threats


Thought provoking comments

AI is not only a force multiplier, but also a threat multiplier. It is not only about the risk to civilians; it is also about the increased risk to your own combatants when you have a poorly designed, unverified AI system full of uncertainties that you cannot be confident about, a whole black box.

Speaker

Jingjie He


Reason

This comment reframes the AI security discussion by highlighting that AI risks aren’t just external threats to civilians, but internal risks to military forces themselves. The ‘threat multiplier’ concept introduces a crucial dual perspective that challenges the common narrative of AI as purely advantageous in military contexts.


Impact

This shifted the conversation from viewing AI governance as primarily about protecting others to recognizing it as essential for protecting one’s own forces. It provided a strategic incentive framework that could motivate military adoption of responsible AI practices based on self-interest rather than just ethical obligations.


Governance is not something that can be added on after the fact. It’s not an afterthought. It needs to be something which is designed to fit in each stage of the life cycle… we have a series of things which is simply checked off as complete without sufficient evidence to the fact without the ability to understand is this system suitable for what it’s being used for.

Speaker

Alexi Drew


Reason

This fundamentally challenges the conventional approach to AI governance by arguing against treating it as a compliance checklist. It introduces the critical insight that governance must be embedded throughout the development process, not retrofitted, and warns against the dangerous illusion of safety through checkbox exercises.


Impact

This comment elevated the technical discussion to a more sophisticated understanding of systemic governance challenges. It influenced subsequent speakers to address implementation gaps and moved the conversation from ‘what should be done’ to ‘how governance actually fails in practice’ and why current approaches are insufficient.


Who is responsible for the mitigation of AI risks? Is it high tech big companies who are creating AI and developing AI? Because it is not in the hand of the government, especially in the developing countries right now… we can see how AI is being used in current ongoing wars. And the victims behind the use of AI technology in autonomous weapon, for example, how civilians are being killed without accountability.

Speaker

George Aden Maggett (Egyptian Supreme Court Judge)


Reason

This comment powerfully highlighted the global power imbalance in AI governance and connected abstract policy discussions to real-world consequences. Coming from a judicial perspective from the Global South, it brought urgent moral clarity about accountability gaps and the disconnect between those who develop AI and those who suffer its consequences.


Impact

This intervention fundamentally shifted the tone from technical optimization to urgent ethical accountability. It forced all subsequent speakers to address the responsibility question directly and grounded the abstract governance discussion in current conflict realities. It also highlighted the Global South perspective that had been somewhat absent from the technical discussions.


I bring concern about how AI is being used in a way that some of the commander or the user within the military domain is unaware of the possibility that AI might be corrupted during the use… And I also would like to bring the concern about the possibility of… AI being psychopath in a way that… would provide the answers that the users would like to seek. And being in the battlefield, that kind of tendency would be very, in a way, very risky and maybe dangerous.

Speaker

Commander Bagus Jatmiko (Indonesian Navy)


Reason

This comment introduced the critical concept of AI systems potentially being designed to tell users what they want to hear rather than what they need to know, especially dangerous in high-stakes military decisions. The ‘psychopath’ characterization, while provocative, highlighted how AI systems lack genuine critical thinking and may enable confirmation bias in life-or-death situations.


Impact

This shifted the discussion from technical reliability to psychological and cognitive risks in human-AI interaction. It introduced the concept of AI as potentially manipulative rather than just unreliable, adding a new dimension to the governance challenge that subsequent speakers had to address in their responses about training and system design.


You can in fact have security and innovation with adherence to values… innovation is being posed as a solution to our increasing state of insecurity and a risk to peace. And it’s been posited as a zero-sum game between innovation and security or insecurity and constraint on innovation. That is not the case.

Speaker

Alexi Drew


Reason

This comment directly challenged the false dichotomy often presented in AI policy discussions – that we must choose between innovation and safety/ethics. It reframed the entire governance challenge as a design problem rather than a trade-off, suggesting that responsible development is not inherently constraining but rather a different approach to innovation.


Impact

This provided a positive, solution-oriented conclusion that synthesized the various concerns raised throughout the discussion. It shifted the final tone from problem-focused to possibility-focused, suggesting that the governance challenges discussed were solvable through better design rather than fundamental limitations on AI development.


Overall assessment

These key comments transformed what could have been a technical policy discussion into a nuanced exploration of power, accountability, and practical implementation challenges. The progression moved from technical considerations (lifecycle management, signatures) to strategic reframing (threat multiplier concept), to urgent moral questions (Global South accountability concerns), to psychological risks (AI manipulation), and finally to a synthesis that rejected false trade-offs. The most impactful comments came from practitioners with direct experience (military officer, judge) who grounded abstract governance concepts in real-world consequences. This created a discussion that was both technically informed and ethically urgent, with each major intervention building complexity and shifting the conversation toward more fundamental questions about power, responsibility, and the human costs of AI deployment in security contexts.


Follow-up questions

How can we make multi-stakeholder AI governance platforms like RAISE more sustainable and address funding challenges?

Speaker

Jingjie He


Explanation

She noted that platforms like RAISE face funding issues and sustainability concerns, emphasizing this should be a collective effort requiring more resources and contributions from all stakeholders.


How can we better address adversarial attacks on AI systems used for peace and security monitoring?

Speaker

Jingjie He


Explanation

She mentioned that adversarial attacks pose challenges to AI systems used in satellite imagery analysis for conflict monitoring, making discussions more complex and requiring further research.


What specific technical standards and accountability mechanisms should be developed for AI systems in security applications?

Speaker

Michael Karimian


Explanation

He emphasized the need for clear standards ensuring transparency in AI capabilities and limitations, with robust documentation, monitoring, and auditing capabilities.


How can we develop more effective technical, ethical, and humanitarian governance that intersects with all stages of the AI lifecycle?

Speaker

Alexi Drew


Explanation

She stressed the need for governance mechanisms that work across development, validation, and deployment stages rather than being added as an afterthought.


How can AI-generated content be reliably identified and distinguished from human-generated content to prevent misinformation and violence?

Speaker

Francis Alaneme


Explanation

He raised concerns about AI-generated video content being used to spread false information and instigate violence, emphasizing the need for signature systems to identify AI-generated content.


How can we address emergent misalignment and the risk of AI systems becoming ‘psychopathic’ in military decision-making contexts?

Speaker

Commander Bagus Jatmiko


Explanation

He expressed concern about AI systems potentially being corrupted or misaligned during use in battlefield conditions, and the tendency of AI to provide answers users want to hear rather than accurate assessments.


Who should be held responsible for mitigating AI risks, particularly when big tech companies from powerful countries control the technology while developing countries bear the consequences?

Speaker

Judge George Aden Maggett


Explanation

He raised concerns about accountability for AI-related civilian casualties in current conflicts and the power imbalance between tech companies in developed countries and affected populations in developing countries.


Will there be a policy shift toward greater AI openness in peace and security domains, similar to civilian contexts?

Speaker

Rowan Wilkinson


Explanation

The question explores whether open-source approaches and community oversight models used in civilian AI safety could be applied to AI systems used for peace and security purposes.


How can we improve access to meaningful and trustworthy use cases to better understand how AI is actually being used in security domains?

Speaker

Michael Karimian


Explanation

He noted that the academic community, civil society, industry, and governments currently rely on limited examples that may not be reflective of actual AI use in security contexts.


How can procurement standards be used as a lever to ensure AI systems comply with international humanitarian law and ethical standards?

Speaker

Alexi Drew


Explanation

She suggested that even countries without direct control over AI development could use procurement conditions to enforce governance standards, requiring further exploration of implementation mechanisms.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.