How AI Is Transforming Diplomacy and Conflict Management
20 Feb 2026 13:00h - 14:00h
Session at a glance
Summary
This discussion centered on the MOVE 37 initiative, a new project by Harvard’s Belfer Center exploring how artificial intelligence can augment human capabilities in diplomatic negotiations and international affairs. J. Michael McQuade introduced the project, explaining that it aims to develop AI tools to support diplomats in managing the complex, multi-stakeholder negotiations that characterize modern international relations, while maintaining human authority and responsibility in decision-making.
The project team, represented by Slavina Ancheva and Charlie Posniak, outlined the complexity of diplomatic negotiations, which involve multiple countries, vast amounts of documentation, time pressures, and strategic considerations that extend far beyond simple bilateral exchanges. They emphasized that their approach goes beyond large language models to incorporate game theory, decision analysis, and machine learning techniques, breaking down diplomatic tasks into research, analysis, strategizing, and execution phases that AI could enhance.
Three expert panelists shared perspectives from their extensive negotiation experience. Gabriela Ramos, former UNESCO official, discussed her experience negotiating AI ethics standards among 193 countries, highlighting how AI could help map country positions and provide strategic insights about negotiating partners. Nandita Balakrishnan from the Special Competitive Studies Project emphasized the importance of explainable AI outputs and the need for public sector AI literacy, noting that intelligence work increasingly supports diplomatic efforts. Robyn Scott from Apolitical warned about the risks of “sleeping at the wheel” with AI tools and stressed the importance of keeping humans “above the algorithm” rather than being controlled by it.
Key concerns raised included ensuring cultural diversity in AI training data, maintaining human agency, avoiding over-reliance on AI recommendations, and addressing potential power imbalances when different countries have varying access to AI capabilities. The discussion concluded with an invitation for broader collaboration as this interdisciplinary project develops tools and guidelines for responsible AI use in diplomacy.
Keypoints
Major Discussion Points:
– Introduction of the MOVE 37 Initiative: The Belfer Center’s Emerging Tech Program launched a new project to explore how artificial intelligence can augment human capabilities in diplomatic negotiations and international relations, emphasizing that AI should support rather than replace human decision-makers in these complex, high-stakes processes.
– Complexity of Modern Diplomatic Negotiations: Panelists highlighted how contemporary negotiations involve multiple stakeholders, vast amounts of information, time pressures, and cultural considerations that go far beyond simple bilateral discussions – creating opportunities for AI to help manage this complexity through better information processing and strategic analysis.
– AI Applications Beyond Language Models: The discussion emphasized that while Large Language Models (LLMs) are useful, effective diplomatic AI requires a broader toolkit including game theory, decision analysis, predictive modeling, and transparent, modular systems that can handle strategic misrepresentation and changing institutional frameworks.
– Human Agency and Responsible Implementation: A central theme was maintaining human authority and accountability while leveraging AI tools, with concerns about “sleeping at the wheel” when AI becomes too trusted, the need for explainable outputs, and ensuring diverse cultural perspectives are represented in AI systems used for international negotiations.
– Practical Challenges and Readiness Gaps: The panel revealed significant gaps between AI enthusiasm and actual implementation in the public sector, including lack of evaluation frameworks for AI pilots, insufficient understanding of ethical guidelines among implementers, and the critical need for hands-on experience with AI tools among decision-makers.
Overall Purpose:
The discussion aimed to introduce the MOVE 37 project and gather expert insights on how artificial intelligence can responsibly augment diplomatic negotiations and international policy-making. The session sought to bridge technical AI capabilities with real-world diplomatic experience, exploring both opportunities and risks while establishing principles for human-centered AI implementation in high-stakes international relations.
Overall Tone:
The discussion maintained a consistently thoughtful and cautiously optimistic tone throughout. Participants demonstrated genuine enthusiasm for AI’s potential while expressing well-founded concerns about implementation challenges. The conversation was collaborative and constructive, with panelists building on each other’s insights rather than debating opposing viewpoints. The tone remained professional and forward-looking, emphasizing the importance of careful, responsible development rather than rushing to deploy AI tools in diplomatic contexts.
Speakers
Speakers from the provided list:
– J. Michael McQuade – Director of the Emerging Tech Program at the Belfer Center, Harvard Kennedy School, and a longtime figure in United States international policy
– Slavina Ancheva – Research fellow and MPP student at the Belfer Center, working on the MOVE 37 initiative
– Charlie Posniak – Full-time research fellow with the Emerging Tech Program at the Belfer Center
– Gabriela Ramos – Former Assistant Director General for Social and Human Sciences at UNESCO, experienced in international negotiations on climate change, international taxation, gender equality, and artificial intelligence through roles at G20, G7, and OECD
– Nandita Balakrishnan – Director of Intelligence at the Special Competitive Studies Project in Washington, with career spanning academia (Stanford), public sector intelligence, and private sector
– Robyn Scott – CEO and co-founder of Apolitical, a global platform for policymakers specializing in government innovation
– Audience – Multiple audience members who asked questions during the Q&A session
Additional speakers:
– Sam Dawes – Senior advisor to the Oxford University AI Governance Initiative and director of multilateral AI, former diplomat who worked for Kofi Annan and the Foreign Office
– Devika Rao – Indian classical dance teacher working on cultural frameworks between India and UK
– Arman – Works for JPL South Asia
Full session report
This discussion introduced the MOVE 37 initiative, a project launched by Harvard’s Belfer Center to explore how artificial intelligence can responsibly augment human capabilities in diplomatic negotiations and international affairs. The session brought together leading practitioners, researchers, and policy experts to examine both the potential and challenges of integrating AI tools into international diplomacy.
Project Introduction and Vision
J. Michael McQuade, who leads the Belfer Center’s Emerging Tech Program, introduced MOVE 37 as part of a broader effort to understand where emerging technologies create new policy frontiers and opportunities for engagement in global governance. The project’s core premise is that artificial intelligence will inevitably play an increasingly important role in shaping relationships between states, but this integration must be approached with careful consideration of how AI tools can augment rather than replace the fundamentally human process of diplomatic negotiation.
McQuade acknowledged Carme Artigas, Spain-India Ambassador for AI, who couldn’t attend but was integral to the project’s development. The initiative represents a collaborative effort to bridge the gap between AI technical capabilities and diplomatic practice.
The Complexity of Modern Diplomatic Negotiations
The project team, represented by researchers Slavina Ancheva and Charlie Posniak, provided analysis of why contemporary diplomatic negotiations present complex challenges that AI augmentation could meaningfully address. Slavina illustrated this complexity through the example of EU AI Act negotiations, where a single negotiator might face seven counterparts from different political groups whilst representing the interests of 27 member states—all whilst managing thousands of documents, transcripts, and drafts under intense time pressure.
This complexity extends beyond simple bilateral exchanges. Modern negotiations involve multiple stakeholders with varying interests, cultural perspectives, and strategic objectives. The information management burden alone can be overwhelming, with negotiators needing to track evolving positions, understand historical precedents, and identify potential leverage points across numerous interconnected issues.
The researchers emphasized that this complexity creates several specific challenges: information overload that exceeds human processing capabilities, resource constraints that limit thorough analysis, strategic elements including groupthink behaviors, and time pressures that force suboptimal decision-making. These challenges create opportunities for AI augmentation whilst highlighting why human judgment and cultural sensitivity remain irreplaceable.
Technical Framework Beyond Language Models
Charlie Posniak argued that whilst large language models are remarkable tools, they must be carefully scoped within a broader technical framework. He identified three critical limitations: their fluency isn’t necessarily verifiable in international politics contexts, their opacity makes accountability difficult in high-stakes negotiations, and they fail to leverage the extensive toolkit of game theory, decision analysis, and machine learning methods developed specifically for strategic interactions.
The project’s technical approach breaks down diplomatic tasks into interconnected phases: research, analysis, strategizing, and execution. This framework recognizes that effective diplomatic AI requires integration of multiple learning paradigms and technical architectures, similar to systems that have achieved success in strategic games.
However, Charlie identified three fundamental challenges in expanding these techniques to diplomacy: representation (the “game” of diplomacy involves changeable institutions and rules), inference (environments with strategic misrepresentation and deception), and specifying success (bringing together diverse counterparts with coherent preferences across massive negotiations). These challenges require novel approaches that go beyond current AI capabilities whilst maintaining transparency and human oversight.
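The cyclical research, analysis, strategizing, and execution framework described above can be pictured as a simple pipeline. The sketch below is purely illustrative and is not the project's actual architecture: the phase functions, the knowledge-base dictionary, and the `human_approves` callback are all hypothetical names. What it aims to show is how the two commitments the team emphasizes, transparency (an audit log of every recommendation) and human authority (nothing enters the knowledge base without sign-off), can be built into such a cycle.

```python
# Illustrative sketch only: hypothetical names, not the MOVE 37 codebase.

def research(kb):
    # Gather evidence into the shared knowledge base.
    return {"evidence": kb.get("evidence", []) + ["counterpart position paper"]}

def analysis(kb):
    # Process gathered information into structured positions.
    return {"positions": {"state_a": "flexible on timeline, firm on scope"}}

def strategize(kb):
    # Map preferences to outcomes using the research and analysis above.
    return {"plan": "offer timeline concession in exchange for scope"}

def execute(kb):
    # In-room phase: record the outcome, which feeds the next cycle.
    return {"outcome": "draft clause agreed, two items deferred"}

def run_cycle(kb, human_approves):
    """One pass of the re-entrant cycle. Every proposal is logged for
    traceability, and no proposal is merged into the knowledge base
    without explicit human sign-off, keeping authority with the
    negotiator rather than the tool."""
    for phase in (research, analysis, strategize, execute):
        proposal = phase(kb)
        kb["audit_log"].append((phase.__name__, proposal))
        if human_approves(phase.__name__, proposal):
            kb.update(proposal)
    return kb

kb = run_cycle({"audit_log": []}, human_approves=lambda name, proposal: True)
```

Because each phase is a separate, inspectable step, a user can trace which component produced which recommendation, which is one concrete reading of the "modular and transparent" commitment.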
Expert Perspectives from Diplomatic Practice
Gabriela Ramos: Multilateral Negotiations and Cultural Considerations
Gabriela Ramos, drawing from her experience negotiating the UNESCO AI ethics recommendation among 193 countries during COVID-19, highlighted how AI could have enhanced her ability to map country positions and conduct extensive bilateral consultations. She described using AI to help process 55,000 public comments on the draft recommendation—a task where AI proved valuable—whilst emphasizing the deeply personal nature of diplomatic persuasion.
Ramos provided a striking example of unexpected alliances, noting how Russia and the UK aligned in ways that surprised her during negotiations. She also introduced crucial cultural considerations through her example of a Namibian representative who criticized the draft’s individualistic focus, advocating instead for Ubuntu philosophy—“I am because you are, and we are because nature exists, and we are interlinked.” This highlighted how AI systems might fundamentally misunderstand non-Western philosophical frameworks.
When discussing the potential for AI to provide strategic information about individual negotiators, Ramos humorously referred to maintaining “a woman in the loop” rather than just “a human in the loop,” drawing laughter from the panel while emphasizing the importance of human oversight.
Nandita Balakrishnan: Implementation Gaps and Intelligence Applications
Nandita Balakrishnan brought a sobering perspective on the current state of AI adoption in the public sector. Her most concerning statistic was that only 26% of public servants implementing AI globally understand their own country’s ethical frameworks—meaning approximately three-quarters are “freestyling” without proper guidance. She emphasized that the public sector finds itself in the “passenger seat” of AI development compared to private industry.
From her intelligence background, Balakrishnan emphasized that intelligence work increasingly supports diplomatic efforts, not just military applications. She illustrated AI’s potential through a personal example: spending extensive time on analysis only to have a mentor identify a decade-old piece of data that completely negated her argument—exactly the type of comprehensive information processing where AI could provide enormous value whilst maintaining human accountability.
Robyn Scott: Human Agency and Psychological Considerations
Robyn Scott provided perhaps the most psychologically sophisticated analysis of human-AI interaction, warning against getting “drunk on the idea of AI agency” whilst neglecting human agency. Her framework of keeping people “above the algorithm” rather than “below it” offered a conceptual tool for thinking about AI implementation.
Scott’s data from surveying 5,000 public servants revealed widespread “pilotitis”—70% of leaders have AI pilots planned or underway, but only 45% have evaluation plans. This suggests enthusiasm without systematic assessment. Her recommendation for maintaining a “battle mentality” with AI—treating it as a tool whose interests don’t necessarily align with human goals—provided a practical psychological framework for responsible AI use.
Audience Questions and Additional Concerns
The discussion included several important audience questions that highlighted additional concerns. Sam Dawes raised questions about cultural representation and the risks of data poisoning attacks that could compromise AI systems serving as neutral mediators. Devika Rao asked about cultural education and ensuring diverse perspectives in AI development.
A final audience question addressed power imbalances: if AI tools provide substantial advantages in negotiations, countries with superior AI capabilities could gain unfair leverage over those without such resources, potentially altering the balance of international relations.
Critical Challenges and Implementation Readiness
The discussion revealed significant gaps between AI enthusiasm and actual implementation readiness. Cultural representation emerged as a fundamental challenge, with speakers emphasizing that AI systems must capture diverse philosophical frameworks and approaches to conflict resolution.
Security vulnerabilities present another critical concern, with the potential for deliberate manipulation of AI systems in strategic negotiations. The psychological dimensions of AI interaction also require careful consideration, particularly the “sleeping at the wheel” phenomenon where users gradually increase trust in AI systems, potentially assuming perfect reliability in high-stakes situations.
Principles for Responsible Development
Several key principles emerged for responsible AI development in diplomatic contexts. Human authority must remain central, with AI serving as augmentation rather than replacement. Transparency and explainability are essential—diplomatic AI systems must be modular and transparent, allowing users to understand and trace AI recommendations.
Appropriate scoping ensures that AI augmentation matches specific needs of different teams, institutions, and negotiating contexts. Cultural sensitivity and representation must be built into AI systems from the ground up, requiring diverse training data, multicultural development teams, and ongoing testing across different cultural contexts.
Future Directions
The MOVE 37 initiative represents an ambitious attempt to bridge the gap between AI technical capabilities and diplomatic practice. The project’s interdisciplinary approach, combining computer science, international relations, and practitioner expertise, offers a model for responsible AI development in high-stakes domains.
McQuade emphasized the project’s commitment to ongoing collaboration with current and former diplomats, ensuring that technical development remains grounded in real-world needs. The team’s invitation for broader international collaboration recognizes that diplomatic AI tools must be developed with input from diverse cultural and institutional perspectives.
Conclusion
The discussion revealed both enormous potential and significant risks in applying AI to diplomatic negotiations. The strong consensus among speakers on fundamental principles—human authority, transparency, cultural representation, and responsible deployment—suggests readiness for coordinated development of diplomatic AI tools. However, the significant implementation gaps and readiness challenges underscore the need for systematic capacity-building and evaluation frameworks.
As McQuade emphasized, this represents the beginning of a collaborative effort to ensure that AI tools enhance rather than undermine international cooperation. The ultimate goal is not merely to make diplomacy more efficient but to make it more effective at achieving peaceful, sustainable solutions while respecting diverse perspectives and maintaining the essentially human elements of diplomatic negotiation.
Session transcript
I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our objective is to teach, train, and do research on subjects related to the applications of science and technology for international affairs. We have scholars, practitioners, students, all working to address the gaps and the opportunities for technology, science, and geopolitics. I’m very delighted to have everybody here today. The Emerging Tech Program, which I have the honor of running, was launched about a year ago, specifically to look at where emerging technologies are creating new policy frontiers, new opportunities to use technology to engage in policy, and the implications that technologies are creating for governance, geopolitics, global stability, and global conflict.
And the MOVE 37 initiative that we’re here to talk to you about today is a part of that program. As you can imagine, in a program that’s relating technologies to modern issues around geopolitics, artificial intelligence is one of the major aspects of our work. We have a terrific panel here today. It’s my pleasure to introduce them by name, and we’ll talk a little bit more about each one in just a moment. The missing chair, which we expect shortly, is Gabriela Ramos, who’s the former Assistant Director General for Social and Human Sciences at UNESCO. Nandita Balakrishnan is the Director of Intelligence at the Special Competitive Studies Project in Washington. And Robyn Scott is the CEO and co-founder of Apolitical.
And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a full-time research fellow with the program. And Slavina Ancheva is a current student within our program, an MPP student at the Belfer Center. I also want to acknowledge a colleague of ours who is not here, who was not able to get to India in time for the conference: Carme Artigas, the former co-chair of the UN AI Advisory Panel, who has been an integral part of starting this work and the ongoing progress we are making. Carme is also the Spain-India Ambassador for AI, High Commissioner in Spain-India for AI, and is here with us in spirit and maybe even on the live stream that we’re doing here today.
So a big shout-out to Carme for all of her help. So why are we here? We are embarking on a major new project, specifically looking at the use of artificial intelligence in diplomacy and negotiation. We are here at a conference about the use of artificial intelligence and its implications in so many aspects of society. And our work is looking at how one will engage non-human intelligences in the process of diplomacy and negotiation. So we’re here because this is a broad-based project. It is not something that is solely the purview of a small team in Cambridge, where we are located, but something for which we are looking for collaborators, partners, and input from the community here and the community around the world, as we build what will surely be a place where AI is used, and surely will be a place where we need to be cognizant and careful about how AI will be used in that exercise.
So the work we do plays an increasingly big role in shaping the relationships between states and within states. We want to have this conversation about the role for AI specifically because of the global nature and the integrated way that AI will play a part, and more specifically, how we will use artificial intelligence tools to augment humans in what is at its core a fundamentally human process of negotiation and diplomacy. Diplomatic negotiations are very high stakes. They are very different, as you will hear from our team, very different than classic one-on-one win-lose or win-win negotiations. They are very much more complex than that, and that means they require both a unique human touch and a unique application of how artificial intelligence might be used in that process.
But it’s also an area where an enormous amount of potential exists to tackle resource constraints, to find better outcomes, and to use artificial intelligence to enable a more stable and prosperous world, and one for which the follow-up to negotiations can be a subject for the tools and applications of modern technology. How we go about that is crucial, and how we talk about it from the beginning is crucial. There are already a number of tools emerging in this space, but it’s our belief that a more rigorous approach is needed, and you’ll hear some of that in just a moment. So what are we going to do? Our team is going to do a brief overview of the way we are thinking about this problem.
And after that, we’re going to have this amazing panel that we will engage in a conversation about views from others who are involved in negotiation and/or diplomacy writ large, and their views on how technology can be used, be used well or be used not so well, and what the implications of that will be. So with that, let me turn it over to Charlie and Slavina to talk about the project itself. I think Slavina is up first.
Thank you, Michael. And thank you all for being here this morning. A big welcome. Over the next 10 minutes or so, Charlie and I would like to present you with a little bit of framing for the expert discussion that we’ll be getting into right after. We’ll broadly focus on three areas: how negotiation processes currently look and the complexity that comes with them; the potential for AI to help address many of these challenges and augment these processes, thinking beyond just LLMs; and the need for responsible deployment of these tools. Before we do that, I’d like you to close your eyes and imagine. You walk into a negotiation, you look down at the agenda, and there’s 10 items on it.
But as any good negotiator, you know that it’s not just about those 10 items. It’s a lot of other factors that are happening both inside and outside that room that are affecting how that negotiation process is happening. So for one example, you’re sitting across seven counterparts from seven political groups, seven different countries, and behind you, you have your own team, but also 27 other countries that you’re representing, that you’ve promised a certain outcome or a certain deal. Of course, this is not my story. It’s the story of Carme Artigas that Michael mentioned, who was one of the chief negotiators of the EU AI Act, and later the UN AI Advisory Body and many other negotiations.
But it’s not just her story. It’s the story of many of you. It’s the negotiations that you’ve engaged in at the UN, at COP, bilaterally, and the interagency negotiations within your own organizations. So you know very well that negotiations are complex and they evolve over time. So what might look like just two states negotiating with each other bilaterally is actually a whole set of issues that are on the table. It could be natural resources, it could be AI, it could be climate, and a whole lot of external and internal stakeholders that are also trying to influence that process. So we start to dive into some of this complexity.
And more than that, there are a lot of teams sitting behind these principal negotiators, the different departments and agencies that are supporting them with evidence and with documents. And we’d really like to stress that this is a fundamentally interpersonal process. We’re not looking to replace diplomats or negotiators here, but just to give them the tools to manage these complexities much better. And finally, rarely in this world do we have just two states negotiating nowadays. There’s often a third state at the table. In the case of the EU, maybe 27 member states and, of course, hundreds of other stakeholders that could be out there. So with that being said, what are some of the impacts of this complexity?
Well, for one, there’s a whole lot of information that needs to be managed. Even a simple negotiation can generate thousands of documents, transcripts, and drafts. On top of that, there’s a certain amount of finite resources that any team has as they grapple with many other challenges throughout the day. There are a lot of strategic elements. Sometimes in groups, you might have groupthink or herding that leads you in one direction as opposed to exploring your full set of options. And finally, there’s the time pressure. So most negotiations do have some sort of time element and handover element to future teams. So with that being said, how can AI help? And I’d like to turn over to Charlie.
Thanks, Slavina. So AI systems can now beat some of the best human players at Go, at chess, at video games, at board games. Language models, as we’ve heard, have become increasingly competent at delivering a range of sophisticated legal, academic, technical, and software contributions, and the pace of change has been staggering. And so what our interdisciplinary team has been looking at is trying to envision a better future for diplomacy, where computational methods can transform the practice of diplomatic negotiations and statecraft that Slavina just outlined, supporting better communications, better resolutions, and processes between states, and augmenting their functions. So we’re trying to chart existing technical tools, develop new ones, and provide a range of policy guidelines to ensure that this happens responsibly, safely, and effectively.
So the classic question that we get in response is, why can’t you just ask an LLM? Lots of people are interested in trying to see if language models can simulate diplomacy or if chatbots can guide people through a negotiation all in one step. But ultimately, language models are remarkable, and they need to be carefully scoped, for three key reasons. Firstly, their fluency isn’t necessarily verifiable in this world of international politics. Secondly, the opacity, where you can’t tell what’s going on inside a model, is not always viable, because high-stakes negotiations require accountability, both democratically and internally, to understand why recommendations shape treaties in certain ways. And additionally, there’s a toolkit that’s 80 years old here.
We have game theory, decision analysis, machine learning, a great range of theoretical developments that exist precisely to model strategic interactions under uncertainty. And so we see LLMs as playing a role at the heart of a really broad set of learning paradigms, tying together supervised, unsupervised, and self-supervised learning. LLMs provide a really strong way to interact with all of these different learning paradigms and technical architectures that the best advances in AI have been built from. So whether that’s the systems that play chess or Go or board games, these are all pulling together lots of different methods. And if we just rely on chatbots at the heart of things, we miss out on all of the technical developments of the last 80 years.
But there are three key challenges with trying to expand these techniques into the world of diplomacy and world policy. Firstly, representation. As Slavina was touching on, the game that’s being played here isn’t a board game. These interactions are fundamentally changeable over time. The institutions that constrain the actions of states can be made and unmade over the course of a negotiation.
Secondly, inference. These are environments where there’s real strategic misrepresentation, where people are lying or deceiving or trying to shape outcomes for their own advantage, in ways that the current methods aren’t quite well suited to handle. And finally, as we’re touching on, there’s this sense of specifying success. How can you bring together all of the different counterparts and come up with a relatively coherent set of preferences and priorities over the course of a really massive negotiation? So these are the three challenges that we’re trying to embark on. And one of the ways that we’re approaching this is by breaking down the tasks of diplomacy and negotiation for AI applications. So just broadly, one of the ways we’ve looked at this is saying that there are some foundational tasks of research, analysis, strategizing, and execution. Research builds the evidence base; analysis processes the information that you’ve managed to gather; strategizing relies on using the analysis and the research to come up with a map from your preferences to your outcomes; and then finally, in the room, executing a negotiation, you’ve got to be able to dynamically adapt and adjust over time. And this isn’t a linear process, but a re-entrant, cyclical one: as all of these things change, they feed back up through this knowledge base. And so you need this really strong computational infrastructure to be able to even begin to apply some of the really exciting and fascinating AI and ML methods we’re touching on.
So with this, we see a future where research can be done with autonomous research agents, where you can have source validation and immediately generated counterpart biographies, analysis of gaps and preferences and evidence bases, strategy sandboxes, red-team training, and simulation of how the different parties can interact with each other. And you need a really strong data set to be able to do that.
And then, in real time, having transcription and translation services, which AI and ML methods are doing a really phenomenal job at. All of these things, we think, will play a role in this multi-model, multi-method world of computational support for diplomacy and negotiations. And so this is just a sense of how we’ve tried to break down this problem and get a grasp on the existing and future technical developments. Finally, we want to end on three commitments that are central to a lot of what we’re talking about. One, human authority has to remain central; we can’t have any abdication of responsibility over decisions of war and peace. Two, we have to make sure that the tools themselves are modular and transparent, so that you can see what’s happening at each stage of the process and which parts of which computational systems are supporting analysis.
And then finally, making sure that augmentation is appropriately scoped for the team, the institution, and the setting that it's in. So with that, I'd like to hand over to Michael, the director of our program, and the panel, for what I hope will be a wonderful discussion.
Great. Thanks, Charlie. Thanks, Slavina. So just as everybody's sort of getting settled in: we have a plan for a project. We have a vision that has a set of signposts and goalposts in what is essentially the ability to augment human intelligence and participation. So there are lots of technical elements of that. We'll be developing tools, we're looking at evaluation methodologies, et cetera. That's the whole technical side. But one of the benefits of the approach we're taking is that we have access to a large body of people for whom the day-to-day practice is negotiation and diplomacy, not necessarily constrained by the definition of diplomacy as state-to-state exchanges to get to an answer, but organization-to-organization, people-to-people, negotiation-to-negotiation.
And I am delighted then to have three people here who can talk a little bit about their views on how artificial intelligence will be used in the process of their work. That allows us to then learn from that experience and how we map that into the Move 37 project. So Gabriela, let me start with you. Welcome. Thank you for negotiating all the traffic to get here. So you’ve been at the center of international policy design and negotiations on issues such as climate change, international taxation, gender equality, artificial intelligence, a whole list of things in a brilliant career. You’ve done this through key roles at UNESCO, but also at the G20 and G7 and at the OECD.
We’re delighted to have you here. And let me ask you just to sort of start the discussion, if you would, to just talk a little bit about what it’s like to sit in the driver’s seat as a mediator trying to bridge sides, and how you would think about AI capability augmenting you in that process.
Well, thank you. Thank you so much for inviting me. Is it working? For inviting me to this early morning. I find this topic fascinating, because when you are a diplomat, and when you have negotiated many standards or agreements, you don't think about this taxonomy. You never think about the taxonomy. You just think that you need to get it done someday, that you need to find consensus, and that you need to find where the problems will be. Therefore, it's very interesting that you asked me to structure better how we do things. I'm going to refer to the negotiation of the Recommendation on the Ethics of Artificial Intelligence, because that was a very difficult one: 193 countries negotiating during COVID. And actually it was very helpful to have a Zoom where I could see where all the countries were positioning themselves, which helped a lot. But the interesting thing is that it was artificial intelligence we were negotiating about, and we had to map out where countries were. And it was very interesting to see that some of the usual suspects that are always blocking the effectiveness of international instruments were aligning with countries that are very supportive of those instruments but that didn't want to see UNESCO playing a role in this field. So I had Russia and the UK in the same position. That helped me, because I called the UK and said, are you happy to be in the same position? And they just said, hold on one second. But the interesting thing is, it's a very, very heavy document, because there are so many cultures, we had to almost define things step by step. And when thinking about how AI can help us organize better: at the moment, it did not provide so many inputs, it was 2021. But UNESCO has this idea of being super inclusive. We developed the recommendation with all the regions in the world represented, and all the disciplines. But then we put it out to the world, and we received 55,000 comments. Therefore, we used AI to integrate them.
That was, no? But then when you think about how you map the positioning of countries, I think it would have been super useful to have more AI. I used to have full teams providing me with briefings on the people we were going to talk to, because you need to be conducting a lot of this: one thing is the negotiation in the room; another thing is all the legwork that you need to be doing, talking to the different actors, knowing where they stand. And it would have been amazing just to have a repository of the traditional position of certain countries in certain negotiations, which has to do with the substance, but probably has to do also with the positioning of that country in the international context, and how much they abide by the rules and how much they support these things.
And then, what I find fascinating, and this is, as my colleagues here said, how you keep the woman in the loop, I love it. Yes, woman in the loop, not human in the loop. It was a lapsus, it's not lost on me or my panelists here, but I like my lapsus. The whole point is, when you are in front of a person and you're trying to convince that person that he is alone, that nobody is supporting his position, and that therefore he should not continue blocking the negotiation: how would it be if you could have more information about that person? What moves them? How can you offer something that will be important for them? Because this is the kind of thing that we do negotiating. What would you want to get out of it? I know you have your bosses on your shoulder and you need to bring them something to the table, but I can tell them: you're alone, you're blocking it. And imagine you could have the information about that person. That's also risky, because it deals with privacy and all of those things, but I feel it would be fantastic, because this is strategic thinking. And using the right words to get the countries to agree will bring you to some places. That, I think, is a very important thing; that's a capacity that can be augmented by AI.
Thank you very much.
Yeah. Is it on? Yeah. Thank you very much. Actually, it's a terrific transition, because Charlie was talking about the complexity of these negotiations, about how dynamic they are. I can think of nothing more dynamic than a UNESCO negotiation. Just trying to understand where people's positions are is by itself a complexity; trying to integrate 20 or 30 of those positions, or 190 of those positions, and then trying to find the right levers that I might be able to pull. We do this now all the time with people, with you. And the question is how modern tools can help in that process without removing or absolving responsibility from people. So thank you. So Nandita, you have had an amazing career in academia, the public sector, the private sector.
You've been at Stanford. You've worked in intelligence and advisory. And you've seen all the different sides of these negotiations, from both the government side and the private sector side, inside and outside. You are currently the Director of Intelligence at the Special Competitive Studies Project. For those of you who don't know, SCSP is a major effort sponsored by Eric Schmidt after the conclusion of the National Security Commission on Artificial Intelligence in the U.S., looking at the way technology will be used in competition for economics, national security, et cetera. So it's a big, broad role. Every day in your life you are negotiating. So can I just ask you to talk a little bit about your view, both from an SCSP point of view, but also from your career, about how you see this evolving?
Absolutely. And thank you so much for having me. And good morning, everyone. So my career, as you mentioned, has spanned three distinct sectors, and they came at different times: academia, public sector, and private sector. Now, there was a time, I would say, when the particular ways all three of these groups leveraged technology obviously had their variations, but the access and adoption of it were much more similar. This is just fundamentally not true of AI. The public sector has been more in the passenger seat, if not the back seat, especially over the last decade. And so what was really interesting to me is that I started in academia, then went to the public sector, then came out to the private sector, and I saw that dip in my access to AI.
Now, I have been in intelligence, and one important thing about intelligence, and maybe a misconception of it, is that it is primarily used for feeding information to military applications. It's not true. We are just as vital to diplomatic efforts, because for every risk you're watching that something bad could happen, you're just as much looking at the opportunities for something positive to happen. How can you open the negotiating space? So we're looking at everything from both sides. I wanted to give that perspective. As an analyst, I can say personally it was very valuable to have the rigorous training of having to do things very, very manually, learning how to write an assessment without access to AI.
But now that I’m on the back end of it, I can tell you every day I ask myself, like, if I had access to these tools as an analyst, how could I have worked much faster and much smarter? Because at the end of the day, and something that Gabriella was mentioning, there’s a lot of data out there, but a human analyst is never going to be able to manually process most of that by themselves. The story I always like to tell is the very first time I wrote this intelligence piece, I was so proud of it. I thought the argumentation was great. The data I had used was great. I showed it to a mentor, and they said, this is awesome, but you didn’t consider this one piece of data from 10 years ago that completely negates your argument.
And here's the thing. It's not that I didn't do good work. It's just that there was no way I was going to know that that piece of information existed. Now imagine a tool that can help you not only identify that that data exists, but learn how to synthesize it. Obviously, as Charlie and Slavina mentioned, human in the loop is always going to be important, because you want these assessments to have a human level at the end of it. But there is a way to move better and smarter. And this is something that SCSP is really advocating for in three distinct ways. So first, at a very meta level, we make the argument that AI has fundamentally changed the threat landscape and the scope for global competition.
It is now kind of the foundational way we need to think about geopolitics, especially as this technology is rapidly evolving. So you really cannot divorce AI and AI adoption from trying to understand geopolitics and foreign policy. Number two, in order to have the ability to make assessments about this emerging technology and to understand geopolitics, you have to have a public sector that is actually leveraging these tools to the best of its ability. Now, there are a lot of ways that AI is being adopted in the public sector. You're obviously thinking about, again, the military application of drones, but you need to have your day-to-day workflows integrating this technology.
And this is something that we are really focusing on, especially how to build up AI literacy within the public sector, not just at the military level, but within the intelligence community, within the State Department, and even within Commerce, OPM; any federal employee at some point needs to be moving smarter and faster with AI. And third, we're looking at specific use cases. So one of the projects that we were working on last year is looking at how AI can be used for predicting geopolitical events, both for military applications but also for State Department applications. And the reason we do that is because, in order to convince the public sector that they should be using AI, you almost need to show them what it could look like 10 years from now as we're moving to that future.
So by kind of demystifying its use and showing them targeted ways that you can use it, it actually solves your meta-problem of understanding why AI is so important to geopolitics.
So I'm hearing a couple of things. I'm hearing this general statement that much of the world is going to be about AI, and much of the world is going to be AI creating that world. And that's the metaphor that comes directly to the project we're looking at: we're negotiating with AI tools, so we have to have a baseline of capability, and yet the landscape in which negotiation and diplomacy are happening is being fundamentally changed by AI itself. And so there's that whole issue around preparedness and around setting the ground rules. I also heard not just the thought process around artificial intelligence as a trusted agent to accumulate information, which both of you mentioned, but also as an agent to help understand new pathways for success, new pathways for leverage, whether those are about national security or about economic vitality.
The scope of the negotiations doesn't really change that. So in the area of preparation, let me come to you, Robyn. Robyn is the co-founder and CEO of Apolitical. Apolitical is a global platform for policymakers that specializes in government innovation; she'll talk about that in just a second. In particular, you have courses helping governments prepare their workforces for the modern world in which they live. Your AI courses have reached hundreds of thousands of people around the world. And much of what you are trying to do is to prepare the world for the kinds of things that we are talking about in Move 37, obviously much broader than just that topic. So let me ask you to talk a little bit about that, if you wouldn't mind.
What sort of lessons have you learned from the field, and how do you think about policymakers' willingness, or how do you change policymakers' willingness, to embark on journeys with new tools and new capabilities?
Robyn Scott:
Stanford HAI is one of our collaborators. So we are more context experts than content experts, and we bring the content experts into the middle. So where are we at? Let me give you some data, from a 5,000-person survey that we've recently run. Overall, public servants are incredibly optimistic about AI. North of 90% think there is huge possibility in the public sector. And there are lots of paradoxes here; they're also wary of it, right? There is a huge value creation opportunity. One figure from BCG estimates that there's $1.75 trillion of public sector value to be unlocked if we harness AI in the right way, because AI loves bureaucracy, all these repeatable processes.
And about a third of most public officials' daily work is research and writing related. AI is great at that. So the prize is very, very big, and that's just the painkiller prize. When you get to the vitamin prize, when you get to what AI could do in terms of predictive policymaking and responsive policy and adaptive policy, et cetera, then you get into a space that's only really bounded by the imagination. So there's lots of AI talk. There's less AI action. Increasingly, we're in a pilotitis zone where almost everyone's got pilots. 70% of leaders say they've either got AI pilots or plan to launch them this year, but only 45% of them say they have any plan to evaluate their pilots.
So that's a pretty big gap to close, and we see gaps like this all the time. One of the biggest gaps is leaders not using the technology themselves, which is a real problem, because you can't understand this technology in the abstract. You cannot look over your grandson's shoulder and see them using it. You've got to use it. You've got to feel the speed of change. Of the public servants who are implementing AI in the public sector globally, these are people who self-identify as having AI in their jobs, only 26% say they understand their own country's ethical frameworks. So approximately three-quarters of all the people rolling out this technology are freestyling.
That's terrifying. So that's a skills and knowledge gap not even closed within an institution; it's not even getting to how we actually understand the basics of this technology. Just to close, and there's a whole lot more fascinating data, one of the things that is increasingly worrying me, talking to leaders around the world working on this, is that we are now getting quite drunk on the idea of AI agency, but we're not talking about human agency in the process and maintaining it. So I think we risk getting into a zero-sum dynamic, and I think this is relevant to diplomacy, where the agency drains away to AI, and that all comes at a cost to humans.
So we need to be building up humans at the same time. And the framing and heuristic I've found most helpful for this overall is this idea that recently emerged of being below or above the algorithm. If you're below the algorithm, you might be an Uber driver being dispatched, or an Amazon packing worker being allocated to put stuff into boxes. If you're above the algorithm, you are using tools to further your goals. When we think about closing that capability gap, and I think in diplomacy especially, we need to keep moving people up above the algorithm.
Great, fantastic comments from all of you, thank you. I'm going to ask Charlie and Slavina to make a couple of quick comments on the project, because then I'm going to come back to the three of you, and I'll prompt you with the question now, which is: what would you want to know about the tools and capabilities you will be asked, or offered, to use, in order to be comfortable with them? So, Slavina, can you just follow on from Robyn's comments? One of the benefits we have of doing this program at the Belfer Center and the Kennedy School is a large set of people who have done this for a living. This is what diplomats and negotiators have done.
Can you talk a little bit about how we engage that group and what we’re trying to get from that?
Definitely. So I think, as Michael put it, obviously at the Belfer Center we have quite a variety of current and former diplomats and practitioners, not just from the U.S. but from all over the world. And a large part of the work that we've been doing is sitting down for one-on-one interviews with all of them and really getting a sense of how they think, not just about the content of the major negotiations that they've been leading, but more about the process. So, very similar to the panel discussion we're having here today: what are some of the uses that you see, one day, you could be using AI for? A lot of what we've heard so far, I mean, the position tracking that I think Gabriela referenced.
We've heard a lot about historical precedent, about generating options and strategic options, and about really uncovering the deepest interests. And I think where this ties really well to what Robyn was saying is that a lot of them are also expressing their hesitancy. They're being very forthcoming in that, and I think that allows us to take a really sober look at the risks of integrating these tools. One of the main questions we ask them is: if you were using these tools, what would you like to know? So, exactly what Michael's saying now. A lot of these interviews have been integrated across the different work streams of our project, and we really put diplomats and practitioners at the heart of the rest of the work that we're doing.
All right. Thank you.
And Charlie, you talked a little bit before about the obvious role that LLMs have in helping people accumulate and synthesize a very large amount of information. But there are many more aspects of a negotiation. Can you talk just a little bit about some of the other ways the tools are going to be used in the project that we're doing?
Yeah, absolutely. Thanks so much, and thanks so much for the comments so far. The panelists have touched on a couple of really interesting applications, especially, as Robyn was talking about, predictive or adaptive policies, and, as Nandita mentioned, predicting geopolitical events. We have a really fascinating array of algorithms that are incredibly competent at these sorts of predictive tasks. And what we now have, with current computational ability and also the ability of language models, is that we can process vast amounts of unstructured data in ways that make these algorithms even more accessible to a wider range of people. So I think that's one area that I'm particularly excited about.
There's also a bunch of work, as Gabriela was touching on, in how you take these vast unstructured transcripts and build natural language processing additions on top: can we represent positions as they track and change over time? I think that is one of the big parts of the cognitive load that diplomats have spoken about a lot.
Great, thank you. We're going to take questions from the audience in just a moment, but Nandita, I'm going to start with you. Forgive me for doing this, but I'm going to characterize your career as that of an analyst; every job that I read and see about has been an analyst of some kind. You're constantly in a place where people are suggesting new tools and new capabilities. So think about AI in the world you've inhabited: what's going to make you most comfortable when somebody shows up and says, here's the thing, it's going to make your life better?
That they can explain what the outputs are. So, one of the things when we were looking at, for example, whether we could be using AI for predicting geopolitical events: a lot of people in industry, in academia, and in the public sector who are working on these types of projects all say the same thing, which is that this should be seen as a data point, or a shaper of the way you should view the world, not as finished intelligence. Finished intelligence should always, ultimately, be done by a human who is accountable to their policymakers. Now, if you are working in policy, you understand how this actually works. You'll have your head of state or your head of government come to you and ask you to explain exactly how you got to your assessment.
Right now, we have a human who ultimately has to do that. But what happens when we start to rely more and more on AI tools is that you never want to lose the ability to explain the outputs, and particularly to demonstrate that you've looked at all the counterpoints that you possibly could. Now, this is where I think AI is super helpful, because oftentimes, especially in academia, when I was trying to figure out all the things I could have done wrong, how I could have measured this differently, you're always prepared to think about all the decisions that you made and how to justify and validate them. But as the scenarios get broader, as they get more complicated, your ability to figure out what the counterarguments are is just going to dwindle over time.
Oftentimes, the argument we make is that humans are biased at the end of the day. We make our decisions based on how we got to where we are today, the experience that we have. So AI can be really, really helpful in helping you sort out the counterarguments, but you still need to understand how those counterarguments work and why, ultimately, you've come to the assessment that you have. So where I would feel super comfortable is: this is how I relied on AI, this is how it came to the output that it did,
This is fundamentally why I made the assessment that I did.
Great. Thank you. And Robyn, I think this is the world you live in every day, helping governments and government officials and civil service workers. So project that onto an AI-for-diplomacy landscape. What do you think is going to be important to get people to say: I'm going to trust this, I'm going to work with it, or at least I'm going to try?
Robyn Scott:
Well, at the risk of stating the obvious, I think we should just acknowledge that the people developing these models don't even have full legibility over how they're working. So that's where we're starting from; that's the kind of ceiling on where we can get to. You can break down the thinking process, as it were, but you still have that black box. I don't think it's insurmountable. I think some of the things I'm worried about relate to the more psychological aspects of this, and in particular sleeping at the wheel, this phenomenon where we have a strange relationship with AI in which we get to false negatives too quickly. It does a bunch of clever things, except it didn't do this one thing, and therefore we can't use it for anything.
And if you check back in a month's time, often it can do the thing. So you have that, the false negative, and then the phenomenon of sleeping at the wheel, which is where it starts to get very, very good, creeping upwards of 85%, 90% accuracy, and then you assume it's 100% accurate. And it's really quite hard to edit your assumptions and say, no, it's not. And you've probably all found this if you're power users of AI, and I'm one of them: sometimes it comes across as so smart and so brilliant, and comes up with a whole lot of counterarguments. I use it for kicking the tires from different perspectives all the time on stuff I'm doing.
It's almost overwhelmingly smart, and you're like, it must have covered everything. That's the default. So I think giving us the human tools and the psychological counter-arguments and weaponry to deal with this is really, really important. I already have a heuristic that whenever I open my phone and I'm dealing with anything with an algorithm, I am in opposition to that algorithm, because its interests don't generally coincide with mine. So I try to get all algorithmic stuff off my phone as a starting point. The dynamic with AI is a bit different, but I still think you have to have that sort of battle mentality with the technology. So that would be my…
There are many other things to consider, but that’s top of mind.
I think that's terrific. And I think there's this idea of calibrating: what I want from completeness when I ask for analysis may be very different from what I want when I ask, can you just give me some different ideas that I haven't thought about before? And different stages in a negotiation are going to require different levels of calibration. So, Gabriela, what do you think?
Well, it's very difficult to follow these two girls. But the fact is, when you know a little bit more about how these things work, and I'm not a technologist, but I have been looking at all of what can go wrong: misrepresentation, over-representation of certain cultures, certain languages, assumptions. Therefore, if I am negotiating and you're going to offer me a tool to improve my negotiating skills, I need to be sure that the assumptions that you used to build that tool are not just to beat the person in front of me, or not just to maximize efficiency, or not just to do the kinds of things that we are teaching the AI to do. And therefore, it's much more complex.
Because what you want to do is to open a space of human understanding. How do you do that? And therefore I will be questioning, as Robyn said, always questioning, but it's not what we do. And the other point is that what is amazing about AI is that it's just reproducing cognitive abilities that humans have. So when you go and use whatever chatbot you use to get information, you take for granted what comes out, which is something you would never do with somebody in their first week after you hire them, even if you have done all the checks for that person to have the capacities that you are looking for in the market. So I feel that there is this question of, first, really bringing to the table AI tools that are going to be reliable and trustworthy, and I know that these words are almost a cliche, but the reality is that sometimes they're not. And the other point is that you can become very lazy. How do you avoid just grabbing the thing and saying, that's perfect? How do you keep that space for ourselves to take the decisions, and be not only in the driver's seat, but actually think of AI as a supporting cast? And if we get the Oscar, it's us and not the AI.
That's fantastic. I have this mental picture in my mind, for those of you who've done negotiations of any kind: the first thing you do is you grab a bunch of your team in a room and you say, let's talk about strategy, what are we going to do? And Bob in the corner says, here's an option, and you realize Bob had a bad night last night, so maybe you discount what Bob says. So what happens when the AI says, here's a thing? I don't just trust it a priori; I have to apply human judgment to what I'm hearing. So, terrific points. Okay, we have time for a question or two, and I see one.
Just say who you are, where you’re from, and a quick question.
Thanks so much, Michael.
We have a microphone. Thank you.
Thanks so much. I'm Sam Dawes. I'm a senior advisor to the Oxford University AI Governance Initiative and director of Multilateral AI. But my background is in diplomacy, working for Kofi Annan when he was Secretary-General, and then for the Foreign Office and Cabinet Office. I wish we had had AI tools back then. So I was really inspired; it's such a timely, rich panel, so thank you all for that. Something that Gabriela said around culture I think is so important, and I'm thinking about the positives and the risks of applying AI in this space. How can we ensure that the diverse cultural inputs of the world's most diverse countries, of different societies, are embedded in the data sets and the models which inform negotiations?
So is that something that UNESCO is working on in the long term, and that connects to the tools we use? And the second question is around the flip side: if AI is to be a useful neutral mediator in disputes, or an assistant to a human mediator, then what do we do about data poisoning and prompt injection and those kinds of risks? Thank you.
Very fast, on the question of culture. Culture is expressed by language, and therefore the more we can represent those languages in the models we use, the better prepared we will be to understand it. And I'm fascinated by that. I'm not a linguist, but if I could choose another life, I would do that. Because, for example, there was this Namibian representative during the negotiations on the ethics of AI, and she was saying: I find your draft very individualistic. It's always about the human, always about the outcomes for people, improving their welfare. And in the end, what I'm thinking about is the Ubuntu philosophy, which is: I am because you are, and we are because of nature, and we are interlinked.
And therefore, how do you capture this when the models that we are developing are maximizing individual welfare? The only answer I have is to try to be representative, and this is nothing new: we have seen how much these tools can discriminate if they are built in just one language, or with the representation of certain characteristics of people or countries. But really be sure that you are capturing the richness that comes through language, and open up the sources. That's the other point, the sources. This is one thing that I would always ask: the answer you're giving me is based on what sources? That might help. But these are checkpoints that we always need to be testing on the ground.
If I can just raise one other thing: I think you raise another really important point, which is that there’s a whole spectrum of things here. There is negotiation where we have a set of interested parties trying to get to a common-good understanding, but we also have very adversarial negotiations. Adversarial negotiations open up this whole possibility of data poisoning, of training-set differentiation, and so on. So it’s a very complex world, and I really appreciate you bringing it up. We have time for one more question, I think. Let’s go right here. Can somebody tell me, are we counting down to zero or to five? Are we okay to keep going to zero here?
Okay, good. We’re going to go to zero no matter what.
Good morning. Namaste. My name is Devika Rao. I meet 300 to 600 people per day, and I work across different languages — I’m an Indian classical dance teacher. So I have data, and I have a human connection. What we want to know is how this cultural education can be supported by AI. What step can I take further? Presently I’m working on a cultural framework, India and UK POCC 2025, 2030. I’m also interested in NEP and national health policy, because people are connected to their health and education, and education is the center point. So where can I go, and what kind of co-creation and co-collaboration can happen in this?
Robyn, is this something you want to jump in on? Maybe give her your email address.
Robyn Scott:
I wish I had an immediate response to that. I don’t think there is any default place to go, but I do think this is where the conversation is evolving, and there’s more and more recognition of the cultural oversight and its importance. So I would just encourage you to please keep making those points. And I will just make one comment on the first question. The Swiss have built a sort of quasi-Swiss-government, quasi-multilateral initiative to build an LLM that is trained from the outset on more than 100 languages, and it is actually run by a friend of mine who’s a former Swiss diplomat, so she’s coming at it very much from a diplomatic context. I’m very happy to make that connection.
Education. Super complex. Don’t look only at the technology, because we always focus on the technology. The countries that have introduced so much technology in their educational systems didn’t get better student outcomes, because of the content. We go to the internet, we go to the systems, and we try to bring tools to help kids, and we never check whether they are contextually relevant and culturally linked. If you don’t produce the content, the tools will not make it.
I’ll just add one last thing. The way to think about AI is also: is it actually solving a problem, or are you just introducing it and creating a new problem? This is where you have to think about the point of AI augmentation. There are a lot of ways we can think about how AI can augment the problem sets that we have, but sometimes you don’t actually have the problem that AI is going to solve, and you don’t need to force AI to fix it.
Thank you very much. Okay, we’re going to negotiate. If you have a really quick question, you can ask it. No, behind you. Thank you — it’s got to be quick, though.
My name is Arman, and I’m working for JPL South Asia. Just a quick question: how do you think this would impact the balance of power, given that every country has different access to data sets? And as we saw, there can be three states in play — what would it look like if state A knows everything about the rest of the players and the others don’t?
I’ll answer this, if you guys are okay. We think a lot about this in the project: what is the evolution of a set of AI tools? It’s like everything else here at this conference — where will tools provide competitive leverage, and which kinds of tools, in the world we live in, should be dispersed actively and offensively rather than defensively, in a world where some negotiation is about getting everybody to a positive outcome and some negotiation is adversarial? So I think it is a huge element of how it will change power structures, not just because it’s a thing we think about from a negotiation and diplomacy perspective, but because of general AI tools.
Okay, with that we are just about out of time. I want to thank this amazing panel: Gabriela, Nandita, and Robyn. I want to thank my colleagues Slavina and Charlie, and I want to thank all of you. We are at the beginning of a long process. When you work at a place like I do, at the Belfer Center, you think about projects that have beginnings, middles, and ends, and you think about projects that can grow into something really, really important. So any of you who have an interest in what we’re doing, please let us know — if you feel you have questions that we ought to be asking, or you have answers to questions we have asked.
We would love to hear from you as we begin to build what we think is a really important discipline. So thank you and thank you to the sponsors and hosts. I appreciate everybody joining us. Thank you.
J. Michael McQuade
Speech speed
182 words per minute
Speech length
2899 words
Speech time
954 seconds
Complexity of diplomatic negotiations & AI support
Explanation
McQuade highlights that negotiations are inherently complex, involving many parties and stages that require calibrated AI assistance. He stresses that AI must augment, not replace, the fundamentally human nature of negotiation.
Evidence
“We want to have this conversation about the role for AI specifically because of the global nature and the integrated way that AI will play, and more specifically, how will we use artificial intelligence tools to augment humans in what is at its core a fundamentally human process of negotiation” [4]. “And different stages in a negotiation are going to require different levels of calibration” [8]. “Just trying to understand where people’s positions are by itself is a complexity” [17]. “Trying to integrate 20 or 30 of those positions or 190 of those positions and then trying to find what are the right levers that I might be able to pull” [18].
Major discussion point
Complexity of diplomatic negotiations & AI support
Topics
Artificial intelligence | Capacity development
Tools must preserve human responsibility
Explanation
He questions how modern AI tools can assist without removing accountability from human negotiators, emphasizing the need for human judgment in the process.
Evidence
“And the question is how can modern tools help in that process without removing or absolving responsibility for people” [58].
Major discussion point
Principles for responsible AI in diplomacy
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
AI changes power structures and creates asymmetry
Explanation
McQuade notes that AI will alter power dynamics in diplomacy, potentially giving advantage to those with superior tools and data, thereby reshaping negotiations.
Evidence
“we think a lot about this … where will tools provide competitive leverage … it is a huge element of how it will change power structures not just because it’s a thing we think about from a negotiation diplomacy but because the general AI tool” [11]. “So there are lots of technical elements of that” [156].
Major discussion point
Challenges: bias, data security, power asymmetry
Topics
Artificial intelligence | Data governance
Adoption barriers & need for evaluation methodology
Explanation
He points out the lack of systematic evaluation and baseline capabilities, which hampers effective AI integration in diplomatic work.
Evidence
“We’re looking at evaluation methodologies, et cetera, et cetera, et cetera” [19]. “We have to have a baseline of capability” [66]. “We have a vision of how one has a set of signposts and goalposts in what is essentially the ability to augment with intelligence, human intelligence and participation” [25].
Major discussion point
Adoption barriers & capacity building
Topics
Capacity development | Monitoring and measurement
Charlie Posniak
Speech speed
211 words per minute
Speech length
1310 words
Speech time
371 seconds
Complexity of diplomatic negotiations & AI support
Explanation
Posniak proposes breaking down diplomatic and negotiation tasks to identify where AI can be applied, framing AI as a set of modular supports across the workflow.
Evidence
“And one of the ways that we’re approaching this is by breaking down the the tasks of diplomacy and the tasks of negotiation for AI applications” [2]. “analysis, strategizing, and execution that build this evidence base with research, that analysis processes the information that you’ve managed to gather” [30].
Major discussion point
Complexity of diplomatic negotiations & AI support
Topics
Artificial intelligence | Capacity development
AI across research, analysis, strategy, execution
Explanation
He envisions autonomous research agents, source validation, strategy sandboxes, and real‑time translation as AI‑enabled capabilities that support diplomatic work end‑to‑end.
Evidence
“So with this, we see a future where research can be done with autonomous research agents, and you can have source validations and get immediately generated counterpart biographies, analysis of gaps and preferences and evidence bases, strategy sandboxes, red team training, and trying to simulate how both the public and the public can interact with each other” [39]. “And then in real time having transcription and translation services that AI and ML methods are doing a really phenomenal job at” [120].
Major discussion point
Practical AI use cases envisioned
Topics
Artificial intelligence | Information and communication technologies for development
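The research → analysis → strategy → execution breakdown described here can be pictured as a modular pipeline in which every stage logs its contribution, in the spirit of the modularity-and-transparency principle below. The following is a purely illustrative sketch; the stage names, placeholder outputs, and the `run_pipeline` helper are assumptions for exposition, not the project’s actual tooling.

```python
# Hypothetical sketch: the diplomatic workflow as modular, inspectable stages.
# Each stage records what it did, so a negotiator can audit every hand-off.

def research(topic):
    # Placeholder for autonomous research agents gathering sources.
    return {"topic": topic, "sources": ["treaty_draft.txt", "press_notes.txt"]}

def analysis(evidence):
    # Placeholder for processing the gathered evidence base.
    return {**evidence, "positions_mapped": len(evidence["sources"])}

def strategy(assessment):
    # Placeholder for a strategy sandbox / red-team step.
    return {**assessment, "recommended_move": "draft_compromise"}

def run_pipeline(topic):
    trace = []  # transparency log: which module produced which output
    state = topic
    for stage in (research, analysis, strategy):
        state = stage(state)
        trace.append((stage.__name__, dict(state)))
    return state, trace

final, trace = run_pipeline("maritime boundary talks")
print([name for name, _ in trace])
```

The design point is that each stage is swappable and its output auditable, so a human can see which computational component contributed what — execution (the human negotiation itself) deliberately stays outside the pipeline.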
Principles – human authority central, modularity & transparency
Explanation
Posniak stresses that human authority must remain central and that AI tools need to be modular and transparent so users can see how each component contributes to analysis.
Evidence
“One, human authority has to remain central” [50]. “We have to make sure that the tools themselves are modular and transparent so that you can see what’s happening at each stage of the process and which parts of which computational systems are supporting analysis” [68].
Major discussion point
Principles for responsible AI in diplomacy
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Limitations of LLM‑only approaches
Explanation
He argues that relying solely on large language models is problematic because their fluency is not verifiable in international politics and they cannot replace broader technical toolsets.
Evidence
“But LLMs provide a really strong way to interact with all of these different learning paradigms and technical architectures that the best advances in AI have been built from” [71]. “Firstly, their fluency isn’t necessarily verifiable in this international and world politics” [81]. “So the classic question that we get in response is, why can’t you just ask an LLM?” [74].
Major discussion point
Limitations of LLM‑only approaches
Topics
Artificial intelligence | Data governance
Predictive geopolitical event modeling
Explanation
Posniak notes that game theory, decision analysis, and machine‑learning models can be used to forecast geopolitical events, providing data points rather than definitive intelligence.
Evidence
“We have game theory, decision analysis, machine learning, a great range of theoretical developments that exist precisely to model strategic interactions under uncertainty” [35]. “We have a really fascinating array of algorithms that are incredibly competent at these sorts of predictive tasks” [127].
Major discussion point
Practical AI use cases envisioned
Topics
Artificial intelligence | Social and economic development
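As a toy illustration of the game-theoretic modeling mentioned above, the sketch below enumerates pure-strategy Nash equilibria of a small two-party negotiation game — strategy pairs where neither side gains by deviating alone. The payoff numbers and strategy labels are hypothetical, standing in for whatever utilities a real model would estimate.

```python
from itertools import product

# Hypothetical 2x2 payoff matrices for two negotiating parties.
# Rows = party A's strategies, columns = party B's.
# Strategy 0 = "hold position", 1 = "offer concession".
payoff_a = [[1, 4], [0, 3]]
payoff_b = [[1, 0], [4, 3]]

def pure_nash_equilibria(pa, pb):
    """Enumerate strategy pairs where neither side can improve unilaterally."""
    equilibria = []
    for i, j in product(range(len(pa)), range(len(pa[0]))):
        a_best = all(pa[i][j] >= pa[k][j] for k in range(len(pa)))
        b_best = all(pb[i][j] >= pb[i][k] for k in range(len(pb[0])))
        if a_best and b_best:
            equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(payoff_a, payoff_b))  # → [(0, 0)]
```

With these illustrative payoffs, both parties holding position is the only pure equilibrium — the kind of output a forecaster would treat as one data point about likely behavior under uncertainty, not as finished intelligence.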
Gabriela Ramos
Speech speed
164 words per minute
Speech length
1455 words
Speech time
531 seconds
Position mapping & comment integration
Explanation
Ramos describes how AI helped map country positions during a UNESCO AI ethics negotiation and process 55 000 public comments, illustrating AI’s role in organizing large‑scale diplomatic input.
Evidence
“…when you are a diplomat … I could see where all the countries were positioning themselves which actually helped a lot … we put it out to the world and we receive 55 thousand comments therefore we use AI to integrate them” [10]. “But then when you think about how do you map the positioning of countries, I think that would have been super useful to have more AI” [23]. “There’s also a bunch of stuff, as Gabriella was touching on, is how do you take these vast unstructured transcripts and come up with natural… natural language processing additions on top, and can we represent positions that they track?” [24].
Major discussion point
Practical AI use cases envisioned
Topics
Artificial intelligence | Data governance
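To make the comment-integration idea concrete, here is a minimal, standard-library-only sketch of grouping public comments by lexical similarity. The four sample comments, the bag-of-words representation, and the 0.3 similarity threshold are illustrative assumptions — far simpler than what a production NLP pipeline for 55,000 comments would use.

```python
from collections import Counter
import math

# Hypothetical mini-corpus standing in for public comments on a draft text.
comments = [
    "protect human rights in AI systems",
    "AI systems must respect human rights",
    "more support for local languages and culture",
    "culture and languages need representation in the data",
]

def vectorize(text):
    """Bag-of-words vector as a Counter (missing words count as zero)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Greedy grouping: attach each comment to the first group it resembles.
groups = []  # list of (representative vector, member indices)
for idx, text in enumerate(comments):
    vec = vectorize(text)
    for rep, members in groups:
        if cosine(vec, rep) > 0.3:
            members.append(idx)
            break
    else:
        groups.append((vec, [idx]))

print([members for _, members in groups])  # → [[0, 1], [2, 3]]
```

Even this crude approach surfaces the two underlying themes (rights-focused vs. culture-focused input); the same principle — compress thousands of submissions into a navigable set of positions — is what made AI useful in the negotiation described above.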
Principles – maintain human decision & avoid over‑reliance
Explanation
She stresses that AI tools must support diplomats without dictating outcomes, keeping humans in the driver’s seat and preventing tools from being used merely to ‘beat’ counterparts.
Evidence
“if I am negotiating and you’re going to offer me a tool to improve my negotiating skills, I need to be sure that the assumptions that you use to build that tool are not just to beat the person in front of me” [108]. “It’s always about the human” [97]. “How do you keep that space for ourselves to take the decisions and be not only in the driver’s seat, but actually to think of AI as a supporter cast” [94].
Major discussion point
Principles for responsible AI in diplomacy
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Bias & cultural representation challenges
Explanation
Ramos warns that AI tools can discriminate if trained on limited languages or cultural data, urging inclusive datasets to avoid misrepresentation.
Evidence
“We have seen how much these tools can discriminate if you are just built in one language or with the representation of certain characteristics of people” [109]. “Misrepresentation, over‑representation of certain cultures, certain languages, assumptions” [136]. “Culture is expressed by language” [137].
Major discussion point
Challenges: bias, data security, power asymmetry
Topics
Data governance | Closing all digital divides
Reliability and trustworthiness of AI tools
Explanation
She calls out that while reliability and trustworthiness are often cited, they are not guaranteed, highlighting the need for rigorous validation.
Evidence
“tools that are going to be reliable and trustworthy, and I know that these words are almost a cliche, but the reality is that sometimes they’re not” [64].
Major discussion point
Principles for responsible AI in diplomacy
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Nandita Balakrishnan
Speech speed
214 words per minute
Speech length
1355 words
Speech time
379 seconds
AI for analyst efficiency & data synthesis
Explanation
Balakrishnan reflects on how AI could dramatically speed up analysts’ work, allowing faster identification and synthesis of data.
Evidence
“But now that I’m on the back end of it, I can tell you every day I ask myself, like, if I had access to these tools as an analyst, how could I have worked much faster and much smarter?” [44]. “Now imagine a tool that can help you not only identify that that data exists, but learn how to synthesize it” [46]. “AI is great at that” [32].
Major discussion point
AI for analyst efficiency & data synthesis
Topics
Capacity development | Artificial intelligence
Predictive geopolitical event modeling
Explanation
She describes projects using AI to forecast geopolitical events, emphasizing that AI outputs should be treated as data points, not definitive intelligence.
Evidence
“one of the projects that we were working on last year is looking at how AI can be used for predicting geopolitical events, both for military applications, but also for State Department applications” [125]. “It is now kind of the foundational way we need to think about geopolitics, especially as this technology is rapidly evolving” [128].
Major discussion point
Practical AI use cases envisioned
Topics
Artificial intelligence | Social and economic development
Explainable outputs & accountability
Explanation
She stresses that AI must provide transparent explanations of its outputs so that human decision‑makers remain accountable.
Evidence
“That they can explain what the outputs are” [85]. “Finished intelligence, should always… ultimately be done by a human who is accountable to their policymakers” [102]. “This is how it came to the output that it did” [104].
Major discussion point
Principles for responsible AI in diplomacy
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Capacity development & AI literacy in the public sector
Explanation
Balakrishnan highlights the need to build AI literacy across the public sector, from intelligence agencies to commerce, to ensure effective and responsible adoption.
Evidence
“building up AI literacy within the public sector, not just at the military level, but within the intelligence community, within the State Department, and even within, like, commerce, OPM, all the, like, any federal sector employee” [160]. “And this is something that we are really focusing on, especially about how to build up AI literacy within the public sector” [160].
Major discussion point
Adoption barriers & capacity building
Topics
Capacity development | Artificial intelligence
Ensure AI solves real problems, not create new ones
Explanation
She cautions that AI should be introduced only when it addresses a genuine need, otherwise it risks generating new problems.
Evidence
“the way to think about AI is also is it actually solving a problem or are you just trying to introduce it to create a new problem” [153].
Major discussion point
Challenges: bias, data security, power asymmetry
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Robyn Scott
Human‑above‑algorithm principle
Explanation
Scott proposes a heuristic that places humans above algorithmic systems, using tools to further goals while retaining agency.
Evidence
“And the framing and heuristic I found most helpful for this overall is this idea of this recently merged of being below or above the algorithm” [16]. “If you’re above the algorithm, you are using tools to further your goals” [48]. “We need, when we think about closing that capability gap, and I think in diplomacy, to keep moving people up above the algorithm” [49]. “If you’re below the algorithm, you might be an Uber driver being dispatched, an Amazon packing worker being allocated to put stuff into boxes” [55].
Major discussion point
Principles for responsible AI in diplomacy
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Tools must preserve human responsibility & agency
Explanation
She stresses that AI should augment human decision‑making without eroding agency, warning against complacency and over‑reliance on AI.
Evidence
“but we’re not talking about human agency in the process and maintaining it” [62]. “So I think giving us the human tools and the psychological sort of counter‑arguments and weaponry to deal with this is really, really important” [60]. “We risk getting into a zero sum dynamic where … the agency drains away to AI and that all comes at a cost to humans” [148].
Major discussion point
Principles for responsible AI in diplomacy
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Bias, data security & data‑poisoning risks
Explanation
Scott highlights the opacity of model development and the danger of data poisoning, especially in adversarial negotiations, calling for safeguards.
Evidence
“people developing these models don’t even have full legibility over how they’re working” [72]. “adversarial negotiations open up this whole possibility of data poisoning of training set differentiation etc” [145]. “If AI is to be a useful neutral mediator … what do we do about data poisoning and prompt injection and those kinds of risks?” [143].
Major discussion point
Challenges: bias, data security, power asymmetry
Topics
Building confidence and security in the use of ICTs | Data governance
Adoption barriers – pilotitis & skill gaps
Explanation
She notes that many organizations are stuck in pilot projects without scaling and that leaders often lack hands‑on experience, creating a knowledge gap.
Evidence
“Increasingly, we’re in a pilotitis zone where almost everyone’s got pilots” [157]. “One of the biggest gaps is leaders not using the technology themselves, which is a real problem because you can’t understand this technology in the abstract” [75]. “It’s not even getting to how do we actually understand the basics of this technology” [77].
Major discussion point
Adoption barriers & capacity building
Topics
Capacity development | Building confidence and security in the use of ICTs
AI as a catalyst for value creation
Explanation
Scott emphasizes that AI offers a substantial opportunity to generate new value in diplomatic work, unlocking efficiencies and insights that were previously unattainable.
Evidence
“There is a huge value creation opportunity” [5].
Major discussion point
Value creation & opportunity
Topics
Artificial intelligence | Social and economic development
Speed of AI change demands proactive adaptation
Explanation
She warns that the rapid evolution of AI technologies requires diplomats and institutions to feel and respond to the speed of change, otherwise they risk falling behind.
Evidence
“You’ve got to feel the speed of change” [15].
Major discussion point
Adaptation to rapid AI evolution
Topics
Artificial intelligence | Capacity development
AI excels at data synthesis and analysis
Explanation
She points out that AI’s core strength lies in processing large volumes of information quickly, a capability essential for modern diplomatic analysis.
Evidence
“AI is great at that.” [4]
Major discussion point
AI strengths in data processing
Topics
Artificial intelligence | Data governance
Collaboration with research institutions strengthens AI integration
Explanation
She highlights the partnership with Stanford HAI as an example of how academic‑industry collaboration can accelerate the development and responsible deployment of AI tools in diplomacy.
Evidence
“Stanford HAI is one of our collaborators” [14].
Major discussion point
Partnerships for capacity building
Topics
Artificial intelligence | Capacity development
Perceived AI barriers are surmountable
Explanation
Scott expresses confidence that the challenges surrounding AI adoption are not insurmountable, encouraging a problem‑solving mindset rather than paralysis.
Evidence
“I don’t think it’s insurmountable” [3].
Major discussion point
Overcoming adoption challenges
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Active use of AI tools is essential for impact
Explanation
She stresses that merely having AI tools is insufficient; practitioners must actually use them to realize their benefits in diplomatic workflows.
Evidence
“You’ve got to use it” [2].
Major discussion point
Adoption imperative
Topics
Artificial intelligence | Capacity development
Addressing fear and uncertainty about AI
Explanation
Scott acknowledges that AI can feel intimidating and stresses the need to demystify the technology so diplomats can engage with it confidently, turning fear into constructive curiosity.
Evidence
“That’s terrifying.” [1]
Major discussion point
Challenges: bias, data security, power asymmetry
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Providing concrete data to inform decisions
Explanation
She emphasizes that AI should be used to supply clear, evidence‑based data that underpins diplomatic analysis and policy making, moving beyond anecdote to measurable insight.
Evidence
“Let me give you some data.” [12]
Major discussion point
AI strengths in data processing
Topics
Data governance | Artificial intelligence
Facilitating cross‑sector connections and collaborations
Explanation
Scott highlights the importance of linking diplomatic actors with research institutions and tech partners, creating ecosystems where expertise and tools can be shared effectively.
Evidence
“I’m very happy to make that connection.” [10]. “Stanford HAI is one of our collaborators.” [14]
Major discussion point
Partnerships for capacity building
Topics
Capacity development | The enabling environment for digital development
Encouraging intergenerational AI adoption
Explanation
She points out that showcasing older professionals successfully using AI can inspire broader uptake across age groups, helping to close skill gaps within diplomatic services.
Evidence
“older and see them using it.” [6]
Major discussion point
Adoption barriers & capacity building
Topics
Capacity development | Artificial intelligence
Establishing baseline AI readiness
Explanation
Scott stresses that any AI integration effort must begin with a clear assessment of current capabilities and gaps, establishing a baseline before moving to more advanced deployments.
Evidence
“So that’s where we’re starting from.” [7]
Major discussion point
Adoption barriers & capacity building
Topics
Capacity development | Artificial intelligence
Defining a roadmap for AI integration
Explanation
She highlights the importance of articulating a forward‑looking vision of where AI can take diplomatic work, setting concrete milestones to guide progressive implementation.
Evidence
“On where we can get to.” [8]
Major discussion point
Adaptation to rapid AI evolution
Topics
Artificial intelligence | The enabling environment for digital development
Cutting through AI hype to focus on practical use
Explanation
Scott points out that the flood of AI discussion can obscure real‑world needs, urging diplomats to prioritize tangible applications over buzzwords.
Evidence
“So there’s lots of AI talk.” [11]
Major discussion point
Principles for responsible AI in diplomacy
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Leadership commitment to act on AI challenges
Explanation
She demonstrates a proactive stance by declaring personal intent to address AI issues, signalling that senior leaders must move beyond rhetoric to concrete action.
Evidence
“And I’m going to do something about it.” [13]
Major discussion point
Overcoming adoption challenges
Topics
Capacity development | Artificial intelligence
Major discussion point
Adaptation to rapid AI evolution
Topics
Artificial intelligence | Capacity development
Slavina Ancheva
Speech speed
193 words per minute
Speech length
880 words
Speech time
273 seconds
Complexity of diplomatic negotiations & AI support
Explanation
Ancheva underscores the inherent complexity and evolving nature of negotiations, arguing that AI can help diplomats manage the massive information load and the high strategic stakes involved.
Evidence
“So you know very well that negotiations are complex and they evolve over time” [15]. “Well, for one, there’s a whole lot of information that needs to be managed” [105]. “The potential for AI to augment many of these challenges and processes” [6]. “So with that being said, how can AI help?” [14].
Major discussion point
Complexity of diplomatic negotiations & AI support
Topics
Artificial intelligence | Capacity development
Human touch, not replacement
Explanation
She stresses that AI should support diplomats rather than replace them, preserving the interpersonal nature of diplomacy.
Evidence
“We’re not looking to replace diplomats or negotiators here, but just to give them the tools to manage these complexities much better” [117]. “And we’d really like to stress that this is a fundamentally interpersonal process” [116].
Major discussion point
Principles for responsible AI in diplomacy
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Co‑design with diplomats for capacity building
Explanation
She describes conducting one‑on‑one interviews with diplomats to embed their perspectives into AI tool design, ensuring relevance and usability.
Evidence
“So a lot of these interviews have really been integrated across the different work streams of our project, and we really put diplomats and practitioners at the heart of the rest of the work that we’re doing” [165]. “And a large part of the work that we’ve been doing is sitting down for one‑on‑one interviews with all of them and really getting a sense of how they think” [167].
Major discussion point
Adoption barriers & capacity building
Topics
Capacity development | Artificial intelligence
Responsible deployment of AI tools
Explanation
She calls for sober assessment of risks and benefits when integrating AI into diplomatic workflows.
Evidence
“So they’re being very forthcoming in that, and I think that allows us to take a really sober look at what are the risks of integrating these tools” [144]. “and the need for responsible deployment of these tools” [59].
Major discussion point
Challenges: bias, data security, power asymmetry
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Time pressure and impact of complexity
Explanation
She notes that the sheer volume of information and tight timelines intensify the need for effective AI assistance.
Evidence
“And finally, there’s the time pressure” [152]. “So with that being said, what are some of the impacts of this complexity?” [149].
Major discussion point
Complexity of diplomatic negotiations & AI support
Topics
Artificial intelligence | Capacity development
Audience
Speech speed
158 words per minute
Speech length
399 words
Speech time
151 seconds
Bias, data security & power asymmetry concerns
Explanation
Audience members raise questions about data poisoning, prompt injection, and unequal data access that could shift diplomatic power balances.
Evidence
“If AI is to be a useful neutral mediator in disputes or an assistant to a mediator, a human mediator, then what do we do about data poisoning and prompt injection and those kinds of risks?” [143]. “how do you think this would impact balance of power like given that every country has different access to the kind of data sets that they have…” [154].
Major discussion point
Challenges: bias, data security, power asymmetry
Topics
Building confidence and security in the use of ICTs | Data governance
Cultural inclusion & multilingualism
Explanation
The audience emphasizes the need for AI systems to embed diverse cultural and linguistic data to avoid bias and ensure equitable representation.
Evidence
“How can we ensure that the diverse cultural inputs of the world’s most diverse countries, of different societies, are… embedded in the data sets and the models which inform negotiations” [139]. “So, and what we want to do, how this cultural education can be supported by AI” [138].
Major discussion point
Challenges: bias, data security, power asymmetry
Topics
Closing all digital divides | Data governance
Agreements
Agreement points
Human authority must remain central in AI-assisted diplomacy
Speakers
– Charlie Posniak
– Gabriela Ramos
– Robyn Scott
Arguments
AI tools should augment human negotiators rather than replace them, maintaining human authority in decision-making
AI should be treated as supporting cast while humans remain in the driver’s seat and receive credit for outcomes
Maintaining human agency is crucial – people should be ‘above the algorithm’ rather than ‘below’ it
Summary
All speakers strongly agree that humans must maintain ultimate control and responsibility in diplomatic negotiations, with AI serving as a support tool rather than a replacement for human judgment
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Transparency and explainability are essential for AI tools in diplomacy
Speakers
– Charlie Posniak
– Gabriela Ramos
– Nandita Balakrishnan
Arguments
AI systems must be modular and transparent to maintain accountability in high-stakes negotiations
Source transparency is essential – users should always know what sources AI recommendations are based on
Users need to understand AI outputs and be able to explain how they reached their assessments
Summary
There is strong consensus that AI systems used in diplomacy must be transparent, explainable, and allow users to understand and verify the sources and reasoning behind recommendations
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
AI can significantly enhance information processing and analysis capabilities
Speakers
– Slavina Ancheva
– Gabriela Ramos
– Nandita Balakrishnan
Arguments
AI can help manage complexity in multilateral negotiations by tracking positions of multiple countries and stakeholders
AI can assist in processing vast amounts of negotiation documents, transcripts, and drafts more efficiently
AI can help identify counterarguments and alternative perspectives that human analysts might miss
Summary
All speakers agree that AI’s ability to process large volumes of information and identify patterns or insights that humans might miss is one of its most valuable applications in diplomatic contexts
Topics
Artificial intelligence | Data governance
Cultural representation and bias mitigation are critical concerns
Speakers
– Gabriela Ramos
– Audience
Arguments
Cultural representation in AI models is crucial to avoid individualistic bias and capture diverse philosophical perspectives like Ubuntu
There are significant risks of data poisoning and prompt injection when AI serves as a neutral mediator
Summary
There is agreement that AI systems must address cultural biases and ensure diverse representation to be effective in international diplomatic contexts
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Closing all digital divides
Similar viewpoints
Both speakers identify a significant gap between the potential of AI in the public sector and its actual implementation, with the public sector lagging behind in adoption
Speakers
– Nandita Balakrishnan
– Robyn Scott
Arguments
The public sector has been in the passenger seat regarding AI adoption compared to private sector
Public servants are optimistic about AI potential but there’s a gap between AI talk and actual implementation
Topics
Artificial intelligence | Capacity development | The enabling environment for digital development
Both emphasize the importance of maintaining a critical, questioning stance toward AI rather than blindly trusting its outputs
Speakers
– Gabriela Ramos
– Robyn Scott
Arguments
AI should be treated as supporting cast while humans remain in the driver’s seat and receive credit for outcomes
Users should maintain a questioning, battle mentality with AI technology rather than blind trust
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers emphasize that AI applications in diplomacy require sophisticated, multi-method approaches rather than simple chatbot solutions
Speakers
– Charlie Posniak
– Nandita Balakrishnan
Arguments
Language models alone are insufficient; need integration with game theory, decision analysis, and machine learning methods
AI has fundamentally changed the threat landscape and scope for global competition, making it central to geopolitics
Topics
Artificial intelligence
Unexpected consensus
Need for systematic evaluation and responsible deployment
Speakers
– Robyn Scott
– J. Michael McQuade
– Charlie Posniak
Arguments
Many organizations are stuck in ‘pilotitis’ – running pilots without proper evaluation plans
Need for rigorous evaluation methodologies and responsible deployment guidelines for AI tools
AI systems must be modular and transparent to maintain accountability in high-stakes negotiations
Explanation
Despite coming from different sectors (government innovation, academic research, and technical development), all speakers converged on the critical need for systematic evaluation and responsible deployment practices rather than ad hoc AI implementation
Topics
Artificial intelligence | The enabling environment for digital development
AI should solve existing problems rather than create new ones
Speakers
– Nandita Balakrishnan
– Gabriela Ramos
Arguments
AI should only be introduced when it solves an actual problem, not to create new problems
Cultural representation in AI models is crucial to avoid individualistic bias and capture diverse philosophical perspectives like Ubuntu
Explanation
Both speakers, despite their different backgrounds (intelligence analysis and international diplomacy), agreed on the importance of purposeful AI implementation that addresses real needs rather than forcing AI solutions where they’re not needed
Topics
Artificial intelligence | The enabling environment for digital development
Overall assessment
Summary
There is remarkably strong consensus among speakers on key principles: human authority must remain central, AI systems must be transparent and explainable, AI excels at information processing and analysis, and cultural representation is crucial. The main areas of agreement span technical implementation, ethical considerations, and practical deployment challenges.
Consensus level
High level of consensus with no fundamental disagreements identified. This strong alignment suggests a mature understanding of both the opportunities and risks of AI in diplomacy, and indicates that the field may be ready for coordinated development of responsible AI tools for diplomatic applications. The consensus spans practitioners, researchers, and policy experts, suggesting broad-based support for the principles outlined.
Differences
Different viewpoints
Approach to AI integration – comprehensive toolkit vs. LLM-focused solutions
Speakers
– Charlie Posniak
– General audience/practitioners
Arguments
Language models alone are insufficient; need integration with game theory, decision analysis, and machine learning methods
Why can’t you just ask an LLM? Lots of people are interested in trying to see if language models can simulate diplomacy or if chatbots can guide people through a negotiation
Summary
Charlie argues against the common approach of relying solely on language models, advocating for a multi-method approach that integrates established theoretical frameworks, while many practitioners want simple LLM-based solutions
Topics
Artificial intelligence
Level of AI agency vs. human control
Speakers
– Robyn Scott
– General AI enthusiasm
Arguments
Risk of getting drunk on the idea of AI agency while not talking about human agency – need to keep people above the algorithm
Public servants are optimistic about AI potential with huge possibility in the public sector
Summary
Robyn warns against excessive enthusiasm for AI agency that could diminish human control, while acknowledging widespread optimism about AI’s potential in government
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Trust and skepticism toward AI outputs
Speakers
– Robyn Scott
– Gabriela Ramos
Arguments
Users should maintain a questioning, battle mentality with AI technology rather than blind trust
AI should be treated as supporting cast while humans remain in the driver’s seat and receive credit for outcomes
Summary
While both advocate for human control, Robyn emphasizes maintaining an oppositional stance toward AI, while Gabriela focuses more on keeping AI in a supporting role with humans taking credit
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Unexpected differences
Scope of AI implementation in government
Speakers
– Nandita Balakrishnan
– Robyn Scott
Arguments
Government officials need AI literacy training across all departments, not just military applications
Many organizations are stuck in ‘pilotitis’ – running pilots without proper evaluation plans
Explanation
While both work on government AI adoption, Nandita advocates for broad, comprehensive AI integration across all government departments, while Robyn warns about the current trend of unfocused pilot programs without proper evaluation, suggesting a more cautious, systematic approach
Topics
Artificial intelligence | Capacity development
Relationship with AI technology
Speakers
– Robyn Scott
– Nandita Balakrishnan
Arguments
Users should maintain a questioning, battle mentality with AI technology rather than blind trust
AI can help identify counterarguments and alternative perspectives that human analysts might miss
Explanation
Unexpectedly, despite both being AI advocates, Robyn emphasizes maintaining an adversarial relationship with AI systems, while Nandita focuses more on AI as a collaborative tool that enhances human analytical capabilities
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Overall assessment
Summary
The discussion revealed surprisingly few fundamental disagreements among speakers, with most tensions arising around implementation approaches rather than core principles. Main areas of disagreement centered on: 1) Technical approach to AI integration (comprehensive vs. simple solutions), 2) Degree of skepticism toward AI systems, and 3) Pace and scope of AI adoption in government
Disagreement level
Low to moderate disagreement level. The speakers largely aligned on fundamental principles (human control, transparency, responsible deployment) but differed on tactical approaches. This suggests a mature field where basic ethical frameworks are established, but practical implementation strategies are still being debated. The implications are positive for the Move 37 project, as there appears to be broad consensus on core values with room for diverse implementation approaches.
Partial agreements
All speakers agree that humans must maintain ultimate control and responsibility, but they differ on implementation approaches: Charlie emphasizes technical modularity and transparency; Nandita focuses on explainability and accountability; Robyn advocates psychological tools and critical engagement; and Gabriela emphasizes questioning AI assumptions and preserving decision-making space
Speakers
– Charlie Posniak
– Nandita Balakrishnan
– Robyn Scott
– Gabriela Ramos
Arguments
AI tools should augment human negotiators rather than replace them, maintaining human authority in decision-making
Users need to understand AI outputs and be able to explain how they reached their assessments
Maintaining human agency is crucial – people should be ‘above the algorithm’ rather than ‘below’ it
AI should be treated as supporting cast while humans remain in the driver’s seat and receive credit for outcomes
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both recognize the importance of addressing bias and representation in AI systems, but Gabriela focuses on cultural and philosophical diversity while the audience member emphasizes security vulnerabilities and technical manipulation risks
Speakers
– Gabriela Ramos
– Audience
Arguments
Cultural representation in AI models is crucial to avoid individualistic bias and capture diverse philosophical perspectives like Ubuntu
There are significant risks of data poisoning and prompt injection when AI serves as a neutral mediator
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs
Takeaways
Key takeaways
AI should augment rather than replace human negotiators in diplomacy, with humans maintaining ultimate authority and decision-making responsibility
Diplomatic negotiations require a multi-method AI approach combining language models with game theory, decision analysis, and machine learning rather than relying solely on chatbots
AI can significantly help manage the complexity of multilateral negotiations by tracking positions, processing vast amounts of documents, and identifying historical precedents and counterarguments
Cultural representation and linguistic diversity in AI models is crucial to avoid bias and capture different philosophical perspectives in international negotiations
There is a significant gap between AI optimism and actual implementation in the public sector, with many organizations stuck in pilot phases without proper evaluation
Transparency and explainability of AI outputs are essential for maintaining accountability in high-stakes diplomatic negotiations
AI adoption in diplomacy could create power imbalances between countries with different levels of access to AI capabilities and datasets
Users must maintain a questioning approach to AI recommendations and avoid ‘sleeping at the wheel’ by assuming AI is 100% accurate
Resolutions and action items
The MOVE 37 initiative will continue developing AI tools for diplomatic negotiations as part of the Belfer Center’s Emerging Tech Program
The team will conduct ongoing interviews with current and former diplomats to understand their processes and requirements for AI tools
The project will develop evaluation methodologies and policy guidelines for responsible AI deployment in diplomatic contexts
The team invited collaboration and input from the global community to build this new discipline
Participants offered to make connections for cultural education initiatives and multilingual AI development projects
Unresolved issues
How to ensure diverse cultural inputs are embedded in AI datasets and models used for international negotiations
How to address data poisoning and prompt injection risks when AI serves as a neutral mediator
How to balance the competitive advantages of AI tools with the need for equitable access in diplomatic negotiations
How to maintain human agency and prevent over-reliance on AI recommendations in high-stakes situations
How to develop proper evaluation frameworks for AI pilots in government settings
How to build sufficient AI literacy across all levels of government beyond just military applications
How to handle strategic misrepresentation and deception inherent in diplomatic negotiations through AI systems
Suggested compromises
AI tools should be modular and transparent to allow users to understand which computational systems are supporting different aspects of analysis
AI should be treated as ‘supporting cast’ while humans remain in the driver’s seat and receive credit for outcomes
Users should maintain a ‘battle mentality’ with AI technology – questioning outputs rather than blindly trusting them
AI implementation should focus on solving actual problems rather than introducing technology for its own sake
Source transparency should be maintained so users always know what sources AI recommendations are based on
AI augmentation should be appropriately scoped for the specific team, institution, and setting where it will be used
Thought provoking comments
When you are in front of a person and you’re trying to convince that person that he’s alone nobody’s supporting his position and therefore he should not continue blocking the negotiation how would it be that you can have more information about that person what moves them how can you offer something that will be important for them because this is the kind of things that we do negotiating… but that’s also risky because it deals with privacy and all of those things
Speaker
Gabriela Ramos
Reason
This comment reveals the deeply personal and psychological dimensions of high-stakes diplomacy that AI tools would need to navigate. It highlights the tension between effectiveness and ethics in AI-augmented negotiation, introducing the critical question of privacy boundaries.
Impact
This shifted the discussion from technical capabilities to ethical considerations and the human psychology of negotiation. It established privacy and ethical boundaries as central concerns that would thread through subsequent comments from other panelists.
The public sector has been more in the passenger seat, if not the backseat, especially over the last decade… Of the public servants who are implementing AI in the public sector globally… Only 26% of them say they understand their own country’s ethical frameworks. So approximately three-quarters of all the people rolling out this technology are freestyling. That’s terrifying.
Speaker
Nandita Balakrishnan
Reason
This stark statistic exposed a fundamental gap between AI deployment and governance understanding in the public sector. It revealed that the very people implementing AI tools lack understanding of the ethical frameworks meant to guide their use.
Impact
This comment introduced urgency to the discussion about AI literacy and governance. It reframed the conversation from ‘how can we use AI?’ to ‘are we prepared to use AI responsibly?’ and influenced subsequent discussions about training and preparedness.
We are now getting quite drunk on the idea of AI agency… but we’re not talking about human agency in the process and maintaining it… We need, when we think about closing that capability gap, and I think in diplomacy, to keep moving people up above the algorithm.
Speaker
Robyn Scott
Reason
This metaphor of being ‘drunk on AI agency’ powerfully captured the risk of humans ceding too much control to AI systems. The ‘above/below the algorithm’ framework provided a clear conceptual tool for thinking about human-AI relationships.
Impact
This comment fundamentally reframed the entire project’s approach from AI augmentation to human empowerment through AI. It influenced how other panelists discussed maintaining human authority and became a recurring theme in subsequent responses.
Sometimes it comes across as so smart and so brilliant and comes up with a whole lot of counter-arguments… That it’s almost overwhelmingly smart, and you’re like, it must have covered everything. That’s a default… I already have a heuristic that whenever I open my phone and I’m dealing with anything with an algorithm, I am in opposition to that algorithm because its interests don’t generally coincide with mine.
Speaker
Robyn Scott
Reason
This insight into the psychological trap of AI’s apparent intelligence was particularly valuable because it came from a power user’s experience. The ‘battle mentality’ approach offered a practical psychological framework for maintaining critical thinking.
Impact
This deepened the discussion about the human psychology of AI interaction, moving beyond technical considerations to practical cognitive strategies. It influenced Gabriela’s subsequent comments about questioning AI outputs and maintaining human decision-making authority.
I find your draft very individualistic. It’s always about the human. It’s always about the outcomes for people, improving their welfare. And at the end, what I’m thinking about is the Ubuntu philosophy, which is I am because you are, and we are because you are, and we are because it’s nature, and we are interlinked. And therefore, how do you capture this when the models that we are developing are maximizing individual welfare?
Speaker
Gabriela Ramos (quoting Namibian representative)
Reason
This example brilliantly illustrated how AI systems trained on Western individualistic frameworks might miss entirely different cultural worldviews. The Ubuntu philosophy example showed how fundamental assumptions embedded in AI could undermine cross-cultural negotiation.
Impact
This comment elevated the discussion to address fundamental philosophical and cultural biases in AI systems. It connected directly to the audience question about cultural representation and influenced the conversation toward considering diverse epistemological frameworks in AI development.
Don’t look at the technology… because we always focus on the technology the countries that have introduced so much technology in their educational systems didn’t get better student outcomes because of content… if you don’t produce the content the tools will not make it.
Speaker
Gabriela Ramos
Reason
This comment challenged the entire premise of technology-first approaches, arguing that content and context matter more than technological sophistication. It provided a sobering reality check about technology implementation.
Impact
This shifted the final portion of the discussion toward fundamental questions about problem-solution fit and whether AI actually addresses existing problems or creates new ones, as Nandita then reinforced.
Overall assessment
These key comments transformed what could have been a purely technical discussion about AI capabilities into a nuanced exploration of human-AI relationships, cultural representation, ethical boundaries, and psychological dynamics. The most impactful comments came from practitioners with real-world experience who could ground theoretical possibilities in practical realities. Gabriela’s insights about cultural philosophy and negotiation psychology, Nandita’s stark statistics about implementation gaps, and Robyn’s frameworks about human agency and AI psychology created a multi-layered conversation that addressed technical, ethical, cultural, and psychological dimensions. The discussion evolved from initial optimism about AI capabilities toward a more sophisticated understanding of the challenges and responsibilities involved in AI-augmented diplomacy, ultimately emphasizing human empowerment over technological replacement.
Follow-up questions
How can we ensure that diverse cultural inputs from the world’s most diverse countries and societies are embedded in the datasets and models which inform negotiations?
Speaker
Sam Dawes (audience member)
Explanation
This addresses the critical need to prevent cultural bias in AI systems used for diplomacy and ensure global representation in training data
What do we do about data poisoning and prompt injection risks when AI is used as a neutral mediator or assistant to human mediators?
Speaker
Sam Dawes (audience member)
Explanation
This highlights security vulnerabilities that could compromise the integrity of AI-assisted diplomatic processes
How would AI tools impact the balance of power given that every country has different access to datasets, especially when state A knows everything about other players and others don’t?
Speaker
Arman (audience member)
Explanation
This raises concerns about how unequal access to AI capabilities could create asymmetric advantages in diplomatic negotiations
How can AI help with position tracking and mapping the positioning of countries during negotiations?
Speaker
Gabriela Ramos
Explanation
She identified this as something that would have been ‘super useful’ during her UNESCO negotiations, suggesting a need for better tools to understand negotiating positions
How can AI provide more information about individual negotiators – what moves them and how to offer something important to them – while addressing privacy concerns?
Speaker
Gabriela Ramos
Explanation
This explores the potential for AI to enhance strategic thinking in negotiations while navigating ethical boundaries
How can we develop evaluation methodologies for AI pilots in the public sector, given that only 45% of leaders with AI pilots have plans to evaluate them?
Speaker
Robyn Scott
Explanation
This addresses a critical gap in measuring the effectiveness of AI implementations in government
How can we build up AI literacy within the public sector across all departments, not just military applications?
Speaker
Nandita Balakrishnan
Explanation
This identifies the need for comprehensive AI education across government agencies to enable effective adoption
How can we maintain human agency while leveraging AI agency, avoiding a zero-sum dynamic where agency drains away to AI?
Speaker
Robyn Scott
Explanation
This addresses the fundamental challenge of keeping humans ‘above the algorithm’ rather than being controlled by it
How can we capture philosophical frameworks like Ubuntu (‘I am because you are’) in AI models that tend to maximize individual welfare?
Speaker
Gabriela Ramos
Explanation
This highlights the challenge of incorporating non-Western philosophical approaches into AI systems designed for diplomatic applications
What sources are AI recommendations based on, and how can we ensure transparency in the information foundation of AI outputs?
Speaker
Gabriela Ramos
Explanation
This emphasizes the need for explainable AI that can trace its reasoning back to specific sources for accountability in high-stakes negotiations
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.