How AI Is Transforming Diplomacy and Conflict Management
20 Feb 2026 13:00h - 14:00h
Summary
The Belfer Center’s Emerging Tech Program launched the MOVE 37 project to examine how artificial intelligence can be integrated into diplomatic negotiation and policy-making processes [4-5][19-22]. Panelists highlighted that modern negotiations involve dozens of parties, multiple issues, and thousands of documents, creating information overload and time pressure that AI could help manage [41-48][66-72]. Gabriela Ramos illustrated this by describing the UNESCO AI ethics recommendation, which required processing 55,000 public comments and mapping the positions of 193 countries, a task she said would have benefited from more AI support [139-144]. Nandita Balakrishnan noted that AI access varies across academia, government, and industry, and that public-sector analysts still perform labor-intensive manual assessments that could be accelerated with AI tools [170-176][181-189]. Charlie Posniak warned that large language models are opaque, lack verifiable fluency, and cannot alone provide the accountability needed for high-stakes negotiations, emphasizing the need for a broader set of computational methods built over the past 80 years [82-85][87-92]. He identified three technical challenges: representing dynamic, strategic interactions; handling intentional misrepresentation by actors; and defining success criteria for multi-party outcomes [94-100]. To address these, his team proposes a cyclical workflow of research, analysis, strategy formulation, and real-time execution, supported by autonomous research agents, data validation, and live transcription services [102-107][108-115]. All speakers agreed that human authority must remain central, with AI tools kept modular, transparent, and scoped to augment rather than replace negotiators [117-121][284-290]. 
Robyn Scott presented survey results showing that over 90% of public servants are optimistic about AI’s potential, yet most pilots lack systematic evaluation and many officials do not understand their own ethical frameworks, underscoring a skills gap [228-236][242-250]. She also cautioned against “sleeping at the wheel”, over-reliance on AI that can lead to false confidence, and advocated keeping users “above the algorithm” to preserve agency [254-259][330-334]. Gabriela Ramos stressed that cultural and linguistic diversity must be reflected in training data to avoid bias, citing examples where single-language models mischaracterized negotiators’ perspectives [391-399][423-425]. The panel concluded that while AI can improve data handling, predictive insights, and strategic option generation, its deployment must be carefully governed to maintain accountability, cultural sensitivity, and human judgment [286-290][350-357]. Michael McQuade summarized that MOVE 37 will develop AI-augmented tools, evaluation methodologies, and collaborative networks, positioning the project as an early step toward a new discipline at the intersection of technology and international diplomacy [125-129][204-209][432-438].
Keypoints
Major discussion points
– AI as an augmentation tool for diplomatic negotiations, not a replacement for humans – the panel repeatedly stressed that negotiations are “fundamentally interpersonal” and that any AI system must keep “human authority … central” and serve as a support rather than a decision-maker [24-30][118-121].
– Technical and ethical challenges of deploying AI in diplomacy – concerns were raised about model opacity, accountability, strategic misrepresentation, cultural bias, data poisoning, and “sleeping at the wheel” over-reliance [83-86][94-100][322-327][330-334][391-403].
– Concrete AI functionalities being explored – the team outlined a task-breakdown (research, analysis, strategizing, execution) and described prototypes such as autonomous research agents, real-time transcription/translation, position-tracking dashboards and predictive geopolitics models [102-108][284-290][286-290].
– Capacity-building and institutional adoption gaps – surveys show public-sector optimism about AI but also “pilotitis,” low evaluation rates, limited AI literacy and a large skills gap; the panel highlighted the need for training, clear ethical frameworks, and systematic rollout of pilots [226-242][248-250].
– Ensuring cultural and linguistic diversity in AI systems – participants warned that AI must reflect the world’s many languages and cultural perspectives to avoid reinforcing individualistic or biased outcomes; they cited UNESCO’s multilingual work and the Swiss multilingual LLM initiative as examples [391-403][424-425].
Overall purpose / goal
The discussion was convened to launch and shape the MOVE 37 project – an initiative of the Belfer Center’s Emerging Tech Program that aims to design, prototype, and responsibly integrate artificial-intelligence tools into the practice of diplomacy and negotiation. The organizers sought input from scholars, practitioners, and policymakers to map the problem space, identify research and development priorities, and build a collaborative community that will guide the project’s roadmap.
Overall tone and its evolution
– Opening segment (0:00-10:00): Formal and upbeat, emphasizing opportunity, collaboration, and the excitement of pioneering a new research frontier [1-7][19-23].
– Middle segment (10:00-30:00): Becomes more analytical and cautious, detailing the complexity of negotiations, the technical limits of LLMs, and the ethical risks of opacity, bias, and over-reliance [83-86][94-100][322-327][330-334].
– Later segment (30:00-end): Shifts toward pragmatic optimism, focusing on concrete use cases, capacity building, and next steps while still acknowledging the need for vigilance and human oversight [226-242][284-290][391-403].
Overall, the tone moves from enthusiastic introduction, through a sober appraisal of challenges, to a constructive, solution-oriented outlook that calls for continued collaboration and responsible development.
Speakers
– J. Michael McQuade – Director of the Emerging Tech Program at the Belfer Center, runs the MOVE 37 initiative; expertise in international policy, technology, geopolitics, and AI for diplomacy. [S8]
– Charlie Posniak – Full-time fellow and research fellow at the Belfer Center; works on AI-enabled diplomatic negotiation tools and policy guidelines. [S1]
– Slavina Ancheva – MPP student and research fellow at the Belfer Center; focuses on framing negotiation complexity and AI augmentation for diplomacy. [S4]
– Gabriela Ramos – Former Assistant Director General for Social and Human Sciences at UNESCO; expertise in AI ethics, international negotiations, and the UNESCO AI recommendation. (Per the transcript, the UN AI Advisory Panel co-chair and Spain-India Ambassador for AI roles belong to Carme Artigas, who was absent.) [S14]
– Nandita Balakrishnan – Director of Intelligence at the Special Competitive Studies Project (SCSP); expertise in intelligence, AI, geopolitics, and public-sector AI adoption.
– Robyn Scott – CEO and co-founder of Apolitical; collaborates with Stanford HAI; expertise in government innovation, AI training for public-sector policymakers. [S6]
– Audience – Various attendees (e.g., senior advisor Sam Dawes, Indian classical-dance teacher Devika Rao, JPL South Asia staff member Arman); roles not specified.
Additional speakers:
– Sam Dawes – Senior Advisor to the Oxford University AI Governance Initiative and Director of Multilateral AI; background in diplomacy (worked for Kofi Annan, UK Foreign Office, Cabinet Office).
– Devika Rao – Indian classical dance teacher; involved in cultural-education frameworks linking India and the UK.
– Arman – Staff member at JPL South Asia; interested in AI’s impact on balance of power in negotiations.
The session opened with J. Michael McQuade, director of the Belfer Center’s Emerging Tech Program, which “teaches, trains, and does research on the applications of science and technology for international affairs” and convenes scholars, practitioners and students to explore the intersection of technology, science and geopolitics [1-3]. He announced the launch of the MOVE 37 initiative – a component of the Emerging Tech Program created “to look at where emerging technologies are creating new policy frontiers… and the implications… for governance, geopolitics, global stability and global conflict” [4-5]. Because artificial intelligence is a “major aspect of our work” in relating technology to modern geopolitical issues, a panel of experts was introduced: Gabriela Ramos (UNESCO), Nandita Balakrishnan (Special Competitive Studies Project), Robyn Scott, CEO and co-founder of Apolitical, and two Belfer researchers, Charlie Posniak and Slavina Ancheva [6-15].
The gathering’s purpose was to mobilise collaborators for a “major new project… looking at the use of artificial intelligence in diplomacy and negotiation” [19-22]. McQuade stressed that the work is not confined to a small Cambridge team but seeks “collaborators, partners, and input from the community… around the world” to shape how AI will be used responsibly in high-stakes diplomatic processes [23-26]. He framed diplomacy as a fundamentally human activity that could be augmented by AI to improve outcomes while preserving human agency [24-30][118-121].
Agenda and framing - Slavina Ancheva set the discussion’s three focal areas: (1) the current complexity of negotiation processes; (2) AI’s potential to alleviate those challenges; and (3) the need to think beyond large language models (LLMs) toward responsible deployment [40-44]. She asked participants to “close your eyes and imagine” a negotiation with ten agenda items, noting that “it’s not just about those 10 items” because “a lot of other factors… both inside and outside that room” influence the outcome [45-48]. A typical negotiation may involve “seven counterparts from seven political groups, seven different countries… and behind you… 27 other countries you are representing” [48-49], illustrating the multi-layered nature of modern diplomacy. The resulting “information overload” includes “thousands of documents, transcripts, drafts” and is compounded by “finite resources, strategic group-think, and time pressure” [66-72].
Challenges illustrated - Gabriela Ramos described negotiating the UNESCO Recommendation on the Ethics of Artificial Intelligence, a process that involved “193 countries negotiating during COVID” and generated “55,000 comments” [139-144]. She noted that AI could have helped “map the positioning of countries” and provided a “repository of what is the traditional position of certain countries” to streamline briefings and stakeholder outreach [144-145]. Ramos warned that AI tools must avoid “misrepresentation, over-representation of certain cultures, certain languages, assumptions” and that any system should “open a space of human understanding” rather than simply “beat the person in front of me” [354-367]. When asked about cultural inclusivity, she stressed that “culture is expressed by language” and that models need multilingual training to capture philosophies such as Ubuntu, otherwise they risk “maximising individual welfare” at the expense of collective worldviews [391-403][424-425].
Audience questions - Sam Dawes asked (a) how to ensure diverse cultural inputs are embedded in AI models and (b) how to guard against data-poisoning or prompt-injection attacks. Ramos answered that culture is conveyed through language, so training on a wide range of languages and continuously validating source material are essential, and she emphasized the need for continual ground-truth testing to detect poisoning [391-403][424-425][430-432]. Arman raised concerns about the impact on the balance of power when data access is uneven. McQuade responded that the project is examining how AI tools can create competitive leverage but also risk exacerbating asymmetrical information if not widely disseminated [208-212][431-432].
Sectoral perspectives - Nandita Balakrishnan observed that “the public sector has been more in the passenger seat, if not the backseat” while academia and industry enjoy broader toolsets [170-172]. She recounted a mentor pointing out a “ten-year-old data point that completely negates” her analysis, a gap that “AI could have identified and synthesised” [185-189]. From this perspective she argued that AI should be viewed as a “data point” that requires human explanation and accountability, especially in high-stakes policy work [297-304]. Balakrishnan also highlighted AI’s strategic importance for geopolitics, noting that “AI has fundamentally changed the threat landscape” and that “the public sector must leverage these tools… in intelligence, the State Department, commerce, OPM” to stay competitive [194-200]. She cited ongoing projects that use AI to “predict geopolitical events” for both military and diplomatic applications, arguing that demonstrable use-cases are needed to convince policymakers of AI’s value [201-203].
Technical roadmap - Charlie Posniak provided a technical perspective, first dismissing the notion that “you can just ask an LLM” because “their fluency isn’t necessarily verifiable in international and world politics” and the models are “opaque… not always viable” for accountability [82-85]. He reminded the audience of an “80-year-old toolkit” of game theory, decision analysis and machine-learning methods that must be integrated with modern AI rather than replaced by chatbots [86-92]. Posniak identified three core challenges for diplomatic AI: (1) representing dynamic, strategic interactions that evolve over time [94-98]; (2) handling intentional misrepresentation by actors [98-99]; and (3) defining success criteria for multi-party outcomes [100-101]. To address these, his team proposes a cyclical workflow of “research, analysis, strategizing, and execution” supported by “autonomous research agents, source validation, real-time transcription and translation services” and modular, transparent tools [102-115][117-121]. He later expanded on concrete functionalities such as “position-tracking dashboards, strategy sandboxes, red-team training and predictive geopolitics models” that can process “vast amounts of unstructured data” [284-290].
Human-in-the-loop emphasis - During the discussion a brief slip (“woman in the loop”) illustrated the panel’s humor and reinforced the emphasis on keeping humans central to AI-augmented processes [350-352].
Consensus - All panelists agreed that AI must remain an augmentation tool rather than a replacement for diplomats. Both Ancheva and Posniak stressed that negotiations are “fundamentally interpersonal” and that AI should “give them the tools to manage these complexities much better” while keeping “human authority… central” [61-62][117-121]. Ramos echoed this, insisting that any AI-driven recommendation must be “questioned” and that “the AI tools… should not be built to beat the person in front of me” [352-367]. Scott reinforced the idea of staying “above the algorithm” – using AI as support rather than surrendering agency – and warned that without careful framing AI could create a zero-sum dynamic that diminishes human agency [254-259].
Capacity gaps - Robyn Scott presented empirical evidence on capacity gaps. A survey of 5,000 public servants showed that “north of 90% think there is huge possibility in the public sector” yet “only 26% of them say they understand their own country’s ethical frameworks” and many pilots lack systematic evaluation [226-242][248-250]. She described the phenomenon of “sleeping at the wheel,” where users over-trust AI after it reaches high accuracy, leading to false confidence [330-334]. Her “below-the-algorithm / above-the-algorithm” heuristic urges policymakers to “move people up above the algorithm” to preserve decision-making power [256-259].
Strategic implications - McQuade argued that AI will provide “competitive leverage” and that tools should be “dispersed actively and offensively, not defensively” to reshape power balances in both cooperative and adversarial negotiations [208-212][431-432]. Balakrishnan reinforced this view, noting that AI is now “a foundational way we need to think about geopolitics” and that “you cannot divorce AI… when you’re trying to understand geopolitics and foreign policy” [194-196].
Points of disagreement - Posniak warned that the opacity of LLMs makes them unsuitable for treaty-shaping work because “accountability… is not always viable” [82-85], whereas Scott acknowledged the black-box nature but argued that “the lack of transparency is not insurmountable” and that work-arounds are possible [254-259]. A second tension concerned the level of trust to place in AI outputs: McQuade suggested AI could serve as a trusted augmentation tool that aggregates information and offers new levers for negotiators [210-212]; Balakrishnan countered that AI should remain a “data point” requiring human validation and explanation [297-304]. Finally, Ramos advocated for AI that “opens space for human understanding” rather than being used to “beat” counterparts, while Scott cautioned that without careful framing AI could create a zero-sum dynamic that diminishes human agency [354-367][254-259].
Next steps - MOVE 37 will develop a suite of AI-augmented tools covering the four phases identified by Posniak (research, analysis, strategy, execution) [102-115] and will create “evaluation methodologies” to assess their impact [125-129]. The project will continue “one-on-one interviews” with current and former diplomats to capture “process-level insights” and to inform the design of position-tracking repositories and strategic-option generators [267-278]. It will also pursue multilingual, culturally inclusive datasets, drawing on examples such as the Swiss quasi-governmental LLM trained on over 100 languages for diplomatic use [424-425]. Capacity-building initiatives will be launched to raise AI literacy across intelligence, the State Department and other federal agencies, and pilot programs will be equipped with systematic evaluation frameworks to close the “pilotitis” gap [242-250][194-200].
Key take-aways
– AI should augment, not replace, human negotiators.
– Transparency, modularity and human-in-the-loop design are non-negotiable.
– Multilingual, culturally representative data are essential to avoid bias.
– Capacity-building and rigorous evaluation are required to move beyond pilots.
– Governance frameworks must balance strategic advantage with ethical safeguards.
Overall, the discussion highlighted strong consensus on AI as an augmentation tool, the necessity of transparent, modular, human-centered systems, and the importance of multilingual inclusivity and capacity-building. At the same time, disagreements over model opacity, the appropriate level of trust, and whether AI should be framed as a collaborative aid or a competitive lever indicate that MOVE 37 will need flexible governance structures that balance strategic advantage with ethical safeguards. The panel’s thought-provoking remarks – from Posniak’s challenge to “just ask an LLM” [82-85] to Scott’s “below/above the algorithm” heuristic [256-259] – set the tone for a pragmatic, yet cautious, roadmap toward AI-augmented diplomacy.
I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our objective is to teach, train, and do research on subjects related to the applications of science and technology for international affairs. We have scholars, practitioners, students, all working to address the gaps and the opportunities for technology, science, and geopolitics. I’m very delighted to have everybody here today. The Emerging Tech Program, which I have the honor of running, was launched about a year ago, specifically to look at where emerging technologies are creating new policy frontiers, new opportunities to use technology to engage in policy, and the implications that technologies are creating for governance, geopolitics, global stability, and global conflict.
And the MOVE 37 initiative that we’re here to talk to you about today is a part of that program. As you can imagine, in a program that’s relating technologies to modern issues around geopolitics, artificial intelligence is one of the major aspects of our work. We have a terrific panel here today. It’s my pleasure to introduce them by name, and we’ll talk a little bit more about each one in just a moment. The missing chair, which we expect shortly, is Gabriela Ramos, who’s the former Assistant Director General for Social and Human Sciences at UNESCO. Nandita Balakrishnan is the Director of Intelligence at the Special Competitive Studies Project in Washington. And Robyn Scott is the CEO and co-founder of Apolitical.
And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a full-time fellow with the program, research fellow with the program. And Slavina Ancheva is a current student within our program, an MPP student at the Belfer Center. I also want to acknowledge a colleague of ours who is not here, was not able to get to India in time for the conference, Carme Artigas, who is the former co-chair of the UN AI Advisory Panel, who has been an integral part of starting this work and the ongoing progress we are making. Carme is also the Spain-India Ambassador for AI, High Commissioner in Spain-India for AI, and is here with us in spirit and maybe even on the live stream that we’re doing here today.
So a big shout-out to Carme for all of her help. So why are we here? We are embarking on a major new project, specifically looking at the use of artificial intelligence in diplomacy and negotiation. We are here at a conference about the use of artificial intelligence and the implications of artificial intelligence in so many aspects of society. And our work is looking at how one will engage non-human intelligences in the process of diplomacy and negotiation. So we’re here because this is a broad-based project. It is not something that is solely the purview of a small team in Cambridge where we are located, but something for which we are looking for collaborators, partners, and input from the community here and the community around the world as we build what will surely be a place where AI is used, and surely will be a place where we need to be cognizant and careful about how AI will be used in that exercise.
So the work we do plays an increasingly bigger role in shaping the relationships between states and within states. We want to have this conversation about the role for AI specifically because of the global nature and the integrated way that AI will play, and more specifically, how we will use artificial intelligence tools to augment humans in what is at its core a fundamentally human process of negotiation and diplomacy. Diplomatic negotiations are very high stakes. They are very different, as you will hear from our team, very different than classic one-on-one win-lose negotiations or win-win negotiations. They are very much more complex than that, and that means they require both a unique human touch and a unique application of how artificial intelligence might be used in that process.
But it’s also an area where an enormous amount of potential exists to tackle resource constraints, to find better outcomes, and to use artificial intelligence to enable a more stable and prosperous world, and one for which the follow-up to negotiations can be a subject for the tools and applications of modern technology. How we go about that is crucial, and how we talk about it from the beginning is crucial. There are already a number of tools emerging in this space, but it’s our belief that a more rigorous approach is needed, and you’ll hear some of that in just a moment. So what are we going to do? Our team is going to do a brief overview of the way we are thinking about this problem.
And after that, we’re going to have this amazing panel that we will engage in a conversation about views from others who are involved in negotiation and/or diplomacy writ large and their views on how technology can be used, be used well or be used not so well, and what the implications of that will be. So with that, let me turn it over to Charlie and Slavina to talk about the project itself. I think Slavina is up first.
Thank you, Michael. And thank you all for being here this morning. A big welcome. Over the next 10 minutes or so, Charlie and myself would like to present you with a little bit of a framing for the expert discussion that we’ll be getting into right after. We’ll broadly focus on three areas: how negotiation processes currently look, and the complexity that comes with them; the potential for AI to augment many of these challenges and processes; and thinking beyond just LLMs, and the need for responsible deployment of these tools. Before we do that, I’d like you to close your eyes and imagine. You walk into a negotiation, you look down at the agenda, and there’s 10 items on it.
But as any good negotiator, you know that it’s not just about those 10 items. It’s a lot of other factors that are happening both inside and outside that room that are affecting how that negotiation process is happening. So for one example, you’re sitting across seven counterparts from seven political groups, seven different countries, and behind you, you have your own team, but also 27 other countries that you’re representing, that you’ve promised a certain outcome or a certain deal. Of course, this is not my story. It’s the story of Carme Artigas that Michael mentioned, who was one of the chief negotiators of the EU AI Act, and later the UN AI Advisory Body and many other negotiations. But it’s not just her story.
It’s the story of many of you. It’s the negotiations that you’ve engaged in at the UN, at COP, bilaterally. It’s the interagency negotiations within your own organizations. So you know very well that negotiations are complex and they evolve over time. So what might look like just two states negotiating with each other bilaterally is actually a whole set of issues that are on the table. It could be natural resources, it could be AI, it could be climate, and a whole lot of external and internal stakeholders that are also trying to influence that process. We start to dive into some of this complexity. And more than that, there’s a lot of teams that are sitting behind these principal negotiators, the different departments and agencies that are supporting them with evidence, with documents.
And we’d really like to stress that this is a fundamentally interpersonal process. We’re not looking to replace diplomats or negotiators here, but just to give them the tools to manage these complexities much better. And finally, rarely in this world do we have just two states negotiating nowadays. There’s often a third state at the table. In the case of the EU, maybe 27 member states and, of course, hundreds of others that could be out there. So with that being said, what are some of the impacts of this complexity? Well, for one, there’s a whole lot of information that needs to be managed. A simple negotiation can generate thousands of documents, transcripts, drafts. On top of that, there’s a certain amount of finite resources that any team has as they grapple with many other challenges throughout the day.
There’s a lot of strategic elements. Sometimes in groups, you might have a group think or herding that leads you in one direction as opposed to exploring your full set of options. And finally, there’s the time pressure. So most negotiations do have some sort of time element and handover element to future teams. So with that being said, how can AI help? And I’d like to turn over to Charlie.
Thanks, Slavina. So AI systems can now beat some of the best human players at Go, at chess, at video games, at board games. Language models, as we’ve heard, have become increasingly competent at delivering a range of sophisticated legal, academic, technical, and software contributions. The pace of change has been staggering. And so what our interdisciplinary team has been looking at is we’re trying to envision a better future for diplomacy, where computational methods can transform the practice of diplomatic negotiations and statecraft that Slavina just outlined. So supporting better communications, better resolutions, and processes between states can augment their functions. So we’re trying to chart existing technical tools, develop new ones, and provide a range of policy guidelines to ensure that this happens responsibly, safely, effectively.
So the classic question that we get in response is, why can’t you just ask an LLM? Lots of people are interested in trying to see if language models can simulate diplomacy, or if chatbots can guide people through a negotiation all in one step. But ultimately, language models are remarkable, and they need to be carefully scoped, for three key reasons. Firstly, their fluency isn’t necessarily verifiable in international and world politics. Secondly, the opacity, where you can’t tell what’s going on inside a model, is not always viable, because high-stakes negotiations require accountability, both democratically and internally, to understand why recommendations shape treaties in certain ways. And additionally, there’s a toolkit that’s 80 years old here.
We have game theory, decision analysis, machine learning, a great range of theoretical developments that exist precisely to model strategic interactions under uncertainty. And so we see LLMs as playing this role at the heart of a really broad set of learning paradigms that it’s tying together, both supervised and unsupervised, self-supervised learning here. But LLMs provide a really strong way to interact with all of these different learning paradigms and technical architectures that the best advances in AI have been built from. So whether that’s the systems that play chess or Go or board games, these are all pulling together lots of different methods. And if we just rely on chatbots at the heart of things, we miss out on all of the technical developments that the last 80 years have experienced.
But there are three key challenges with trying to expand these techniques into the world of diplomacy and world policy. Firstly, representation. As Slavina was touching on, the game that’s being played here isn’t a board game. These interactions are fundamentally changeable over time. The institutions that constrain the actions of states can be made and unmade over the course of a negotiation.
Secondly, inference. These are environments where there’s real strategic misrepresentation, where people are lying or deceiving or trying to shape outcomes for their own advantage in ways that the current methods aren’t quite well suited to handle. And finally, there’s this sense of specifying success. How can you bring together all of the different counterparts and come up with a relatively coherent set of preferences and priorities over the course of a really massive negotiation? So these are three challenges that we’re trying to embark on. And one of the ways that we’re approaching this is by breaking down the tasks of diplomacy and the tasks of negotiation for AI applications. So just broadly, one of the ways we’ve looked at this is saying that there are some foundational tasks of research, analysis, strategizing, and execution. Research builds the evidence base; analysis processes the information that you’ve managed to gather. Strategizing relies on using the analysis and the research to come up with a map from your preferences to your outcomes. And then finally, in the room, executing a negotiation, you’ve got to be able to dynamically adapt and adjust over time. And this isn’t a linear process, but a re-entrant, cyclical sense of you have all of these things as they change, feeding up through this knowledge base. And so you need this really strong computational infrastructure to be able to even begin to apply some of the really exciting and fascinating AI and ML methods we’re touching on.
So with this, we see a future where research can be done with autonomous research agents, with source validation and immediately generated counterpart biographies; analysis of gaps in preferences and evidence bases; and strategy sandboxes and red-team training that try to simulate how the different parties can interact with each other. And you need a really strong data set to be able to do that.
And then, in real time, having transcription and translation services, which AI and ML methods are doing a really phenomenal job at. All of these things we think will play a role in this multi-model, multi-method world of computational support for diplomacy and negotiations. And so this is just a sense of how we’ve tried to break down this problem and get a grasp on the existing and future technical developments. Finally, we want to end on three commitments that are central to a lot of the stuff that we’re talking about. One, human authority has to remain central. We can’t have any abdication of responsibility over decisions of war and peace. Two, we have to make sure that the tools themselves are modular and transparent, so that you can see what’s happening at each stage of the process and which parts of which computational systems are supporting analysis.
And finally, making sure that augmentation is appropriately scoped for the team, the institution, and the setting that it’s in. So with that, I’d like to hand over to Michael, our director of the program, and the panel, and what I hope will be a wonderful discussion.
Great. Thanks, Charlie. Thanks, Slavina. So just as everybody’s sort of getting settled in: we have a plan for a project. We have a vision, with a set of signposts and goalposts, of what is essentially the ability to augment human intelligence and participation. So there are lots of technical elements of that. We’ll be developing tools, we’re looking at evaluation methodologies, et cetera, et cetera; that’s the whole technical side. But one of the benefits of the approach we’re taking is that we have access to a large body of people for whom the day-to-day practice is negotiation and diplomacy, not necessarily constrained by the definition of diplomacy meaning state-to-state to get to an answer, but organization-to-organization, people-to-people, negotiation-to-negotiation.
And I am delighted then to have three people here who can talk a little bit about their views on how artificial intelligence will be used in the process of their work. That allows us to then learn from that experience and how we map that into the Move 37 project. So Gabriela, let me start with you. Welcome. Thank you for negotiating all the traffic to get here. So you’ve been at the center of international policy design and negotiations on issues such as climate change, international taxation, gender equality, artificial intelligence, a whole list of things in a brilliant career. You’ve done this through key roles at UNESCO, but also at the G20 and G7 and at the OECD.
We’re delighted to have you here. And let me ask you just to sort of start the discussion, if you would, to just talk a little bit about what it’s like to sit in the driver’s seat as a mediator trying to bridge sides, and how you would think about AI capability augmenting you in that process.
Well, thank you. Thank you so much for inviting me. Is it working? For inviting me to this early morning. And I find this topic fascinating, because when you are a diplomat, and when you have negotiated many standards or agreements, you don’t think about this taxonomy. You never think about the taxonomy. You just think that you need to get it done someday, and that you need to find consensus, and that you need to find where the problems will be. And therefore it’s very interesting that you asked me to structure better how we do things. I’m going to refer to the negotiation of the Recommendation on the Ethics of Artificial Intelligence, because that was a very difficult one: 193 countries negotiating during COVID. And actually it was very helpful to have a Zoom where I could see where all the countries were positioning themselves, which helped a lot. But the interesting thing is that it was about artificial intelligence we were negotiating, and we had to map out where countries were. And it was very interesting to see that some of the usual suspects that are always blocking the effectiveness of international instruments were aligning with countries that are very supportive of those instruments, but that didn’t want to see UNESCO playing a role in this field. So I had Russia and the UK in the same position. That helped me, because I called the UK and said, are you happy to be in the same position? And they said, just hold on one second. But the interesting thing is, it’s a very heavy document, very, very detailed, because there are so many cultures we had to almost define things step by step. And the interesting thing is that, when thinking how AI can help us organize better, at the moment it did not provide so many inputs; it was 2021. But UNESCO has this idea of being super inclusive. We developed the recommendation with all the regions in the world represented, and all the disciplines, but then we put it out to the world, and we received 55,000 comments. Therefore we used AI to integrate
them.
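UNESCO’s actual comment-processing tooling is not public, but the kind of triage Gabriela describes can be sketched: bucketing a flood of public comments by shared vocabulary so that humans review clusters of themes rather than 55,000 individual submissions. Everything below, including the comments and the keyword approach, is an invented illustration; a real system would more likely use embeddings or a language model:

```python
# Hypothetical sketch: greedy keyword clustering of public comments.
STOPWORDS = {"the", "a", "of", "is", "to", "and", "be", "should", "on"}

def keywords(comment):
    # Crude bag-of-words: lowercase tokens minus common stopwords.
    return {w for w in comment.lower().split() if w not in STOPWORDS}

def cluster_comments(comments, min_overlap=2):
    """A comment joins the first cluster sharing at least `min_overlap`
    keywords with it; otherwise it starts a new cluster."""
    clusters = []  # list of (keyword set, member list) pairs
    for c in comments:
        kw = keywords(c)
        for ckw, members in clusters:
            if len(kw & ckw) >= min_overlap:
                members.append(c)
                ckw |= kw  # grow the cluster's vocabulary in place
                break
        else:
            clusters.append((set(kw), [c]))
    return clusters
```

Even this toy version shows the payoff: reviewers get one pile per theme, with the long tail of one-off comments left visible as singleton clusters.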
That helped, no? But then, when you think about how you map the positioning of countries, I think it would have been super useful to have more AI. I used to have full teams providing me with briefings on the people we were going to talk to, because one thing is the negotiation in the room; another thing is all the legwork that you need to be doing, talking to the different actors, knowing where they stand. And it would have been amazing just to have a repository of what is the traditional position of certain countries in certain negotiations, which has to do with the substance, but probably has to do also with the positioning of that country in the international context, and how much they abide by the rules and how much they support these things.
And then, what I find fascinating, and this is always, as my colleagues here said, how you keep the woman in the loop. I love it. Yes, woman in the loop, not human in the loop. It was a lapsus; it’s not lost on me, my panelists here. It was a lapsus, but I like my lapsus. The whole point is, when you are in front of a person and you’re trying to convince that person that he’s alone, nobody’s supporting his position, and therefore he should not continue blocking the negotiation: how would it be if you could have more information about that person? What moves them? How can you offer something that will be important for them? Because this is the kind of thing that we do negotiating. What would you want to have out of it? I know you have your bosses on your shoulder and you need to bring them something to the table, but tell them you’re alone, that you’re blocking it. And imagine you can have the information about that person. But that’s also risky, because it deals with privacy and all of those things. But I feel it would be fantastic, because this is strategic thinking, and using the right words to get the countries to agree will bring you somewhere. And that, I think, is a very important thing; that’s a capacity that can be augmented by AI.
Thank you very much.
Yeah. Is it on? Yeah. Thank you very much. Actually, it’s a terrific transition, because Charlie was talking about the complexity of these negotiations, about how they’re not static. I can think of nothing more dynamic than a UNESCO negotiation. Just trying to understand where people’s positions are is by itself a complexity; then trying to integrate 20 or 30 of those positions, or 190 of those positions, and then trying to find what are the right levers that I might be able to pull. We do this now all the time with people, with you. And the question is how modern tools can help in that process without removing or absolving responsibility for people. So thank you. So Nandita, you have had an amazing career in academia, the public sector, the private sector.
You’ve been at Stanford. You’ve worked in intelligence and advisory. And you’ve seen all the different sides of these negotiations, from the government side and the private sector side, inside and outside. You are currently the Director of Intelligence at the Special Competitive Studies Project. For those of you who don’t know, SCSP is a major effort sponsored by Eric Schmidt after the conclusion of the National Security Commission on Artificial Intelligence in the U.S., looking at the way technology will be used in competition for economics, national security, et cetera. So it’s a big, broad role. Every day in your life you are negotiating. So can I just ask you to talk a little bit about your view, both from an SCSP point of view, but also from your career, about how you would see this evolving?
Absolutely. And thank you so much for having me. And good morning, everyone. So my career, as you mentioned, has sort of spanned three distinct sectors, and they came at different times: academia, public sector, and private sector. Now, there was a time, I would say, when the particular ways all three of these groups leveraged technology obviously had their variations, but their access to and adoption of it were much more similar. This is just fundamentally not true of AI. The public sector has been more in the passenger seat, if not the back seat, especially over the last decade. And so what was really interesting to me is that I started in academia, then went to the public sector, then came out to the private sector, and I saw that dip in my access to AI.
Now, I have been in intelligence, and one important thing about intelligence, and maybe a misconception of it, is that it is primarily used for military applications, feeding information to the military. It’s not true. We are just as vital to diplomatic efforts, because for every opportunity you’re looking for that something bad could happen, you’re just as much looking at what are the opportunities for something positive to happen. How can you open the negotiating space? So we’re looking at everything from both sides. So I wanted to give that perspective. As an analyst, I can say personally it was very valuable to have the rigorous training I had, to do things very, very manually: learning how to write an assessment without access to AI.
But now that I’m on the back end of it, I can tell you every day I ask myself, like, if I had access to these tools as an analyst, how could I have worked much faster and much smarter? Because at the end of the day, and something that Gabriella was mentioning, there’s a lot of data out there, but a human analyst is never going to be able to manually process most of that by themselves. The story I always like to tell is the very first time I wrote this intelligence piece, I was so proud of it. I thought the argumentation was great. The data I had used was great. I showed it to a mentor, and they said, this is awesome, but you didn’t consider this one piece of data from 10 years ago that completely negates your argument.
And here’s the thing. It’s not that I didn’t do good work. It’s just that there was no way I was going to know that that piece of information existed. Now imagine a tool that can help you not only identify that that data exists, but learn how to synthesize it. Obviously, as Charlie and Slavina mentioned, human in the loop is always going to be important, because you want these assessments to have a human element at the end. But there is a way to move better and smarter. And this is something that SCSP is really advocating for, in sort of three distinct ways. So first, at a very meta level, we make the argument that AI has fundamentally changed the threat landscape and the scope for global competition.
It is now kind of the foundational way we need to think about geopolitics, especially as this technology is rapidly evolving. So you really cannot divorce AI and AI adoption from trying to understand geopolitics and foreign policy. Number two, in order to have an ability to make assessments about this emerging technology, to understand geopolitics, you have to have a public sector that is actually leveraging these tools to the best of their ability. Now, there are a lot of ways that AI is being adopted in the public sector. You’re obviously thinking about, again, the military application of drones, but you need to have your day-to-day workflows integrating this technology.
And this is something that we are really focusing on, especially how to build up AI literacy within the public sector, not just at the military level, but within the intelligence community, within the State Department, and even within Commerce, OPM, all of it; any federal-sector employee at some point needs to be moving smarter and faster with AI. And third, we’re looking at specific use cases. So one of the projects we were working on last year is looking at how AI can be used for predicting geopolitical events, both for military applications but also for State Department applications. And the reason we do that is because, in order to convince the public sector that they should be using AI, you almost need to show them how it could look 10 years from now as we’re moving to that future.
So by kind of demystifying its use and showing them targeted ways that you can use it, it actually solves your meta-problem of understanding why AI is so important to geopolitics.
So I’m hearing a couple of things. I’m hearing this general statement that much of the world is going to be about AI and much of the world is going to be AI creating that world. And that’s the metaphor that comes directly to the project we’re looking at, which is we’re negotiating with AI tools. We have to have a baseline of capability. And yet the landscape in which negotiation and diplomacy are happening is being fundamentally changed by AI itself. And so that whole issue around preparedness and around setting the ground rules. I also heard not just the thought process around artificial intelligence as a trusted agent to accumulate information. Both of you mentioned that. But also as an agent to help understand new pathways for success, new pathways for leverage, whether those are national security or whether those are economic vitality.
The scope of the negotiations doesn’t really change that. So in the area of, sort of, preparation, let me come to you, Robyn. Robyn is the co-founder and CEO of Apolitical. Apolitical is a global platform for policymakers that specializes in government innovation; she’ll talk about that in just a second. In particular, you have courses helping governments prepare their workforces for the modern world in which they live. Your AI courses have reached hundreds of thousands of people around the world. And much of what you are trying to do is to prepare the world for the kind of things that we are talking about in Move 37, obviously much broader than just that topic. So let me ask you to talk a little bit about that, if you wouldn’t mind.
What sort of lessons have you learned from the field, and how do you think about policymakers’ willingness, or how you change policymakers’ willingness, to embark on journeys with new tools and new capabilities?
Stanford HAI is one of our collaborators. So we are more context experts than content experts, and we bring the content experts into the middle. So where are we at? Let me give you some data, and this is from a 5,000-person survey that we’ve recently run. Overall, public servants are incredibly optimistic about AI: north of 90% think there is huge possibility in the public sector. And there are lots of paradoxes here; they’re also wary of it, right? There is a huge value-creation opportunity. One figure from BCG estimates that there’s 1.75 trillion of public-sector value to be unlocked if we harness AI in the right way, because AI loves bureaucracy, all these repeatable processes. And about a third of most public officials’ daily work is research- and writing-related. AI is great at that. So the prize is very, very big, and that’s just the painkiller prize. When you get to the vitamin prize, to what AI could do in terms of predictive policymaking and responsive policy and adaptive policy, et cetera, then you get into a space that’s only really bounded by the imagination. So there’s lots of AI talk; there’s less AI action. Increasingly, we’re in a pilotitis zone where almost everyone’s got pilots. 70% of leaders say they’ve either got AI pilots or plan to launch them this year, but only 45% of them say they have any plan to evaluate their pilots.
So that’s a pretty big gap to close, and we see gaps like this all the time. One of the biggest gaps is leaders not using the technology themselves, which is a real problem, because you can’t understand this technology in the abstract. You cannot look over your grandson’s shoulder and see them using it. You’ve got to use it. You’ve got to feel the speed of change. Of the public servants who are implementing AI in the public sector globally, and these are people who self-identify as using AI in their jobs, only 26% say they understand their own country’s ethical frameworks. So approximately three-quarters of all the people rolling out this technology are freestyling. That’s terrifying.
So that’s a skills and knowledge gap not even closed within an institution; it’s not even getting to how we actually understand the basics of this technology. Just to close, and there’s a whole lot more fascinating data, but one of the things that is increasingly worrying me, talking to leaders around the world working on this, is that we are now getting quite drunk on the idea of AI agency, but we’re not talking about human agency in the process and maintaining it. So I think we risk getting into a zero-sum dynamic, and I think this is relevant to diplomacy, where the agency drains away to AI, and that all comes at a cost to humans.
So we need to be building up humans at the same time. And the framing and heuristic I’ve found most helpful for this overall is the recently emerged idea of being below or above the algorithm. If you’re below the algorithm, you might be an Uber driver being dispatched, or an Amazon packing worker being allocated to put stuff into boxes. If you’re above the algorithm, you are using tools to further your goals. When we think about closing that capability gap, and I think in diplomacy too, we need to keep moving people up above the algorithm.
Great, fantastic comments from all of you, thank you. I’m going to ask Charlie and Slavina for just a couple of quick comments on the project, because then I’m going to come back to the three of you, and I’ll prompt you with the question now, which is: what would you want to know to be comfortable with the tools and the capabilities that you will be asked to use, or be offered to use? So, Slavina, can you just follow on Robyn’s comments? One of the benefits we have of doing this program at the Belfer Center and the Kennedy School is a large set of people who have done this for a living. This is what diplomats and negotiators have done.
Can you talk a little bit about how we engage that group and what we’re trying to get from that?
Definitely. So I think, as Michael put it, obviously at the Belfer Center we have quite a variety of current and former diplomats and practitioners, not just from the U.S. but from all over the world. And a large part of the work that we’ve been doing is sitting down for one-on-one interviews with all of them and really getting a sense of how they think, not just about the content of the major negotiations that they’ve been leading, but more about the process. So, very similar to the panel discussion we’re having here today: what are some of the uses you see, that one day you could be using AI for? So a lot of what we’ve heard so far, I mean, the position tracking that I think Gabriela referenced.
We’ve heard a lot about historical precedent, the generating of options and strategic options, and really uncovering the deepest interests. And I think where this ties really well to what Robyn was saying is that a lot of them are also expressing their hesitancy. They’re being very forthcoming in that, and I think that allows us to take a really sober look at what the risks of integrating these tools are. One of the main questions we ask them is: if you were using these tools, what would you like to know? So exactly what Michael’s saying now. A lot of these interviews have really been integrated across the different work streams of our project, and we really put diplomats and practitioners at the heart of the rest of the work that we’re doing.
All right. Thank you.
And Charlie, you talked a little bit before about, you know, the obvious role that LLMs have in helping people accumulate and synthesize a very large amount of information. But there are many more aspects of a negotiation. Can you talk just a little bit about some of the other ways the tools are going to be used in the project that we’re doing?
Yeah, absolutely. Thanks so much, and thanks so much for the comments so far. The panelists have touched on a couple of really interesting applications, especially, as Robyn was talking about, with predictive or adaptive policies, and Nandita as well, with predicting geopolitical events. We have a really fascinating array of algorithms that are incredibly competent at these sorts of predictive tasks. And what we now have, with current computational ability and also the ability of language models, is that we can process vast amounts of unstructured data in ways that make these algorithms even more accessible to a wider range of people. So I think that’s one area that I’m particularly excited about.
There’s also a bunch of work, as Gabriela was touching on, in how you take these vast unstructured transcripts and put natural language processing additions on top: can we represent positions as they track and change over time? I think that is one of the big parts of the cognitive load that diplomats have spoken about a lot.
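The position-tracking idea can be sketched as a toy: score each delegation’s stance per session from cue words in transcript lines, then report the series so drift becomes visible. The cue lists, delegation names, and scoring scheme below are all invented placeholders; a real system would lean on language models rather than keyword counts:

```python
# Hypothetical sketch: stance scoring per session from cue words.
SUPPORT = {"support", "welcome", "agree", "endorse"}
OPPOSE = {"oppose", "reject", "concern", "object"}

def stance_score(utterance):
    # Positive for supportive cue words, negative for opposing ones.
    words = utterance.lower().split()
    return sum(w in SUPPORT for w in words) - sum(w in OPPOSE for w in words)

def track_positions(sessions):
    """sessions: list of {delegation: utterance} dicts, in chronological
    order. Returns {delegation: [score per session]}, so a delegation that
    softens or hardens its position shows up as a trend in its series."""
    history = {}
    for session in sessions:
        for delegation, utterance in session.items():
            history.setdefault(delegation, []).append(stance_score(utterance))
    return history
```

Even this toy surfaces the thing negotiators say they want: not a snapshot of positions, but their trajectory across sessions.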
Great, thank you. We’re going to take questions from the audience in just a moment, but Nandita, I’m going to start with you. Forgive me for doing this, but I’m going to characterize your career as an analyst’s; in every job that I read and see about, you’ve been an analyst of some kind. You’re constantly in a place where people are suggesting new tools and new capabilities. So think about AI in the world you’ve inhabited. What’s going to make you most comfortable when somebody shows up and says, here’s the thing that’s going to make your life better?
That they can explain what the outputs are. So one of the things, when we were looking at, for example, whether we could be using AI for predicting geopolitical events: a lot of people in industry, in academia, and in the public sector who are working on these types of projects all say the same thing, which is that this should be seen as a data point, or a shaper of the way you should view the world, not as finished intelligence. Finished intelligence should always, ultimately, be done by a human who is accountable to their policymakers. Now, if you are working in policy, you understand how this actually works. You’ll have your head of state or your head of government come to you and ask you to explain exactly how you got to your assessment.
Right now, we have a human who ultimately has to do that. But what happens as we start to rely more and more on AI tools is that you never want to lose that ability to explain the outputs, and particularly to demonstrate that you’ve looked at all the counterpoints you possibly could. Now, this is where I think AI is super helpful, because oftentimes, especially in academia, when I was trying to figure out what are all the things I could have done wrong, how could I have measured this differently: you’re always prepared to think about all the decisions that you made and how to justify and validate them. But as the scenarios get broader, as they get more complicated, your ability to figure out what the counterarguments are is going to just dwindle over time.
Oftentimes, the argument we make is that humans are biased at the end of the day. We make our decisions based on how we got to where we are today, the experience that we have. So AI can be really, really helpful in helping you sort out the counterarguments, but you still need to understand how those counterarguments work, and why, ultimately, you’ve come to the assessment that you have. So where I would feel super comfortable is: this is how I relied on AI, this is how it came to the output that it did.
This is fundamentally why I made the assessment that I did.
Great. Thank you. And Robyn, I think this is the world you live in every day, helping governments and government officials and civil service workers. So project that onto an AI-for-diplomacy landscape. What do you think is going to be important to get people to say: I’m going to trust this, I’m going to work with it, or at least I’m going to try?
Well, at the risk of stating the obvious, I think we should just acknowledge that the people developing these models don’t even have full legibility over how they’re working. So that’s where we’re starting from; that’s the kind of ceiling on where we can get to. You can break down the thinking process, as it were, but you still have that black box. I don’t think it’s insurmountable. I think some of the things I’m worried about relate to the more psychological aspects of this, and in particular, sleeping at the wheel. There’s this phenomenon where we have this strange relationship with AI, where we get false negatives too quickly: it does a bunch of clever things, except it didn’t do this one thing, and therefore we can’t use it for anything.
And if you check back in in, like, a month’s time, often it can do the thing. So you have that, the false negative, and then the phenomenon of sleeping at the wheel, which is where it starts to get very, very good, creeping upwards of, like, 85%, 90% accuracy, and then you assume it’s 100% accurate. And it’s really quite hard to edit your assumptions and say, no, it’s not. And you’ve probably all found this, if you’re power users of AI, and I’m one of them; some of you may not be. Sometimes it comes across as so smart and so brilliant, and comes up with a whole lot of counter-arguments. I use it for sort of kicking the tires from different perspectives all the time on stuff I’m doing.
It’s almost overwhelmingly smart, and you’re like, it must have covered everything. That’s the default. So I think giving us the human tools and the psychological counter-arguments and weaponry to deal with this is really, really important. I already have a heuristic that whenever I open my phone and I’m dealing with anything with an algorithm, I am in opposition to that algorithm, because its interests don’t generally coincide with mine. So I try to get all algorithmic stuff off my phone as a starting point. The dynamic with AI is a bit different, but I still think you have to have that sort of battle mentality with the technology. So that would be my…
There are many other things to consider, but that’s top of mind.
I think that’s terrific. And I think this idea of calibrating matters: what I want in terms of completeness when I ask for analysis may be very different from what I want when I ask, can you just give me some different ideas that I haven’t thought about before? And different stages in a negotiation are going to require different levels of calibration. So, Gabriela, what do you think?
Well, it’s very difficult to follow these two girls. But the fact is that, when you know a little bit more about how these things work, and I’m not a technologist, but I have been looking at all of what can go wrong: misrepresentation, over-representation of certain cultures, certain languages, assumptions. Therefore, if I am negotiating and you’re going to offer me a tool to improve my negotiating skills, I need to be sure that the assumptions that you used to build that tool are not just to beat the person in front of me, or not just to maximize efficiency, or not just to do the kind of things that we are teaching the AI to do. And therefore, it’s much more complex.
Because what you want to do is to open a space of human understanding. How do you do that? And therefore, I will be questioning, as Robyn said, always questioning, but it’s not what we do. And the other point is that what is amazing about AI is that it’s just reproducing cognitive abilities that humans have. So when you go into whatever chatbot you use to get information, you take for granted what comes out, which you would never do with somebody you hired in their first week, even if you have done all the checks for that person to have the capacities that you are looking for in the market. So I feel that there is this question of, first, really bringing to the table AI tools that are going to be reliable and trustworthy, and I know that these words are almost a cliché, but the reality is that sometimes they’re not. And the other point is that you can become very lazy. How do you avoid just grabbing the thing and saying, that’s perfect? How do you keep that space for ourselves to take the decisions, and be not only in the driver’s seat, but actually think of AI as a supporting cast? And if we get the Oscar, it’s us and not the AI.
That’s fantastic. I have this mental picture in my mind, for those of you who’ve done negotiations of any kind. The first thing you do is you grab a bunch of your team in a room and you say, let’s talk about strategy: what are we going to do? And Bob in the corner says, here’s an option, and you realize Bob had a bad night last night, so maybe you discount what Bob says. So what happens when the AI says, here’s a thing? I don’t just trust it a priori; I have to apply a human judgment to what I’m hearing. So, terrific points. Okay, we have time for a question or two, and I see one.
Just say who you are, where you’re from, and a quick question.
Thanks so much, Michael.
We have a microphone. Thank you.
Thanks so much. I’m Sam Dawes. I’m a senior advisor to the Oxford University AI Governance Initiative and director of Multilateral AI. But my background is in diplomacy, working for Kofi Annan when he was Secretary-General, and then for the Foreign Office and Cabinet Office. I wish we had had AI tools back then. So I was really inspired; it’s such a timely, rich panel, so thank you all for that. Something that Gabriela said around culture I think is so important, and I’m thinking about the positives and the risks of applying AI in this space. How can we ensure that the diverse cultural inputs of the world’s most diverse countries, of different societies, are embedded in the data sets and the models which inform negotiations?
So is that something that UNESCO is working on in the long term and connects to the tools we use? And the second question is around the flip side. If AI is to be a useful neutral mediator in disputes or an assistant to a mediator, a human mediator, then what do we do about data poisoning and prompt injection and those kinds of risks? Thank you.
Very fast, on the question of culture. Culture is expressed by language, and therefore the more we can represent those languages in the models we use, the better prepared we will be to understand it. And I'm fascinated by that. I'm not a linguist, but if I could choose another life, I would do that. Because when you hear, for example, there was this Namibian representative during the negotiations of the ethics of AI, and she was saying, I find your draft very individualistic. It's always about the human. It's always about the outcomes for people, improving their welfare. And in the end, what I'm thinking about is the Ubuntu philosophy, which is: I am because you are, and we are because of nature, and we are interlinked.
And therefore, how do you capture this when the models that we are developing are maximizing individual welfare? The only answer I have is: try to be representative, and this is nothing new. We have seen how much these tools can discriminate if they are built in just one language or with the representation of only certain characteristics of people or countries. So really be sure that you are capturing the richness that comes through language, and open up the sources. That's the other point: the sources. This is one thing that I would always ask: the answer you're giving me is based on what sources? That might help, but these are checkpoints that we always need to be testing on the ground.
If I may just raise one other thing: I think you raise one other really important point, which is that there's a whole spectrum of things here. There is negotiation where we have a set of interested parties trying to get to a common good understanding, and we also have very adversarial negotiations. Adversarial negotiations open up this whole possibility of data poisoning, of training set differentiation, et cetera. So it's a very complex world; I really appreciate you bringing it up. We have time for one more question, I think. Let's go right here. Can somebody tell me, are we counting down to zero or to five? Are we okay to keep going to zero here?
Okay, good. We’re going to go to zero no matter what.
Good morning. Namaste. My name is Devika Rao. I meet 300 to 600 people per day, and I work across different languages. Basically, I'm an Indian classical dance teacher. So I have data; I have a human connection. And what we want to know is how this cultural education can be supported by AI. What is the next step I can take? Presently I'm working on a cultural framework, which is India and UK POCC 2025, 2030. I'm also interested in NEP and national health policy, because people are connected to their health and education, and education is the center point. So where can I go, and what kind of co-creation and co-collaboration can happen in this?
Robyn, is this something you want to jump in on? Maybe give her your email address.
I wish I had an immediate response to that. I don't think there is any default place to go, but I do think this is where the conversation is evolving, and there's more and more recognition of the cultural oversight and its importance. So I would just encourage you to please keep making those points. And I will just make one comment on the first question. The Swiss have built a quasi-Swiss-government, quasi-multilateral initiative to build an LLM that is trained from the outset on more than 100 languages, and it is actually run by a friend of mine who's a former Swiss diplomat, so she's coming at it very much with a diplomatic context. I'm very happy to make that connection.
Education. Super complex. Don't look at the technology, because we always focus on the technology. The countries that have introduced so much technology into their educational systems didn't get better student outcomes, because of content. We go to the internet and we go to the systems and we try to bring tools to help kids, and we never check whether they are contextually relevant and culturally linked. Therefore, if you don't produce the content, the tools will not make it.
I'll just add one last thing. I think the way to think about AI is also: is it actually solving a problem, or are you just introducing it and creating a new problem? This is where you have to think about the point of AI augmentation. There are a lot of ways we can think about how AI can augment the problem sets that we have, but sometimes you don't actually have the problem that AI is going to solve, and you don't need to force AI to fix it.
Thank you very much. Okay, we're going to negotiate. If you have a really quick question, you can ask it. No, behind you. Thank you. It's got to be quick, though.
My name is Arman. I'm working for JPL South Asia. Just a quick question on how you think this would impact the balance of power, given that every country has different access to data sets, and, as we saw, there can also be three states in play. What would it look like if state A knows everything about the rest of the players and the others don't?
So we think a lot about this in the project, I'll answer this if you guys are okay, which is: what is the evolution of a set of AI tools? It's like everything else here at this conference: where will tools provide competitive leverage, and which kinds of tools, in the world we live in, should be dispersed actively and offensively rather than defensively, in a world where some negotiation is about getting everybody to a positive outcome and some negotiation is adversarial? So I think it is a huge element of how AI will change power structures, not just because it's a thing we think about from a negotiation and diplomacy standpoint, but because of AI tools generally.
Okay, with that we are just about out of time. I want to thank this amazing panel: Gabriela, Nandita and Robyn. I want to thank my colleagues Slavina and Charlie, and I want to thank all of you. We are at the beginning of a long process. When you work at a place like I do, at the Belfer Center, you think about projects that have beginnings, middles and ends, and you think about projects that can grow into something really, really important. So any of you who have an interest in what we're doing, please let us know, whether you have questions that we ought to be asking, or you have answers to questions we have asked.
We would love to hear from you as we begin to build what we think is a really important discipline. So thank you and thank you to the sponsors and hosts. I appreciate everybody joining us. Thank you.
“J. Michael McQuade introduced the MOVE 37 initiative, a new project by Harvard’s Belfer Centre exploring AI’s role in diplomatic negotiations.”
The knowledge base explicitly describes MOVE 37 as a new Belfer Center project introduced by J. Michael McQuade to explore AI augmentation of diplomatic negotiations [S2].
“Artificial intelligence is a major aspect of the Emerging Tech Programme’s work and can augment human capabilities in diplomacy and negotiation.”
S2 notes that AI is central to the programme’s aim to augment human capabilities in diplomatic negotiations, and S16 lists specific AI applications for negotiation analysis, confirming the claim [S2] and [S16].
“AI tools can help participants accumulate and synthesize large volumes of information during negotiations.”
S1 highlights the role of large language models in helping people accumulate and synthesize very large amounts of information, providing additional nuance to the claim [S1]; S16 further describes AI-driven data analysis for negotiation scenarios [S16].
“Diplomacy remains a fundamentally human activity; AI should augment but not replace human agency in high‑stakes negotiations.”
Both S17 and S24 stress that AI is a tool that must remain under human control and that the art of negotiation and trust-building are profoundly human, supporting the report’s framing [S17] and [S24].
“Negotiations often involve many counterparts, multiple countries, and generate massive amounts of documents, creating information overload and strategic group‑think pressures.”
S106 discusses the complexity of international negotiations, including the struggle over process, multiple participants, and extensive documentation, adding context to the described information overload [S106].
The panel displayed strong consensus on keeping humans central to AI‑augmented diplomacy, the necessity of building AI capacity and literacy, the importance of multilingual and culturally diverse data, and the need for transparent, evaluated, and modular AI tools. There was also agreement that AI will become a strategic asset influencing power dynamics.
High consensus across most themes, indicating broad alignment among scholars, practitioners, and policymakers on the principles governing AI use in diplomatic negotiations. This convergence suggests that future initiatives like MOVE 37 are likely to adopt human‑in‑the‑loop designs, prioritize capacity building, and address cultural inclusivity, while also preparing for the strategic implications of AI on international power structures.
The panel largely concurs that AI can augment diplomatic work, but key tensions arise around model transparency, the level of trust to place in AI outputs, and whether AI should be framed as a collaborative facilitator or a competitive tool. These disagreements reflect differing risk tolerances and disciplinary backgrounds (technical vs policy).
Moderate – while there is broad consensus on the need for human‑in‑the‑loop and responsible deployment, the speakers diverge on practical governance (opacity, trust, framing). The disagreements suggest that any MOVE 37 implementation will need flexible guidelines that accommodate both high‑accountability requirements and the pragmatic constraints of existing AI technology.
The discussion was steered by a series of pivotal remarks that moved it from a high‑level enthusiasm about AI to a layered, critical examination of its role in diplomacy. Charlie’s warning against over‑reliance on LLMs introduced the need for verification and accountability, which set the stage for Robyn’s agency heuristic and Michael’s geopolitical framing. Personal anecdotes from Gabriela and Nandita grounded the debate in real‑world practice, highlighting both the promise (handling massive comment volumes, surfacing hidden data) and the perils (privacy, cultural bias, power imbalances). Audience questions about cultural representation and data poisoning, answered by Gabriela, further deepened the conversation around fairness and security. Collectively, these comments redirected the panel toward a balanced view that AI should augment, not replace, human negotiators, that transparency and multilingual inclusivity are essential, and that the strategic distribution of AI capabilities will shape future power dynamics.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.