Demystifying AI: How can international organisations prepare for AI transformation?

AI is reshaping how international organisations operate. It is no longer just a tech upgrade: it demands a fundamental rethink of workflows, culture, and knowledge management. By treating AI as both a public good and a strategic asset, and by empowering ‘boundary spanners’ to bridge institutional silos, Geneva’s multilateral community can lead this transformation, a recent Diplo event stressed.


AI as a turning point, not a trend

Jovan Kurbalija, Director of Diplo, opened the conversation by framing AI as both a challenge and an opportunity. It’s not just about adopting a new tool but fundamentally rethinking the structures, workflows, and values underpinning international organisations. This transformation is particularly urgent for Geneva, home to a dense web of multilateral institutions. AI, he argued, needs to be shaped by the values of human rights, public service, and multilateral cooperation. It shouldn’t just be plugged in like a new software package; it has to reflect the ethical and institutional DNA of Geneva and the wider UN system.

Kurbalija emphasised that organisations must stop waiting for mandates or budget allocations to experiment with AI. Change is happening quickly, and the longer institutions wait, the more reactive—and less prepared—they become. It’s not a question of whether AI will become part of international work, but how and on whose terms.

He broke down the process of AI adoption into stages. Setting up a basic AI tool like ChatGPT takes minutes, but truly integrating AI into an institution, so that it works in harmony with daily operations, internal processes, and organisational culture, can take a year or more. That transformation isn’t about code but about people, mindsets, and habits.

To help demystify AI, Kurbalija walked through a simple explanation of how large language models work. These systems operate on pattern recognition and probability—they look for recurring structures in massive datasets to predict what comes next. Using the example of national flags, he showed how AI might group them by common features like colours or symbols. But while AI is good at spotting patterns, it’s not always great at understanding exceptions. Human judgement, nuance, and even rebellion against the expected still matter. The example of Greenland rejecting a typical Nordic cross flag in favour of a unique design served as a reminder: humans don’t always follow the algorithm.
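
To make the pattern-and-probability idea more concrete, here is a minimal sketch in the spirit of the flag example. It was not shown at the event, the feature data is a deliberately simplified toy, and frequency counting is only a loose analogue of what large language models do with probabilities; still, it shows how a pattern-based system latches onto the most frequent design and how Greenland stands out as the exception:

```python
# Toy illustration of pattern recognition and exceptions, using simplified
# flag features. The data and thresholds are illustrative only.
from collections import Counter

flags = {
    "Denmark":   {"design": "nordic_cross", "colours": {"red", "white"}},
    "Norway":    {"design": "nordic_cross", "colours": {"red", "white", "blue"}},
    "Sweden":    {"design": "nordic_cross", "colours": {"blue", "yellow"}},
    "Iceland":   {"design": "nordic_cross", "colours": {"blue", "white", "red"}},
    "Finland":   {"design": "nordic_cross", "colours": {"blue", "white"}},
    "Greenland": {"design": "circle",       "colours": {"red", "white"}},
}

# A pattern-based system "predicts" the most frequent design in the group.
design_counts = Counter(f["design"] for f in flags.values())
expected_design, frequency = design_counts.most_common(1)[0]
print(f"Expected design: {expected_design} ({frequency}/{len(flags)} flags)")

# The exceptions are exactly what pure pattern-matching tends to gloss over.
exceptions = [name for name, f in flags.items() if f["design"] != expected_design]
print("Exceptions to the pattern:", exceptions)  # ['Greenland']
```

The point of the sketch is the same as Kurbalija’s: frequency finds the dominant pattern quickly, while the exception, often the most interesting case, only surfaces when a human asks about it.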

Rethinking knowledge and data

This led to a deeper point about how we think about knowledge. In many digital policy conversations, the term ‘data’ has taken over, while older concepts like ‘knowledge’ or ‘wisdom’ have faded into the background. But AI isn’t just about data—it’s about how we know and interpret the world. When we use tools like ChatGPT, we’re not just feeding in facts but engaging with systems that model human thought, reasoning, and understanding. That’s a big leap from traditional tech tools and requires a different mindset.

One of the most important messages was a caution against ‘plug-and-play’ illusions. Some consultancy firms market AI as a magic solution—something you can install quickly to appear innovative. But that misses the point. Real AI adoption is slow, strategic, and deeply tied to how an organisation functions. The goal isn’t just to install AI—it’s to rethink how decisions are made, how institutional knowledge is captured, and how work gets done.

Diplo’s journey served as an example. With limited funding and a small team, Diplo couldn’t compete with tech giants in terms of scale. However, it focused on enriching its own data, for example, by annotating half a million UN documents to create a highly structured knowledge base. This allowed it to build AI tools that are far more useful and context-aware than generic models. Kurbalija pointed out that while large models keep growing, they hit diminishing returns. The real value now lies in the quality and structure of the underlying data, not just the quantity.
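
As an illustration of what ‘enriched, structured data’ can mean in practice, here is a hypothetical sketch of annotated document records and a simple filter over them. The field names, records, and values are invented for illustration and are not Diplo’s actual schema; a real system would add far richer metadata and semantic search on top of such a structure:

```python
# Hypothetical sketch of structured document annotations. Field names,
# records, and values are illustrative, not Diplo's actual schema.
from dataclasses import dataclass, field

@dataclass
class AnnotatedDocument:
    doc_id: str
    title: str
    topics: list[str] = field(default_factory=list)   # e.g. 'cybersecurity'
    actors: list[str] = field(default_factory=list)   # e.g. committees, agencies
    year: int = 0

corpus = [
    AnnotatedDocument("doc-001", "Resolution on digital cooperation",
                      topics=["internet governance"], actors=["General Assembly"], year=2015),
    AnnotatedDocument("doc-002", "Report on ICT security",
                      topics=["cybersecurity"], actors=["Open-ended working group"], year=2021),
]

def find(corpus, topic=None, actor=None, since=None):
    """Filter annotated documents by topic, actor, and publication year."""
    return [d for d in corpus
            if (topic is None or topic in d.topics)
            and (actor is None or actor in d.actors)
            and (since is None or d.year >= since)]

print([d.title for d in find(corpus, topic="cybersecurity", since=2018)])
```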

Making it work: From tools to transformation

The second part of the session highlighted how AI is reshaping three core work areas for international organisations: reporting, translation, and training.

In terms of reporting, diplomats spend vast amounts of time summarising meetings, drafting briefs, and crafting position papers. AI can help—tools like ChatGPT can generate drafts, but they need to be trained to reflect specific organisational or national perspectives. A generic summary isn’t enough when it comes to nuanced diplomatic language. The technology can be a time-saver, but only if adapted to context.
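
As a sketch of what ‘adapted to context’ can look like, the example below steers a general-purpose model with an organisation-specific system prompt, with the output treated as a first draft for human review. It assumes the OpenAI Python SDK and an API key in the environment; the prompt wording and model name are illustrative, not a recommendation from the event:

```python
# Sketch: steering a general-purpose model towards an organisation's
# reporting style with a system prompt. Assumes the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY set in the environment.
# The prompt text and model name are illustrative.
from openai import OpenAI

client = OpenAI()

HOUSE_STYLE = (
    "You draft meeting summaries for a Geneva-based international organisation. "
    "Use neutral, precise diplomatic language, attribute positions to delegations "
    "rather than individuals, and flag open questions in a separate final section."
)

def draft_summary(meeting_notes: str) -> str:
    """Produce a first-draft summary that a human officer then reviews and edits."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": HOUSE_STYLE},
            {"role": "user", "content": f"Summarise these notes:\n\n{meeting_notes}"},
        ],
    )
    return response.choices[0].message.content

# draft = draft_summary("Delegation A proposed ...; Delegation B objected ...")
```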

Translation and interpretation came next. Geneva depends heavily on these services, and AI tools like DeepL are already widely used. But the challenge goes beyond just language. AI tools struggle with accents, institutional jargon, and acronyms. To be truly effective in Geneva, translation tools must be trained on international diplomacy’s unique linguistic landscape.
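
One low-tech way to soften the jargon problem, sketched below, is to expand institutional acronyms from a house glossary before text is sent to a translation engine. This is an illustrative workaround, not a feature of DeepL or any specific tool, and the glossary entries are examples:

```python
# Sketch: expanding house acronyms before machine translation so generic
# engines see full terms instead of opaque abbreviations. Glossary is illustrative.
import re

GLOSSARY = {
    "OEWG": "Open-ended Working Group",
    "HRC": "Human Rights Council",
    "WSIS": "World Summit on the Information Society",
}

def expand_acronyms(text: str, glossary: dict[str, str]) -> str:
    """Replace known acronyms with their full forms before translation."""
    for acronym, full_form in glossary.items():
        # \b keeps short acronyms from matching inside longer words.
        text = re.sub(rf"\b{re.escape(acronym)}\b", full_form, text)
    return text

print(expand_acronyms("The OEWG report was forwarded to the HRC.", GLOSSARY))
# -> The Open-ended Working Group report was forwarded to the Human Rights Council.
```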

Training staff for the AI era was the final major theme. It’s not enough to hold theoretical sessions on AI ethics—what’s needed is hands-on experience. That’s where Diplo’s AI Apprenticeship online course comes in.

AI apprenticeship

Introduced by Anita Lamprecht, the online course helps participants build their own AI agents tailored to their organisation’s needs. The process is surprisingly simple: participants interact with the bot, give it instructions, define tone and values, and teach it to behave like a knowledgeable assistant.
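
The instructions participants write are plain language, but they map onto a simple structure: who the assistant is, how it should sound, and which values and sources it must respect. Below is a hypothetical sketch of such an agent definition, not the course’s actual format, with every field name and value invented for illustration:

```python
# Hypothetical agent definition of the kind a participant might draft:
# persona, tone, values, and knowledge scope, later handed to whichever
# AI engine the organisation chooses. All names and values are illustrative.
AGENT_DEFINITION = {
    "name": "Briefing assistant",
    "persona": "A knowledgeable assistant for a Geneva-based humanitarian organisation.",
    "tone": "Concise, neutral, and respectful of all member states.",
    "values": [
        "Ground answers in the organisation's own documents.",
        "Say 'I don't know' rather than guess.",
        "Never reveal internal deliberations or personal data.",
    ],
    "knowledge_scope": ["annual reports", "board decisions", "public statements"],
}

def to_system_prompt(agent: dict) -> str:
    """Turn the structured definition into a system prompt for an AI engine."""
    rules = "\n".join(f"- {v}" for v in agent["values"])
    return (
        f"{agent['persona']}\nTone: {agent['tone']}\n"
        f"Rules:\n{rules}\n"
        f"Only answer from: {', '.join(agent['knowledge_scope'])}."
    )

print(to_system_prompt(AGENT_DEFINITION))
```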

But the training goes deeper than just prompt engineering. The programme is designed around systems thinking: it encourages participants to see AI not as a standalone tool, but as part of an interconnected institutional ecosystem. Over several weeks, participants explore everything from risk and data labelling to cybersecurity and knowledge mapping. They test different AI engines, assess their outputs, and finish with a project tailored to their own institution. Future editions of the programme are already in the works.

Boundary spanners: The people who connect the dots

The idea of the ‘boundary spanner’ ran throughout the session. These are the people who connect communities, whether techies, diplomats, or policy folks, and help ideas move across domains. Geneva, for all its density of institutions, still operates in silos. A data-driven analysis found that only 3% of hyperlinks on Geneva-based websites connect to other Geneva-based organisations. That’s a stark indicator of how disconnected even closely situated institutions can be.
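
To illustrate how a figure like that 3% could be produced, here is a sketch of the general method only, not the actual study: crawl organisations’ homepages, extract outgoing links, and measure what share points to other organisations on a known list. The domain list is illustrative, and the real analysis would cover far more pages and domains:

```python
# Sketch of a hyperlink analysis: what share of outgoing links from a set
# of homepages points to other organisations on the same list? Assumes
# 'requests' and 'beautifulsoup4' are installed; the domain list is
# illustrative and the real study's methodology may differ.
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup

GENEVA_DOMAINS = {"www.itu.int", "www.wto.org", "www.unhcr.org"}  # illustrative list

def outgoing_links(url: str) -> list[str]:
    """Return the absolute hyperlinks found on a page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].startswith("http")]

def share_of_community_links(domains: set[str]) -> float:
    """Fraction of external links that point to another domain on the list."""
    total, within = 0, 0
    for domain in domains:
        for link in outgoing_links(f"https://{domain}"):
            target = urlparse(link).netloc
            if target and target != domain:   # ignore links back to the same site
                total += 1
                within += target in domains
    return within / total if total else 0.0

# print(f"{share_of_community_links(GENEVA_DOMAINS):.1%}")
```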

The solution isn’t to eliminate silos—they’re human and inevitable—but to build more bridges. Whether it’s casual AI meetups or formal partnerships, organisations need more people who can connect the dots. This is where innovation happens—not in isolation, but at the intersections.

The bureaucracy bottleneck

The session also highlighted how bureaucracy remains one of the most significant barriers to innovation. One participant raised a simple, practical idea: instead of using off-the-shelf AI tools that store sensitive data externally, why not build an in-house model? Technically, it’s easy and cheap. But institutionally, it’s slow: committees, approval chains, and consultant reports can stall even the simplest project.
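
As a sketch of what ‘technically easy’ could mean here: an open-weight model can be run on the organisation’s own hardware, so sensitive text never leaves its infrastructure. The example below uses the Hugging Face transformers library; the model name and prompt are illustrative, and the point is the pattern rather than any particular model:

```python
# Sketch: running an open-weight model locally so sensitive documents never
# leave the organisation's infrastructure. Assumes 'transformers' and
# 'torch' are installed; the model name and prompt are illustrative.
from transformers import pipeline

# Downloads the model weights once, then runs entirely on local hardware.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = "Summarise in two sentences: the committee discussed data-sharing safeguards..."
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```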

The key message was that many young professionals already have the skills and ideas. What they lack is the space to act. If international organisations want to thrive in the AI era, they need to empower their internal talent—give them a sandbox, and let them experiment.

Watch the event in full below.