Open Forum #58: Collaborating for Trustworthy AI – An OECD Toolkit and Spotlight on AI in Government

25 Jun 2025 14:00h - 15:30h


Session at a glance

Summary

This OECD open forum discussion focused on implementing AI principles and using AI in government services, featuring two main segments with international experts and policymakers. The first segment examined the OECD AI Principles Implementation Toolkit, a practical initiative designed to help countries, particularly in the Global South, develop responsible AI policies tailored to their local contexts. Costa Rica’s Marlon Avalos explained how his country initiated this toolkit project after recognizing that while OECD principles provide strong ethical guidance, many developing countries lack the tools to translate these principles into actionable policies. The toolkit will feature a self-assessment component and repository of best practices to guide countries through AI governance challenges.


OECD’s Lucia Rossi detailed the toolkit’s structure, emphasizing its co-creation approach through regional workshops with countries in Asia, Africa, and Latin America. Mozilla’s Jibu Elias shared India’s community-driven approach to responsible AI, highlighting successful grassroots initiatives like student-developed accessibility tools and tribal community workshops that demonstrate how AI adoption must be locally rooted and people-centered. Niger’s Anne Rachel Ng discussed African countries’ opportunities and challenges, noting that while AI can address development barriers in healthcare, agriculture, and education, the continent faces significant infrastructure constraints, with only 22% of Africans having broadband access and many AI systems performing poorly on African populations due to training bias.


The second segment explored practical government AI implementation, with Norway’s Katarina de Brisis sharing successful use cases including AI-powered X-ray analysis that reduced patient waiting times by 79 days and tax fraud detection that increased detection rates from 12% to 85%. Korea’s Jungwook Kim emphasized three key pillars for effective AI adoption: innovation in data and infrastructure, inclusion to address digital divides, and strategic investment in capabilities. Both speakers stressed the importance of building employee competence, establishing legal frameworks, and ensuring data security when implementing AI in government services. The discussion concluded that successful AI implementation requires inclusive, context-sensitive approaches that prioritize trustworthiness, local capacity building, and international cooperation to prevent widening digital divides.


Key points

## Major Discussion Points:


– **OECD AI Principles Implementation Toolkit Development**: A collaborative initiative led by Costa Rica to create practical tools that help countries, especially in the Global South, translate the high-level OECD AI principles into actionable policies. The toolkit will feature self-assessment tools and region-specific guidance based on best practices from comparable countries.


– **Inclusive AI Development in Emerging Economies**: Speakers from India, Costa Rica, and Niger emphasized the importance of community-rooted, locally-contextualized AI solutions. Examples included student-developed accessibility tools, tribal community workshops, and addressing infrastructure challenges like connectivity and the digital divide.


– **AI Implementation in Government Services**: Discussion of practical AI applications in public sector services, with Norway sharing successful cases like AI-assisted medical diagnosis, tax fraud detection, and police transcription services. The focus was on improving efficiency while maintaining trustworthiness and citizen safety.


– **Challenges and Risks in AI Governance**: Identification of key barriers including inadequate infrastructure, skills gaps, data scarcity, and the need for inclusive governance frameworks. Speakers highlighted risks around bias, exclusion, and the importance of building public trust through transparent, accountable AI systems.


– **International Cooperation and Capacity Building**: Emphasis on the need for collaborative approaches to AI development, with particular attention to supporting developing countries through knowledge sharing, technical assistance, and ensuring no country is left behind in the AI transformation.


## Overall Purpose:


The discussion aimed to showcase practical approaches for implementing responsible AI governance globally, with a particular focus on supporting developing countries. The session sought to bridge the gap between high-level AI principles and concrete policy actions, while demonstrating real-world applications of AI in government services.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout, characterized by knowledge sharing and mutual learning. Speakers were optimistic about AI’s potential while remaining realistic about challenges. The tone was particularly inclusive, with strong emphasis on ensuring global participation in AI development. Technical difficulties with some remote speakers added a touch of informality but reinforced the speakers’ points about digital infrastructure challenges. The session concluded on an encouraging note, emphasizing collective action and continued cooperation.


Speakers

– **Moderator (Yoichi Iida)**: Chair of the OECD Committee on Digital Policy


– **Marlon Avalos**: Director of Research, Development and Innovation at the Ministry of Science and Technology, Costa Rica (participated online)


– **Lucia Rossi**: Economist at Artificial Intelligence and Digital Emerging Technology Division from OECD


– **Jibu Elias**: Responsible Computing Lead for India from Mozilla


– **Anne Rachel Ng**: Director General at National Agency for Information Society, ANSI from Niger


– **Katarina de Brisis**: Deputy Director General at the Ministry of Digitalization and Public Governance from Norway, and long-standing representative at OECD Digital Policy Committee


– **Jungwook Kim**: Executive Director at Center for International Development from KDI


– **Seong Ju Park**: Policy Analyst at Innovative Digital and Open Government Division from OECD


Additional speakers:


None identified beyond the provided speaker list.


Full session report

# OECD Open Forum: Implementing AI Principles and Government AI Services – Discussion Report


## Executive Summary


This OECD open forum at the Internet Governance Forum 2025 brought together international experts to discuss two critical aspects of AI governance: implementing AI principles through practical toolkits and deploying AI in government services. The session featured representatives from Costa Rica, Niger, India, Norway, and Korea, alongside OECD officials, creating dialogue between developed and developing nations on shared AI governance challenges.


The discussion was structured in two segments: first examining the OECD AI Principles Implementation Toolkit led by Costa Rica, and second exploring practical government AI applications. Key themes included the need for international cooperation, community-centered approaches to AI development, and addressing infrastructure challenges while scaling AI implementations effectively.


## Session Overview and Structure


The forum was moderated by Yoichi Iida, Chair of the OECD Committee on Digital Policy, who noted Japan’s role in proposing international discussion of AI principles at the OECD in 2016. The session then transitioned to Seong Ju Park, Policy Analyst at OECD’s Innovative Digital and Open Government Division, who moderated the second segment on government AI services.


## Segment 1: OECD AI Principles Implementation Toolkit


### Initiative Background


Marlon Avalos, Online Director of Research Development and Innovation at Costa Rica’s Ministry of Science and Technology, explained the toolkit’s origins in Costa Rica’s experience developing their national AI strategy. Despite being politically stable and technically skilled, Costa Rica recognized significant challenges in translating OECD AI principles into actionable policies. As Avalos noted, “even a country like Costa Rica, politically stable, technically skilled and internationally connected, face these challenges, then surely other countries like us will too face that challenge.”


The initiative gained momentum when the Global Partnership on AI (GPAI) joined with the OECD AI community in July 2024, creating opportunities for broader collaboration on practical implementation tools.


### Toolkit Structure and Co-Creation Approach


Lucia Rossi, Economist at OECD’s Artificial Intelligence and Digital Emerging Technology Division, outlined the toolkit’s development through regional co-creation workshops across Asia, Africa, and Latin America. The toolkit will include:


– A self-assessment tool for countries to evaluate their AI governance capabilities


– Region-specific guidance tailored to different developmental contexts


– A repository of best practices from comparable countries


– Resources available through the OECD AI Policy Observatory on oecd.ai


The co-creation workshops serve dual purposes: informing toolkit development and creating knowledge-sharing networks among participating countries.


### Country Experiences and Perspectives


**India – Community-Driven Development**


Jibu Elias, Responsible Computing Lead for India at Mozilla, presented examples of grassroots AI initiatives, including student-developed tools such as WebBeast (an AI-powered, open-source web accessibility widget) and PhysioPlay (a WhatsApp-based diagnostic training simulation for physiotherapy students), as well as tribal community workshops. He emphasized that “responsible AI must be inclusive, accessible, and rooted in local values, focusing on communities most affected but least represented in AI development.”


Elias posed a fundamental question: “Don’t just ask who builds AI, ask whose future is it building? Because in countries like ours, trust is not a given, it’s earned. And when communities are trusted as co-creators, not just end users, they don’t just adopt technology, they transform it.”


**Niger – African Context and Challenges**


Anne Rachel Ng, Director General at Niger’s National Agency for Information Society (ANSI), highlighted both opportunities and significant barriers for AI adoption in Africa. She identified potential applications in healthcare, agriculture, and education, while noting critical infrastructure constraints: only 22% of Africans have broadband access, and 16 African countries are landlocked.


Ng addressed data bias issues, noting that only 2% of African-generated data is used locally and that facial recognition systems trained on non-African data perform poorly on African populations. She also recalled how pulse oximeters used during COVID-19 gave less accurate oxygen readings on darker skin, a design flaw that young innovators in Niger are now addressing with a device whose light can penetrate darker skin.


Despite challenges, Ng advocated for patient, culturally-grounded approaches, invoking an African saying: “Europeans have watches, we have time,” explaining that “taking the time to develop context-appropriate solutions is more important than rushing implementation without proper understanding.”


## Segment 2: AI in Government Services


### OECD Research Findings


Seong Ju Park presented OECD research showing that while AI offers significant potential for improving public services, implementation faces numerous barriers. AI use cases are unevenly distributed across government functions, with many initiatives remaining at the piloting stage rather than scaling to wider systems.


Government AI carries higher risks than private sector applications, including ethical, operational, exclusion, and public resistance risks. Some government functions face particular barriers, such as stricter data access rules and requirements for audit trails in public integrity functions.


### Country Implementation Examples


**Norway – Systematic Deployment**


Katarina de Brisis, Deputy Director General at Norway’s Ministry of Digitalisation and Public Governance, shared concrete examples of successful AI implementation:


– AI-powered X-ray analysis allowing patients to go home immediately instead of waiting, affecting about 2,000 patients


– Tax administration fraud detection improving from 12% to 85% detection rates, generating 110 million kroner in additional revenue


– Police transcription services streamlining administrative processes


Currently, 70% of Norwegian state agencies use AI, with targets of 80% by 2025 and 100% by 2030. Norway is investing in Norwegian language foundational models and computing infrastructure while implementing the EU AI Act.


**Korea – Strategic Framework**


Jungwook Kim, Executive Director at Korea’s KDI Center for International Development, outlined a three-pillar framework: innovation (data and infrastructure development), inclusion (addressing digital divides through accessibility improvements), and investment (strategic resource allocation).


Kim noted that AI involves “moving targets” requiring “agile measures to take care of the AI safety issues,” highlighting the need for adaptive governance frameworks.


## Key Themes and Consensus Points


### International Cooperation


All speakers emphasized the critical importance of international cooperation for successful AI development. The OECD toolkit represents collaborative efforts to bridge gaps between principles and practice, with support across different developmental contexts.


### Community-Centered Approaches


Multiple speakers stressed involving local communities, especially marginalized groups, in AI development to ensure solutions address real local needs rather than imposing external solutions.


### Infrastructure as Foundation


Representatives from developing countries highlighted connectivity and infrastructure limitations as fundamental barriers requiring attention before sophisticated AI governance frameworks can be effectively implemented.


## Challenges and Implementation Barriers


### Scaling from Pilots to Systems


A significant challenge identified across countries is moving AI initiatives from pilot projects to systematic implementation across government services.


### Capacity Building


The pace of AI development often exceeds the speed at which human capacity can be developed, creating mismatches between technological advancement and workforce readiness.


### Bias and Inclusivity


Current AI systems often fail to serve non-Western populations effectively due to bias and lack of representative training data, requiring both technical solutions and inclusive development processes.


## Next Steps and Commitments


The OECD committed to:


– Launching a comprehensive report on governing with AI


– Creating a dedicated hub for AI in the public sector on oecd.ai


– Organizing regional co-creation workshops, starting with ASEAN countries in Thailand


– Conducting global data collection on AI policies and use cases for the OECD AI Policy Observatory


Regional workshops will continue with African, Central American, and South American countries to inform toolkit development and build knowledge-sharing networks.


## Conclusion


This forum demonstrated both the potential and challenges of implementing responsible AI governance globally. While speakers showed strong consensus on fundamental principles—international cooperation, inclusive approaches, and context-sensitive solutions—they also acknowledged significant differences in implementation approaches based on developmental contexts and available resources.


The discussion revealed that successful AI implementation requires more than technical capabilities; it demands inclusive governance frameworks, robust infrastructure, community engagement, and sustained capacity building. The OECD AI Principles Implementation Toolkit represents an important step toward bridging the gap between high-level principles and practical implementation, supported by ongoing collaboration and knowledge sharing among countries facing similar challenges.


The path forward emphasizes balancing international cooperation with local ownership, ensuring that AI development serves community needs while building the foundational capabilities necessary for sustainable and equitable AI adoption.


Session transcript

Moderator: Good afternoon everyone, and welcome to this open forum organized by the OECD. Thank you for joining us here in Lillestrøm and also online. This session brings together two connected discussions. Before jumping to the content, my name is Yoichi Iida, the chair of the OECD Committee on Digital Policy, and I’m very happy to be here together with all of you to moderate this session. As the first part, we begin with a panel on the OECD AI Principles Implementation Toolkit, a practical initiative designed to support countries in strengthening their AI ecosystems and in adapting governance frameworks to local contexts. The toolkit will offer region-specific guidance to help bridge AI divides and advance responsible, inclusive AI development. We will then transition to a second segment focused on how governments are using AI in practice to improve public service delivery and policymaking. Since 2019, the OECD AI principles have guided national strategies and international cooperation on AI. The OECD AI principles also serve as the common foundation guiding the work of the Global Partnership on AI (GPAI), which recently joined with the OECD AI community in July 2024 in a new integrated partnership. Despite the transformative potential of AI, access to the benefits of this technology remains uneven. Many countries face challenges related to infrastructure, human capacity, and policy frameworks, along with greater exposure to risks such as task replacement. Today’s discussion will spotlight policy efforts and initiatives that help close those gaps and promote inclusive AI ecosystems around the world. Please join me in welcoming our four distinguished speakers. First, online, Mr Marlon Avalos, Director of Research, Development and Innovation at the Ministry of Science and Technology from Costa Rica. Second, on my left side, Ms Lucia Rossi, Economist at the Artificial Intelligence and Digital Emerging Technology Division from the OECD. Third, again online, Mr Jibu Elias, Responsible Computing Lead for India from Mozilla. And of course, last but not least, Miss Anne Rachel Ng, Director General at the National Agency for Information Society, ANSI, from Niger. Welcome. So we will first hear from the panelists about their experience in designing policies for fostering AI development and diffusion. After the first round of questions, we will go around for a short final reflection from each speaker. We will then move to our second segment, which will talk about AI in the public sector. Here we will hear from three distinguished speakers. So on my right side, Miss Katarina de Brisis, Deputy Director General at the Ministry of Digitalization and Public Governance from Norway, and also a long-standing representative at the OECD Digital Policy Committee. And Dr. Jungwook Kim, Executive Director at the Center for International Development from KDI. And Miss Seong Ju Park, Policy Analyst at the Innovative Digital and Open Government Division from the OECD. So after the second segment on AI in the public sector, we will then open the floor for a question and answer session to hear from you and engage in a conversation. We will monitor the online chat and take questions from the room as well. As we will be taking questions after the second segment, if you are joining online, feel free to share your comments and put your questions in the chat box.
If you are here with us in the room, please note your questions down, and we will reply to them after the second segment of this open forum. So we start with the first segment, and I would like to start with the discussion on collaboration on trustworthy AI and hear about designing AI policies and the plans for the OECD toolkit to provide support to countries while elaborating these policies. So I will start with Mr. Avalos online. Mr. Avalos from Costa Rica initiated the work on the OECD principles implementation toolkit. So, Mr. Avalos, what prompted this initiative and what has been Costa Rica’s experience so far in developing a national AI strategy from this perspective?


Marlon Avalos: Thank you very much, Iida-san, for giving me the floor. Good morning and good afternoon, dear colleagues connected virtually and there in Norway. It’s an honor to be at this Internet Governance Forum 2025 to tell a little bit about our experience designing our…


Moderator: It seems to have some technical issues online, so please wait a little bit before we get him back, but otherwise we will proceed to the second speaker. Okay, so thank you for your patience. Before we get him back online, I would like to proceed to the second speaker. So, moving to Lucia, I would like to ask you, could you tell us more about the OECD AI principles implementation toolkit, with its objectives, structure, and how it aims to support governments with different levels of AI maturity in policymaking. What is the overall vision for this project going forward? Lucia, the floor is yours.


Lucia Rossi: Thank you, Yoichi, and good afternoon to the audience here and online. It’s a pleasure being here at the IGF. So, as Marlon was starting to say, this project was initiated by Costa Rica, and it started off from the consideration that AI opportunities are manifold across sectors and across the globe. There are, of course, several potential transformative effects of AI across sectors, and we will hear later on about AI in the public sector, as well as, as we know, in agriculture, in health care, in education. These opportunities are however difficult to seize for different countries, as there are several bottlenecks that oftentimes prevent countries from having the capacity or the financial resources or the organizational resources to devise effective AI policies. So with these considerations in mind, we started with our delegates in the Global Partnership on AI and with the support of several countries, including Japan, Costa Rica, the UK, France and Korea, to develop what is a practical toolkit to implement the OECD principles. And just allow me to stay a bit on the principles that, as we heard, are the foundational document for the OECD in AI governance and that were adopted in 2019. These principles have since then been the object of further work from the OECD to provide analysis but also guidance on how to implement them. They comprise five policy principles that are recommendations to governments around areas such as research and development, infrastructure, the policy environment, skills and jobs required to effectively implement AI across sectors, and international cooperation. But there are also values-based principles covering the values that all stakeholders should strive to embed in AI systems and, of course, to respect: democratic values, fairness, transparency, explainability, accountability, among others. So what this toolkit aims to do is to provide really practical resources for implementation, facilitating adoption across countries with a specific focus on emerging and developing economies, but tailored to the diversity of needs, preferences and available policy options across countries. Ultimately these resources will support advancing a more inclusive and effective AI governance. So in practice, what this toolkit will look like is an online tool that will be composed of two main elements, the first one being a self-assessment that countries will be able to navigate autonomously and that would guide them through, on one hand, the areas that they would need to strengthen in AI governance and, on the other hand, priorities that they may want to establish. And then once this self-assessment is completed, the toolkit will provide suggestions based on best practices in regions that are comparable or have similar challenges, so that they can take inspiration from these other countries. The second component will build on the repository of national AI policies that we have on the OECD AI Policy Observatory and that we aim to strengthen by collecting further information on national and regional initiatives. And in terms of the design of this toolkit, one key feature is really the co-creation component.
So to develop the toolkit, we are currently planning and organizing regional workshops, and we already have one such regional workshop planned, to have real engagement with countries and with the designers of AI policies, to understand better, on one hand, what are the key challenges they face when devising AI policies and when thinking about AI governance in their respective countries. And on the other hand, to understand what resources they need, but also, as I mentioned, what practices they have put in place to overcome these challenges. So we will have a first such workshop in Thailand, again supported by Japan, with ASEAN countries, and we will then organize several others, for instance, with African countries and with Central American and South American countries. And we plan to make this tool as helpful as possible. I think I will stop here in the interest of time, and I’m just checking online if Marlon is there, but I don’t see him.


Marlon Avalos: So, please. Thank you, Iida-san. This is an immersive experience. I just lost my connection, and this is a challenge that developing countries like us face every day, every time. And, well, I was saying that our decision to promote this OECD AI principles implementation toolkit wasn’t a coincidence. It was intentional, based on our national experience, as you can see. And we saw a reality: while the OECD principles provide strong ethical guidance, many countries, especially in the Global South, still lack the tools and institutions to turn those principles into actions. Our initiative was motivated by three aspects: necessity, urgency, and opportunity. Why necessity? Because the AI revolution is reaching all countries, but the capacities needed to adopt it responsibly are still unequally shared. Urgency, because we saw how quickly the benefits of AI were concentrated in advanced economies, leaving others behind, mainly in infrastructure and AI compute capacity. And opportunity, because we have a chance to move from principles to concrete capabilities, mainly in developing countries. As context, we launched our national AI strategy last October. Currently, it’s being implemented with the support of over 50 entities across government, academia, civil society, and the private sector. And we learned a lot of things in this process. First, that a successful strategy must be grounded in reality. That’s why we tried to focus on what truly matters: ensuring the ethical, secure, and responsible use, development and adoption of artificial intelligence, always with people at the center and aligned with our national priorities and values. We prioritized key sectors where AI can add tangible value, like health, education, agriculture, and public services, reflecting our development goals and our comparative advantages like environmental leadership, political stability, and international engagement. We also decided to build a solid foundation first, based on our strategic objectives: first, design flexible and adaptive regulatory frameworks; second, strengthen our R&D and innovation ecosystem; third, develop talent and skills for a changing world; and fourth, leverage AI in the public sector as a tool for inclusion and efficiency. Our guiding principles emerged through diverse benchmarking, from the OECD and UNESCO recommendations to the Hiroshima AI Process Code of Conduct, and our national values rooted in peace and human dignity. As I said, we took the best parts of a lot of instruments. For example, we were inspired by the European Union AI Act, the U.S. AI Risk Management Framework, AI policies from our regional peers in Latin America, and several papers and reports. We didn’t stop there. We conducted a national risk assessment based on real threats and prior experience. As you can see, we got inspired by a lot of instruments and references, but one of our most important conclusions was that international collaboration is essential, mainly for developing countries like us. That’s why we embedded this international leadership as a core line of action in our strategy; our active participation in the OECD, as a member of GPAI, in regional initiatives, European programs and other programs gave us the path to do it. Designing a strategy like this wasn’t easy, because we had a lot of goals, we had a lot of priorities, but we maybe lacked the knowledge that other countries, the developed countries, have.
If even a country like Costa Rica, politically stable, technically skilled and internationally connected, faces these challenges, then surely other countries like us will face that challenge too. Just a few days ago, as chair of the OECD Ministerial Council meeting, Costa Rica proposed the development of this OECD AI principles implementation toolkit, a tool now endorsed by several countries, members and non-members. Getting to this point required months of preparation and negotiation with developed and developing countries, thanks to the support and talent of the OECD Secretariat, represented today by Lucia Rossi on the panel, to design a tool that will contain simple and actionable features to help governments in the struggle of building their own AI policies: a self-assessment and implementation guide that my colleague Lucia Rossi explained in her intervention during my reconnection issue. This is not only a Costa Rica initiative; this is a collective project that is entering a phase of regional co-creation with the support of countries like Japan, Korea, Italy, France, the European Union, the Slovak Republic and other countries that are supporting us not only politically but financially. Countries of different regions, the Central American region, the Latin American region, Africa, and Asia, will help shape the toolkit’s next iterations, ensuring it adapts as technologies evolve and societies change. Lastly, the success of the toolkit will depend, we hope, on three things: customization, learning, and evidence. We need features that reflect local needs, processes that evolve over time, and metrics that show that AI is actually delivering value for people. Costa Rica offers its lessons based on our experience in the design of AI policies and the next tools and instruments that we are designing, for example, the sandbox, the regulations, and other instruments. And for sure, our full commitment to help turn the energy that we have and the support that countries gave us into actions, so that no country, regardless of size or income, is left behind in this age of artificial intelligence that we face at this moment. I will stop here, and thank you, and my apologies for the connection issue. Thank you.


Moderator: Okay, thank you very much, Marlon, for sharing your experience and your efforts on this very important initiative. If you allow me to talk a little bit about Japan’s experience: we actually started this discussion in the year 2016 and proposed an international discussion on AI principles to the OECD. That was the beginning of the whole process, and then, when people agreed on the OECD AI principles, they were actually very comprehensive and very high-level. So some people said, you know, this is wonderful, but how can we make this into practical policies and actions? So now we are making efforts together, not only by Japan, but all together with Costa Rica, Korea and others, of course backed by the OECD Secretariat, to guide governments and other stakeholders to understand and turn this very comprehensive set of principles into actions and practical policies. So this is a wonderful process and I’m very happy to hear these two presentations. And now I would like to move on to Jibu Elias from Mozilla online. So based on your experience and work with Mozilla and also your experience in India’s AI ecosystem, Jibu, what types of community-led or policy-driven initiatives have proven most effective in supporting responsible AI adoption, particularly in emerging economies? And what insights can we derive from these initiatives that could be relevant for policymakers? So Jibu, the floor is yours.


Jibu Elias: Thank you very much, Yoichi-san. It’s an honor to be here to share my experience building a responsible AI ecosystem in India, one of the most complex and dynamic tech environments in the world. So let’s begin with a foundational truth. In emerging economies, AI adoption is not just a question of capacity, but a larger question of context as well. Responsible AI must be inclusive, accessible, and rooted in the values and lived realities of the people it should serve. And at the Mozilla Foundation, we tried to meet these challenges head-on through a unique initiative called the Responsible Computing Challenge, or RCC. So India has one of the largest, or I think the second largest, developer populations in the world. Yet there are a lot of shortcomings. For example, ethics, accessibility, and inclusion are almost entirely missing from the mainstream AI or even the tech curricula. The AI workforce in India is concentrated in elite urban clusters around cities like Bangalore or Gurgaon, leaving the smaller tier-two and tier-three cities, rural communities, and especially women in the workforce behind. And fundamentally, there’s a growing trust deficit. People are rightfully skeptical of opaque systems that affect their jobs, access to welfare, or even their freedom. So in RCC India, we decided not to start with rather abstract frameworks. We started with people, especially students, academic faculty, women, marginalized communities like tribal populations, and most importantly, first-generation learners who had never been asked what responsible AI meant in their world. So from the starting point, we designed a deeply localized and community-rooted approach where we begin with this question: what does responsible AI mean to those who are most affected by it, but at the same time, least represented in building it? Our answer came from the communities we mentioned before, you know, students, marginalized communities, and importantly, young innovators across the country. One of the most striking experiences came from one of the colleges we worked with, called Merian College, on a hilly campus in the Western Ghats in Kerala, which became a testbed for some of our ethical tech innovations. One of its standout outputs is an AI-powered tool called WebBeast, which was developed by a first-year BCS student. The tool is a lightweight, open-source, AI-powered accessibility widget, which was built as part of an equitable digital access course we developed with the university. It’s now been used by 30 websites across the world, and it even received a design patent from the Indian Patent Office. So this isn’t just a student project. It shows that even first-year undergraduates, when empowered with ethical frameworks and open tools, can create global public goods. Similarly, we had another tool called PhysioPlay, which is a WhatsApp-based AI simulation tool for physiotherapy students, designed to help them build diagnostic skills through gamified real-world casework, built by a physiotherapy student. SpeakBoost, a communication coaching platform that provides AI-powered feedback on fluency, filler words, grammar and tone, supporting students preparing for interviews and presentations. TwinSage, which was developed by, again, a community of students from Maharashtra, coming from very marginalized groups who don’t have the privilege of access to high-end technology.
So they developed this tool, which is a personal finance chatbot that teaches college students about budgeting, saving and financial planning through natural language conversations. So each of these tools we mentioned here is, first of all, a community-based, community-rooted tool, in some cases built by students for their peers, understanding what is lacking in their ecosystem, what they need to build. They are ethics-aware, focused on responsible AI, and open source. They represent not just innovation, but what democratized digital leadership looks like. While students demonstrate what responsible tech looks like from the ground up, our work with faculty led to initiatives addressing another critical frontier of AI, such as explainability in high-stakes domains. In our work with the Indian Institute of Information Technology (IIIT) Kottayam, we developed something called the FactSets Lab, which launched a suite of explainability dashboards designed to tackle the larger black box problem in AI. One of their dashboards helps users understand why an AI system made a decision, using SHAP values, bias audits, and fairness metrics. Similarly, we developed a dashboard called AI Fora, which enables real-time interactive testing of AI predictions on real data sets, making model behavior visible to even non-technical users. And finally, IXI, which applies explainability to medical AI by using Grad-CAM heat maps to highlight what influenced diagnostic decisions in retinal scans. So these are open tools, and the key impact is that they give everyday users, regulators and policymakers the ability to question and, importantly, correct the course of AI. This is the future of public AI infrastructure: transparent, participatory, and grounded in accountability. And finally, our most powerful insights came not from labs, but from communities often left out of the AI conversation altogether. At the Lendi Institute of Engineering Technology in Andhra Pradesh, we ran an ideathon with students from rural and semi-urban backgrounds, where we guided activities in empathy, inquiry and creative problem-solving, and students identified challenges in their own communities, from waste management to safety to water scarcity. They even built AI-assisted solution blueprints and video pitches, applying digital ethics in a more practical and personal way. In parallel, we also took the RCC model even further, to an area called Chintapalli, a tribal area in the Eastern Ghats, where we conducted workshops with 56 tribal women, many of whom had never accessed AI tools before. We did it in the local language, Telugu, through participatory storytelling, visuals, and guided use of AI tools such as ChatGPT, to map real problems such as unemployment, safety and healthcare, and to explore how AI could support micro-enterprises in herbal medicine, food production, and arts and crafts, some of which are the main livelihoods these communities rely on. The result was not just minimal tech exposure but rather, I’m happy to say, a transformation powered by a powerful technology like AI, built on cultural grounding, peer collaboration, and dignity-first design. So these workshops proved that responsible AI doesn’t begin with the tools, it begins with trust.
So while wrapping up, let me say that the main lesson from India’s AI ecosystem, and what we see works in emerging economies, or the Global South, or the global majority as we call it, especially having worked at the intersection of civil society, academia, and national policy, is that we need ecosystems that are locally rooted, capacity-driven, and above all, people-centered. And the most powerful lesson here is this: don’t just ask who builds AI, ask whose future is it building? Because in countries like ours, trust is not a given, it’s earned. And when communities are trusted as co-creators, not just end users, they don’t just adopt technology, they transform it. So if we want AI that is safe, just, and truly inclusive, we must design not only with code and policy, but with humility, memory, and imagination as well. So thank you very much for this opportunity. I will stop here.


Moderator: Okay, thank you very much, Jibu, for this wonderful story. It’s great to hear about these experiences from the ground, and congratulations on your work. India’s success with DPI and digital public goods is a powerful example of good policy practice, and I’m very happy to hear that responsible AI principles are backing up such success in digitalization. So now I would like to turn to Miss Anne Rachel Ng. From your perspective as a digital policy leader in Africa, what are some of the key opportunities and also challenges for African countries in developing inclusive and context-sensitive AI policies? How can international initiatives like the OECD AI policy toolkit better support countries in that region? What key considerations should be made? So Anne Rachel, the floor is yours.


Anne Rachel: Thank you very much and good afternoon everybody. I’m actually very happy to go after Jibu in this conversation because he gave a lot of examples that I can relate to. But I’m going to start by saying that the Global AI Index, which is topped by countries like the United States and the United Kingdom, places African countries in general among the “waking up” or “nascent” categories when it comes to AI investment, innovation, and implementation. So for example, Egypt, Nigeria, and Kenya are nascent, while Morocco, South Africa and Tunisia are waking up. There’s a lot more waking up and I really hope that we will soon, you know, all be graduating. So, we do face opportunities and challenges, and those are basically about developing everything that is, as Jibu said, inclusive, context-sensitive AI policies, and I’m pretty sure international initiatives like the OECD toolkit can help, because it does give, you know, a few places where we can pick and choose and also make sure that we look into others’ experiences, so that in doing what we have to do to get there, we do it the right way. So, in terms of the key opportunities, for example, we do have development barriers that can be alleviated. AI can accelerate, you know, critical sectors like healthcare, and in there, for example, if I take the case of my own country, that is Niger, we started years ago something that is called a program on smart villages, and we started with healthcare. So, you know, with telemedicine that is geared mostly to skin diseases, because it was easier to take pictures, send them to dermatologists and, you know, get treatment to people, and also, you know, disease prediction. But it’s gone to the point that, for example, I have a group of young people right now at home who are working on a device. Remember the oximeter during COVID, where you would measure oxygen levels in a person that was sick? So, a lot of researchers found out that, for example, that is a device that does not gauge oxygen level the right way in people who are melanated. So, they decided that it was something that they wanted to do during COVID. And today, they actually have a little device that is just like the regular oximeter, but whereby the light can penetrate a darker skin and give true measures of what the oxygen level is in a person’s body. And in agriculture, you know, precision farming and agroforestry are among the places where we’ve been using AI; in education, of course, personalized learning, and the use of languages in general. Because this is a place where hardly anybody grows up with just one language in Africa. It is important, when we’re trying to get contextual AI, that to get trustworthiness we make sure we have people who really understand what’s in it for them. We tend to have policies that are geared to people who can read and write what we call the official languages. And then we forget that in our settings, we have about, you know, 60 to 80% of our populations that are still rural. So, they don’t speak English, they don’t speak French. And if you want them to be part of this, you really have to explain it to them in their language. And that’s also one of the reasons why the little applications that the kids are doing, in terms of voice recognition software that can help people whether in FinTech or healthcare or elsewhere, are really helping. We do have another opportunity, which is simply that we have a very young population in the region.
Now, we do need a skilled workforce, so capacity development and deployment is something that we absolutely need. Now, one of the big constraints that also comes with that is that kids do not grow at the speed that artificial intelligence is growing, and when I take, again, my own country, we have, you know, 65-plus percent that are under 25 and at least 50 percent that are under 15. So it’s really a very young population, and as much as we need a lot of capacity building, we need to give it time, you know, for the kids to get to the point where we can have a sound and real workforce. We do have local innovation ecosystems that are really growing AI solutions that are geared to the local place, for example using a lot of mobile financial tools to make sure that, from women agriculturists all the way to land sharing and deed recognition in rural areas, things like that are being done. So those are, you know, some of the key opportunities, and of course we do have the regular challenges that everybody knows in terms of infrastructure. Again, when I take the case of my country in the African region, we have 16 countries that are landlocked. So connectivity infrastructure is already something that is quite dear. You couple that together and you still have only about 22% of Africans that have broadband access. So that’s still something that we need to work on, because it exacerbates the divide. In terms of policy and regulatory frameworks, we have a deep fragmentation also, because many countries lack cohesive strategies, AI strategies, or harmonized regulations. So you do have uneven implementation or even, you know, missed cross-border collaboration opportunities, because, in as much as we have some of the ministerial meetings, for example, on the continent to talk about one policy or the other, we absolutely need, if we’re going to use, you know, AI tools in fintech, to make sure that the finance minister understands it’s not only the, you know, the technology or digital minister talking about this. We need to make sure that if we’re going for a national ID, the person who is going to be ID’d understands the reasons why and what it’s bringing to them in terms of advantages. And we also need all of the different government ministries, like, you know, Interior, Defense, all the way to the National Data Protection Agency, to talk together to make sure that whatever is put in place is really protecting people’s privacy. So we also have, of course, data scarcity and bias. As I just said, we do have a lot of facial recognition systems, for example, globally that are trained on non-African data, and they perform poorly on our people. And in general, right now, at the minute that we’re speaking, only about 2% of data generated on the continent is used locally. So it’s basically hard to get real data back to our institutions, just because it’s managed by global platforms that do not necessarily want to share it readily with us. And again, we do have the capacity constraints, just because governments struggle to keep pace with AI advancements. So you’ve barely started talking about data privacy when your agriculture minister wants to put a lot more stuff in there, and environment, and everything. So all of it collides to the point where, honestly, governments are having a hard time sifting through the little data that they have to make sense of it locally. So toolkits like the OECD one can help.
But it can only help if we really have modular, flexible guidance also, you know, for low-resource settings. So things like Jibu and Marlon talked about are really interesting and can be looked at, and that can also help some of our countries, because it’s much better to have real use cases than generic benchmarks; those are great but, you know, they don’t really show you how to make it work at home. So in terms of capacity building, we definitely need more AI research centers. We need policy training and knowledge sharing, you know, with platforms. How to make that happen is also one of the things that we’re grappling with, and we need all of that, of course, so that our own policymakers can be empowered to have discussions at the level where, you know, policies can then trickle down to people. And, of course, we all talked about it, inclusive governance: globally, we must include African voices to avoid the one-size-fits-all. You know, I love the example of that oximeter, because we all kind of saw it and we’ve experienced it somehow, but to suddenly discover that this little device that we were trusting to do something is not really doing the right thing for us was really eye-opening. So it’s important that, you know, everybody’s perspective is taken into account in making sure that these global toolkits are done the right way, looking at people’s, I guess, particular settings and contexts. In terms also of, you know, developing public-private partnerships, it is something that is starting to get more traction in the region, because of course government cannot do it all. We absolutely need the private sector to, you know, be part of this whole process and to also make sure that they can develop things that they can, you know, live on. So I think, having said that, I will conclude by saying, I’m going to say something that makes us all laugh all the time, that maybe a few here can relate to, at least if you’re African. We do say Europeans have watches, we have time. So I’m just saying this to plead for, you know, taking the time to do things, because rushing into doing things that are not geared to the context just keeps us behind more than anything, because people do not understand what it is we’re trying to do or where it is that we’re trying to get to. So it is truly important that everybody is listened to, everybody is part of the discussion, everybody is brought to the table, so that the trustworthiness that we want is not only in AI, but in the whole, you know, digital transformation that we want to see in our countries. Thank you.


Moderator: Okay, thank you very much for this very insightful presentation, Rachel. And I saw a lot of commonalities between your country and our country, like issues such as education, or maybe spreading the idea, which is always very difficult in Japan. But I really agree with the point that, you know, the inclusive multi-stakeholder approach is definitely important in this area. So thank you very much. And for the sake of time, I thank all speakers for those rich and insightful contributions. And now we turn to the second part of our session, which will focus on how governments are using AI in practice across key public functions. This is also of relevance to the previous segment, as the OECD AI Policy Toolkit will have information on sectors, including the public sector. So I’m pleased to hand over the moderation to Ms Seong Ju Park, Policy Analyst at the Innovative Digital and Open Government Division of the OECD, who will lead the next segment. So, Seong Ju, please.


Seong Ju Park: Thank you, Mr Moderator. So before we start, I just want to quickly share: I was recently back in my country, Korea, and I needed to explain the history of a palace to the friends I had over there. Before, I would have searched for the palace, tried to understand the information I found, and then explained it in English to my friends. But this time, I just asked ChatGPT to give me a very catchy explanation about this palace, and then I just played it for my friends. So AI has changed many aspects of our lives, how we communicate, how we seek information. And this is affecting governments as well. It is accelerating the digital transformation of the public sector, changing how governments work, how governments design and deliver policies and services. And it has also changed the expectations and needs of the citizens and businesses that they serve. So before I invite the two panelists that I have here, I want to quickly present to you some of the OECD findings on AI in government. May I have the slides? Okay, can we put it on, it’s in presenter mode. Thank you. So AI as a tool has great potential to support governments in improving productivity, responsiveness and accountability. AI can automate and streamline mundane and repetitive tasks, reallocating the efforts of public servants to more meaningful tasks, interacting with citizens and businesses. And AI can also support tailoring processes and personalizing government services to meet users’ needs. AI can enhance decision-making by supporting governments with making sense of the present and better forecasting the future. AI can also support enhancing accountability and detecting anomalies. Also, AI can help governments unlock opportunities for external stakeholders. So how can governments enjoy these potential benefits in a trustworthy and responsible way? The work on governing with AI seeks to address this question of how to develop and then deploy trustworthy AI in governments. And we started by looking at what has been done across different government functions. So we have conducted an analysis of use cases across 11 government functions covering three broad categories: policy functions, key government processes, and services and justice. In total, 200 use cases were selected based on their influence, diversity, and representativeness. Based on the use cases, literature research, and recent policy developments, we were able to identify key trends shaping the current state of play, major risks, and implementation challenges that governments face, and also to explore potential uses and future pathways. So the first trend we saw is that use cases are unevenly distributed. There are a number of potential explanations for this distribution that you see on the screen. I won’t be able to share all, but I will try to share a couple with you. The policy functions most represented tend to be the ones most in the public eye, potentially suggesting a focus on areas that have immediate visibility to citizens. Factors going into this could involve both more demands from citizens, but also a desire among governments and political leaders to visibly demonstrate the value of using AI in government.
And we also found that some functions face particular barriers or complexities, such as particularly stricter rules on data access and sharing, and then stricter requirements for thorough audit trails in public integrity. Another trend we saw is a big emphasis on automating and personalizing processes and services. The slightly more than half of the examined use cases, they seek to contribute to the automation, streamlining and tailoring and personalization of government processes and services, particularly in justice, public services, civic participation and regulatory design and delivery. We found that four out of 10 use cases seek to enhance decision-making, sense-making and forecasting, with most concentrated in public services, regulation and civic participation. I have some of the use cases, I won’t be able to go through them, but the OECD is planning to launch the more comprehensive report where you will be able to find all 200, well, some of the 200 use cases that I mentioned earlier. So I will skip through different use cases we found for supporting different functions of the government, and then I will go to the most important topic when it comes to government AI in government. So it might not be a fun topic for us to discuss, but government’s use of AI is quite different from use of AI in private sector. It comes with higher risk. It has potential dangers and threats that could seriously harm individuals’ lives and also society as a whole. It could potentially undermine public’s trust in government, the legitimacy of government’s AI use, and even democratic values. So to address these concerns, it is important to continuously consider potential risks that may not exist today, and here on the screen you see the general five risks that we identified through our research. So these risks range from ethical risk, operational risk, exclusion risk, to public resistance and missed opportunities and then it was mentioned during earlier segment, a widened gap between the public sector and then private sector capacities. So beyond grappling with this risk, we also found that governments all face a number of implementation challenges when seeking to develop and use AI. So we found that there are many use cases, however, they remain at a piloting stage and many are struggling to scale the pilots into the wider systems or services. And also there is a large room for improvement when it comes to actionable guidelines. Also governments need to navigate a rigid regulatory environment. And the next challenge is shared by almost every government on this planet. There are inadequate data, skills and infrastructure in the public sector. In addition, governments need to better understand the cost and benefit of AI in the public sector. Many are still, the cost and benefits around the use of AI in government is quite unknown. That makes it quite difficult for policy makers to make business cases to scale up their AI efforts. So to support governments to mitigate this risk and then overcome these challenges, we have worked together with the OECD and then the partner countries. on a framework to support government’s AI efforts. This is an evolving framework and then we only seek to provide guidance for countries so that they can continue on through this AI journey. As you can see, the framework is organized around three sections. So first is a level of engagement. 
This includes the different stakeholders that need to be engaged in building the foundations for a responsible use of AI in the public sector. Our previous speakers mentioned involving different stakeholders, not only from the public sector but also from the private sector, academia and users, in devising AI strategies or developing AI solutions, so it is important to have different actors around the table. The second element is enablers. Enablers include areas where policy actions can be prioritized to establish a solid enabling environment and unlock the full-scale adoption of AI in the public sector. These areas include governance, capabilities, and collaborations and partnerships, where policymakers currently indicate the existence of important constraints and shortcomings. The last element is guardrails. Guardrails include options for policy levers that governments can consider developing for a responsible, trustworthy and human-centered use of AI in the public sector. These can range from soft law, guidance and standards to legislation on AI, enforcement mechanisms or oversight bodies. This work is part of a bigger OECD project called the Horizontal Project on Thriving with AI. Under this project, there are specific deliverables focusing on AI in government. As I mentioned before, there will be an OECD report on governing with AI, which goes much deeper into the details of what I just quickly presented to you. And there will be a dedicated hub for AI in the public sector on oecd.ai, which will be a sort of repository for policymakers, practitioners and researchers. We are also planning a global data collection exercise on AI policies and use cases, which will be presented through the OECD AI Policy Observatory. So thank you very much. That was my very quick presentation, just to give you an idea of where OECD research stands when it comes to AI in government. Now I would like to invite the two panelists to hear from them what it means for governments to harness AI in practice. The first topic will be around AI opportunities in the public sector, so I would like to invite Katarina first. Katarina, Norway has been exploring AI to enhance the efficiency and effectiveness of public sector services. Can you share with us some of the early impact that you see, or expect, from Norway's use of AI in government?


Katarina de Brisis: Thank you, Seong Ju, for your introduction. Artificial intelligence tends by now to be perceived as being ChatGPT or the like, but artificial intelligence is actually much more than that, and it has been applied and used in Norway for some years already in many government services, especially in the health sector. We have several applications that are having a real, practical impact on people's lives. One case is our Vestreviken hospital trust, where they implemented AI to analyse X-rays of fractures, and it really saved time for patients: about 2,000 patients were able to go home immediately instead of waiting for the results of their analysis and diagnosis, cutting waiting times by 79 days. This is now being deployed to several other hospitals, so it gives really practical benefits on the ground. Then we have the Norwegian tax administration, which developed an AI model that, combined with rule-based models, analysed submitted tax returns looking for missing declarations of income from renting out secondary homes. That led to detection rates of 85%, compared with 12% before, and produced 110 million kroner in additional revenue. In cancer treatment, there are hospitals using AI to produce three-dimensional maps of internal organs to allow more targeted radiation treatment, and this has been in use since 2023. There are also hospitals using AI to analyse patients with epilepsy more accurately, so that it can be diagnosed precisely and quickly. Our student loan agency uses AI for housing verification checks, to make sure that no public funds are misappropriated by students claiming to live in one place while actually living somewhere else and collecting grants for it. Our police authorities use AI to transcribe interrogations when they investigate crimes, which saves a lot of time because the AI transcribes spoken language into written language immediately. So in general we already have a lot of this kind of use, but the potential is still very great. We ran a state employer survey in 2025 that asked 200 state agencies about their use of AI, and 70% answered that they actually use AI in their daily work. I think this is mostly generative AI systems, which they use for things like designing job advertisements, case processing, analytical work and helping them in recruitment procedures. But that is the state level; we also have more than 400 municipalities, many of them very small, and the potential there is much greater. We still have a way to go there. What we also need to work on is better tools to assess the benefits of AI. We have cases where real benefits have already been produced, but we need to look across the board and develop tools that give us a methodological basis for assessing the benefits of introducing AI in various sectors and at various levels of government. So I will just finish here.


Seong Ju Park: No, thank you. That is a really important point. I think many governments are still trying to find the best way to measure what benefits and impact the use of AI actually brings in the long run. But some of the cases you shared clearly demonstrate that the use of AI has helped the Norwegian government enhance efficiency while also improving people's lives, saving them time and money. Then I will go to Dr. Kim. Dr. Kim, you have conducted extensive research on Korea's use of digital technology, including AI, for enhancing services and policies. Could you describe the key elements that governments should consider to ensure that AI is used effectively, innovatively and inclusively?


Jungwook Kim: Thank you. Korea is ranked as one of the leading countries in the OECD Digital Government Index, which was published recently. And as Anne Rachel said, there are different stages of development and adoption of AI technologies on the public side. But I am pretty sure there is no graduation: it is a long journey and a gradual change in the government services delivered to the public. So I would like to address some of the key enablers, or pillars, of Korea's history of AI adoption and digitalization in public services. The first one is innovation. Innovation is change: change in your life, change in how you work, and change in how you address needs and deliver services. For innovation, we have three different targets. One is data. We need open data, but we also need machine-readable data, which was not available before. That means we need to work on developing data, accessing it, processing it, aggregating it and changing data formats so that we can utilize it in AI adoption. So we need change in the data. The second is infrastructure. Every government has infrastructure for dealing with and providing public services, but adopting AI poses challenges to it. That means we need innovative ways to adapt the current infrastructure for public service delivery. The third is public service delivery itself. We need brand new, citizen-centric AI public services that were not available before. They are feasible, but we need to work out how we provide these services and how we address the demands of citizens. So those are the innovations: data, infrastructure and public service development. The second pillar is inclusion. We must take care of the digital divide, for sure. Even Korea experiences a digital divide, by gender, by region, by income and by education. So we need enhanced accessibility in AI adoption for public services. That might mean enhancing accessibility through AI-driven, hyper-personalized services in the public sector, or focusing on effective access for vulnerable people and isolated groups so that they can easily access public services. The other part of inclusion is capability. We need to educate and train public officers as well as citizens, because this changes people's lives and the way issues are handled. So inclusion can be separated into accessibility enhancement and education for capacity building and capability improvement. Those are two pillars of AI adoption in public services. The final element is investment. Adopting, developing and deploying AI services in the public sector requires huge resources. Innovation and inclusion require investment, so you should spend your money wisely and strategically in order to adopt AI.


Seong Ju Park: Thank you very much. So data, infrastructure and innovating how we approach public service design: these are hot topics for many of our delegates as well. And then there is your last point on investment. AI has put an even bigger spotlight on the need for governments to think strategically about how they spend public money on digital and AI-related systems and services. And I cannot agree with you more that we are on a long journey; I often say it is a moving target, there is a new target every day and no graduation. I think this holds for many governments around the world. So thank you for sharing the key policy issues. I understand that your work also includes elements to support safe and trustworthy use of AI. How can governments use AI in a responsible and trustworthy way? What are the key elements to avoid or mitigate the five risks that I mentioned earlier?


Jungwook Kim: Thank you. The question deals with the safety and security issues around AI in the work of public organizations and public bodies, and there are big challenges in dealing with those issues, especially for public services, because a lot of detailed personal data is accumulated and processed by public bodies. That means securing the safety of that data is the top priority. We need to protect citizens' rights to their personal data, not simply give access to personal data to anyone or to certain stakeholders; rather, you need explicit consent for utilizing and processing personal data. That is one way to address the safety issues around personal data and privacy. The second issue is security. Systems are vulnerable to hacking and other malicious interference, and open network infrastructure and mobile-based systems bring particular challenges. So the system itself should be secured, designed and maintained in a safe way. That is another challenge in dealing with safety. The third is AI safety and governance. As you said, these are moving targets, so we need agile measures to take care of AI safety issues. We have seen examples that breach privacy or harm citizens' safety, and there are many dialogues on these issues, but each and every country should establish AI safety and governance in the right manner, as a sound system, so that it can address these issues in real time and even in advance, to minimize the risks and uncertainty associated with AI implementation. These issues are not independent of our daily lives; rather, they have a great impact on citizens' daily lives at a large scale. So for public services, AI deployment should be clearly framed within each country's AI safety and governance arrangements. That is what we can say based on the Korean experience.


Seong Ju Park: Thank you very much. It is really important when it comes to data, especially sensitive data, because we found that some sectors, including social security, healthcare and justice, hold much more sensitive and personal information on their users, citizens and businesses. And I cannot agree with you more on the need for agile governance. Many governments have been talking about being more agile, but I think we have not reached that point yet; it will be important to have governance that allows proactive and timely measures to prevent or mitigate the risks that we see. Katarina, I will come to you. What concrete initiatives is Norway implementing to ensure that AI in government is safe and trustworthy?


Katarina de Brisis: Thank you. Let me start with a couple of reflections on the challenges of implementing AI. For us, one of the main challenges is the level of leadership and competence in government agencies, and that also underpins the trustworthy use of AI. We need managers in state agencies who understand both the opportunities and the risks associated with using AI, and we know that 60% of our state organizations already implement measures to increase employee competence. These are the people who actually work with and manage AI-based systems. And 43% have created internal guidelines for using AI. So this is about building a foundation within each public agency. Another important issue is dialogue between the employer, the management and the employee representatives, so that those people also feel they have a finger on the levers of how AI is being deployed and implemented in the agency. The second thing is access to data. I agree with Professor Kim that this is a crucial issue. We have a number of very high-quality registers and have been working for several years on opening up those data, but the opening must happen in a responsible way. That is why, in Norway at least, accessing personal data for the purpose of training and using AI systems requires a legal basis. You cannot just say, okay, I have this data, I pick it and train a system, and here we go. You have to have a legal basis, and procuring that legal basis may take time with the legislative branch. Once you have it, you can proceed, but within safety and security constraints. Another thing, of course, is having a legal framework in general. Norway is now working on implementing the EU AI Act, which will be our overarching framework for using AI in Norway. We aim to implement it on par with EU countries to create a level playing field. Already in 2020 we put forward a national strategy for AI, which set out seven principles for responsible and trustworthy AI. Those principles are further endorsed by our new digitalization strategy for Norway, published just recently in the fall of 2024. In that strategy our government has very ambitious goals: it wants public agencies to adopt AI at a very quick rate. Already in 2025, 80 percent of public agencies should use AI, and by 2030, 100 percent. So as you see, it is very ambitious.
But we are working quite diligently to make it possible, both within agencies, as I was describing, and at the national level, by investing in AI infrastructure. The government has invested, for example, 40 million kroner early on in developing foundational models in our own languages, that is Norwegian and the Sami languages, based on our societal values, so that we have systems that really reflect who we are and not the whole of the internet. The other investment we are looking at is our high-performance computing infrastructure, to enable us to actually develop and train AI at the scale that is needed. This infrastructure may be used by both public and private entities. For example, we have one startup called Digifarm that uses AI to help farmers predict what to sow, when and where, and so on, and this requires computing power, so this kind of infrastructure can provide it even to small startups and companies. And of course, in enforcing the AI Act, we are establishing a national enforcement structure: our national communications authority will oversee compliance with the AI Act. We will also establish AI Norway, which will be an arena for sharing experience and guidance and for testing systems in a regulatory sandbox, in a very safe environment, before deployment. We will collaborate with our data protection authority on this regulatory sandbox, and systems trained on personal data may also be tested there. So this is a rough outline of how we work at both the micro and macro levels to enable trustworthy and safe AI in Norway. Thank you.


Seong Ju Park: Thank you very much for sharing Norway's experience and what Norway has been doing. I remember one tool implemented by a country I will not name. It was supposed to support public sector officials in their job, but the users of the tool were not really trained in how to use it, and in the end what was supposed to be a supporting tool ended up making wrong decisions for the government. So I see how building employee capabilities and leadership around AI and digital is key to ensuring trustworthy use of AI. I will conclude our segment here. Thank you very much to you both, and I give the floor back to you, Mr. Moderator.


Moderator: Okay, thank you very much to all the speakers in segment two for the wonderful discussion, and I apologize to all the speakers in segment one that I cannot come back to you for a final comment. Now I will open the floor to the audience for any questions or comments on both segments of this open forum. So, no questions. I am sorry, the time has run out, so apologies for the time management, but I hope you enjoyed the discussion. If you have any questions, please contact the individual speakers directly. Let me also share that we will have another session on AI tomorrow morning at nine o'clock in the conference hall. Thank you very much to all the audience and to all the speakers. This session is closed. Thank you very much.



Marlon Avalos

Speech speed

116 words per minute

Speech length

951 words

Speech time

487 seconds

Costa Rica initiated the toolkit based on their national AI strategy experience, recognizing that developing countries need practical tools to implement OECD principles

Explanation

Costa Rica proposed the OECD AI principles implementation toolkit after experiencing challenges in developing their own national AI strategy. They recognized that while OECD principles provide strong ethical guidance, many countries in the Global South lack the tools and institutions to turn those principles into concrete actions.


Evidence

Costa Rica launched their national AI strategy in October with support from over 50 entities across government, academia, civil society, and private sector. They conducted national risk assessment and benchmarked against various international instruments including EU AI Act and U.S. AI Risk Management.


Major discussion point

OECD AI Principles Implementation Toolkit Development


Topics

Development | Legal and regulatory


Agreed with

– Lucia Rossi
– Moderator
– Seong Ju Park

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


International collaboration is essential for developing countries, requiring customization, learning, and evidence-based approaches

Explanation

Avalos emphasized that even politically stable and technically skilled countries like Costa Rica face challenges in AI policy development, making international collaboration crucial. The success of the toolkit depends on features that reflect local needs, processes that evolve over time, and metrics that show AI delivers value for people.


Evidence

Costa Rica’s active participation in OECD, GPAI, regional initiatives, and European programs provided the foundation for their strategy. The toolkit is now endorsed by several countries and entering regional co-creation phase with support from Japan, Korea, Italy, France, EU, and Slovakia.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Development | Legal and regulatory


Agreed with

– Anne Rachel
– Jibu Elias

Agreed on

International collaboration is essential for AI development, especially for developing countries


Technical connectivity issues demonstrate daily challenges that developing countries face in AI implementation

Explanation

During the session, Avalos experienced connection problems which he used as a real-time example of the infrastructure challenges that developing countries face every day. This technical difficulty illustrated the broader connectivity and infrastructure barriers that hinder AI adoption in the Global South.


Evidence

Avalos lost his internet connection during the presentation and had to reconnect, stating ‘this is a challenge that developing countries like us face every day, every time.’


Major discussion point

Challenges in AI Implementation for Developing Countries


Topics

Infrastructure | Development


Agreed with

– Anne Rachel

Agreed on

Infrastructure and connectivity challenges are major barriers for developing countries



Lucia Rossi

Speech speed

108 words per minute

Speech length

682 words

Speech time

377 seconds

The toolkit will provide self-assessment tools and region-specific guidance through co-creation workshops to help countries bridge AI divides

Explanation

The OECD AI principles implementation toolkit will be an online tool with two main components: a self-assessment that guides countries through areas to strengthen in AI governance and priorities to establish, followed by suggestions based on best practices from comparable regions. The toolkit emphasizes co-creation through regional workshops to understand challenges and resource needs.


Evidence

The toolkit will build on the OECD AI Policy Observatory repository and include regional workshops starting with one in Thailand supported by Japan with ASEAN countries, followed by workshops with African countries and Central/South American countries.


Major discussion point

OECD AI Principles Implementation Toolkit Development


Topics

Development | Legal and regulatory


Agreed with

– Marlon Avalos
– Moderator
– Seong Ju Park

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action



Jibu Elias

Speech speed

140 words per minute

Speech length

1209 words

Speech time

515 seconds

Responsible AI must be inclusive, accessible, and rooted in local values, focusing on communities most affected but least represented in AI development

Explanation

Elias argued that responsible AI adoption in emerging economies requires focusing on context and inclusion rather than just capacity. The approach should center on people, especially students, marginalized communities, women, and first-generation learners who are most affected by AI but least represented in building it.


Evidence

Mozilla’s Responsible Computing Challenge in India worked with students, academic faculties, women, tribal populations, and first-generation learners. They conducted workshops with 56 tribal women in Chintapalli using local language Telugu and participatory methods.


Major discussion point

Community-Led and Inclusive AI Development


Topics

Development | Human rights principles


Agreed with

– Anne Rachel

Agreed on

Community-centered and inclusive approaches are crucial for responsible AI development


Students and marginalized communities can create global public goods when empowered with ethical frameworks and open tools

Explanation

When provided with ethical frameworks and open-source tools, even first-year students can develop innovative AI solutions that address real community needs. These tools demonstrate that democratized digital leadership can produce globally relevant innovations rooted in local contexts.


Evidence

Examples include WebBeast (AI-powered accessibility widget by a first-year BCS student, now used by 30 websites globally and received Indian design patent), PhysioPlay (WhatsApp-based AI simulation for physiotherapy students), SpeakBoost (communication coaching platform), and TwinSage (personal finance chatbot for college students).


Major discussion point

Community-Led and Inclusive AI Development


Topics

Development | Sociocultural


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems

Explanation

Elias emphasized that in countries like India, trust in AI systems is not automatically given but must be earned through inclusive development processes. When communities are treated as co-creators rather than just end users, they don’t just adopt technology but transform it to meet their specific needs and contexts.


Evidence

The tribal women workshops in Chintapalli resulted in tech transformation powered by AI but grounded in cultural values, peer collaboration, and dignity-first design. The workshops proved that responsible AI begins with trust-building rather than just tool deployment.


Major discussion point

Community-Led and Inclusive AI Development


Topics

Development | Sociocultural


Agreed with

– Marlon Avalos
– Anne Rachel

Agreed on

International collaboration is essential for AI development, especially for developing countries



Anne Rachel

Speech speed

124 words per minute

Speech length

1723 words

Speech time

833 seconds

AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations

Explanation

African countries have significant opportunities to use AI for development challenges in key sectors, but face constraints in connectivity and need time to build workforce capacity. The young population (65% under 25 in Niger) represents both an opportunity and a challenge requiring patient capacity development.


Evidence

Niger’s smart villages program started with telemedicine for skin diseases, students developed an oximeter for melanated skin during COVID, and various AI applications in precision farming, agroforestry, personalized learning, and voice recognition software for local languages.


Major discussion point

AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Topics

Development | Infrastructure


Agreed with

– Jibu Elias

Agreed on

Community-centered and inclusive approaches are crucial for responsible AI development


Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating connectivity challenges

Explanation

Infrastructure limitations significantly constrain AI adoption across Africa, with low broadband penetration rates and geographic challenges for landlocked countries. These connectivity issues exacerbate digital divides and limit access to AI technologies and services.


Evidence

Specific statistics: 22% broadband access rate across Africa, 16 landlocked countries in the region, and connectivity infrastructure costs are particularly high for these geographic constraints.


Major discussion point

Challenges in AI Implementation for Developing Countries


Topics

Infrastructure | Development


Agreed with

– Marlon Avalos

Agreed on

Infrastructure and connectivity challenges are major barriers for developing countries


Data scarcity and bias affect AI systems, with only 2% of African-generated data used locally and facial recognition systems performing poorly on African populations

Explanation

African countries face significant data challenges where most locally generated data is managed by global platforms and not shared back with local institutions. Additionally, many AI systems trained on non-African data perform poorly for African users, creating bias and effectiveness issues.


Evidence

Only 2% of data generated on the African continent is used locally, and facial recognition systems globally are trained on non-African data and perform poorly on African people.


Major discussion point

Challenges in AI Implementation for Developing Countries


Topics

Human rights principles | Development


Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding

Explanation

Anne Rachel emphasized the African saying ‘Europeans have watches, we have time’ to advocate for patient, context-sensitive AI development. Rushing into AI implementation without proper understanding of local contexts and needs keeps countries behind rather than advancing them.


Evidence

The African proverb ‘Europeans have watches, we have time’ and emphasis on the need for everyone to be part of the discussion and brought to the table for trustworthy digital transformation.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Development | Sociocultural


Agreed with

– Marlon Avalos
– Jibu Elias

Agreed on

International collaboration is essential for AI development, especially for developing countries


Disagreed with

– Katarina de Brisis

Disagreed on

Pace and approach to AI implementation



Katarina de Brisis

Speech speed

120 words per minute

Speech length

1219 words

Speech time

606 seconds

Norway has successfully implemented AI in healthcare for X-ray analysis, tax administration for fraud detection, and police transcription services, showing practical benefits

Explanation

Norway has deployed AI across multiple government sectors with measurable impacts on efficiency and citizen services. These implementations demonstrate concrete benefits including reduced waiting times for patients, increased detection rates for tax fraud, and time savings for police investigations.


Evidence

Vestreviken hospital’s AI x-ray analysis saved 2000 patients 79 days of waiting time; tax administration AI increased detection rates from 12% to 85% and generated 110 million kroner in additional revenue; police use AI for automatic transcription of interrogations.


Major discussion point

AI Applications in Government Services


Topics

Economic | Legal and regulatory


70% of Norwegian state agencies use AI in daily work, but municipalities and benefit assessment tools need further development

Explanation

While AI adoption is widespread among state agencies for tasks like job advertisements and case processing, there’s still significant potential for expansion, particularly at the municipal level and in developing better tools to assess AI benefits across different sectors and government levels.


Evidence

Survey of 200 state agencies showed 70% use AI daily, mostly generative AI for designing job advertisements, case processing, analytical work, and recruitment procedures. Norway has 400+ municipalities with much greater potential for AI adoption.


Major discussion point

AI Applications in Government Services


Topics

Economic | Development


Leadership competence, legal frameworks, and employee training are crucial for trustworthy AI implementation in government

Explanation

Successful AI implementation requires managers who understand both opportunities and risks, proper legal basis for data access, and comprehensive employee training. Norway emphasizes building competence within agencies and ensuring dialogue between management and employee representatives.


Evidence

60% of state organizations implement measures to increase employee competence, 43% created internal AI guidelines, and Norway requires legal basis for accessing personal data for AI training purposes.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Legal and regulatory | Development


Agreed with

– Jungwook Kim
– Seong Ju Park

Agreed on

Data security and governance are critical for trustworthy AI in government


Norway is implementing the EU AI Act and investing in Norwegian language foundational models and computing infrastructure

Explanation

Norway is creating a comprehensive AI governance framework by implementing the EU AI Act alongside national strategies and investments. The government has ambitious goals for AI adoption across public agencies while building supporting infrastructure including language-specific models and computing resources.


Evidence

Norway aims for 80% of public agencies to use AI by 2025 and 100% by 2030; invested 40 million kroner in Norwegian and Sami language foundational models; establishing AI Norway for experience sharing and regulatory sandbox testing.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Legal and regulatory | Infrastructure


Disagreed with

– Anne Rachel

Disagreed on

Pace and approach to AI implementation



Moderator

Speech speed

99 words per minute

Speech length

1453 words

Speech time

874 seconds

Japan’s leadership in proposing OECD AI principles in 2016 and current efforts to make comprehensive principles into practical policies

Explanation

Japan initiated international discussions on AI principles at the OECD in 2016, leading to the comprehensive OECD AI principles. Now Japan is working with other countries to translate these high-level principles into practical policies and actionable guidance for governments and stakeholders.


Evidence

Japan proposed international discussion to OECD on AI principles in 2016, which became the foundation for the OECD AI principles. Japan is now collaborating with Costa Rica, Korea and others, backed by OECD Secretariat, to make the comprehensive principles into practical policies.


Major discussion point

OECD AI Principles Implementation Toolkit Development


Topics

Legal and regulatory | Development


Agreed with

– Marlon Avalos
– Lucia Rossi
– Seong Ju Park

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action



Jungwook Kim

Speech speed

124 words per minute

Speech length

887 words

Speech time

428 seconds

Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building

Explanation

Kim outlined Korea’s approach to AI adoption in government through three key pillars: innovation (requiring changes in data formats, infrastructure, and citizen-centric services), inclusion (addressing digital divides and enhancing accessibility), and investment (strategic resource allocation for AI development and deployment).


Evidence

Korea is ranked as one of the leading countries in OECD Digital Government Index. The approach focuses on machine-readable data, innovative infrastructure adaptation, and brand new citizen-centric AI public services, while addressing digital divides by gender, region, income, and education.


Major discussion point

AI Applications in Government Services


Topics

Development | Economic


Data security, system security, and agile AI governance are essential for protecting citizens’ personal data and rights

Explanation

Kim emphasized that public sector AI use requires top priority on data security due to the accumulation of detailed personal data in government systems. This includes securing citizens’ rights to their personal data, protecting against system vulnerabilities, and establishing agile governance measures to address AI safety issues in real-time.


Evidence

Public bodies process a lot of detailed personal data requiring explicit consent for utilization, systems are vulnerable to hacking and malicious functions, and Korea has established AI safety and governance measures based on their experience with privacy breaches and citizen safety issues.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Human rights principles | Legal and regulatory


Agreed with

– Katarina de Brisis
– Seong Ju Park

Agreed on

Data security and governance are critical for trustworthy AI in government


Investment in AI adoption requires strategic resource allocation across innovation, inclusion, and infrastructure development

Explanation

Kim argued that successful AI adoption in government requires substantial and strategic investment across multiple areas. The three pillars of innovation, inclusion, and investment are interconnected, requiring governments to spend resources wisely and strategically to achieve effective AI deployment in public services.


Evidence

Korea’s experience shows that AI adoption requires huge resources to develop and deploy AI services in the public sector, and strategic investment is needed across data development, infrastructure adaptation, and capability building.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Economic | Development



Seong Ju Park

Speech speed

125 words per minute

Speech length

2095 words

Speech time

1003 seconds

AI use cases are unevenly distributed across government functions, with emphasis on automation and personalization of processes

Explanation

OECD research analyzing 200 AI use cases across 11 government functions found uneven distribution, with policy functions most represented being those in the public eye. Over half of the use cases focus on automating, streamlining, and personalizing government processes and services, particularly in justice, public services, and civic participation.


Evidence

Analysis of 200 use cases across 11 government functions covering policy functions, key government processes, and service and justice. Slightly more than half seek automation and personalization, while four out of 10 use cases enhance decision-making and forecasting.


Major discussion point

AI Applications in Government Services


Topics

Legal and regulatory | Economic


AI in government carries higher risks than private sector use, including ethical, operational, exclusion, and public resistance risks

Explanation

Government AI use differs significantly from private sector applications due to higher stakes and potential for serious harm to individuals and society. These risks can undermine public trust in government, legitimacy of AI use, and democratic values, requiring continuous consideration of potential future risks.


Evidence

Five identified risks: ethical risk, operational risk, exclusion risk, public resistance, and widened gaps between public and private sector capacities. Government AI use has potential dangers that could seriously harm individuals’ lives and society as a whole.


Major discussion point

Trustworthy AI Governance and Risk Management


Topics

Human rights principles | Legal and regulatory


Agreed with

– Katarina de Brisis
– Jungwook Kim

Agreed on

Data security and governance are critical for trustworthy AI in government


The OECD framework provides guidance on stakeholder engagement, enabling environments, and guardrails for responsible AI use

Explanation

The OECD has developed an evolving framework organized around three sections to support government AI efforts: level of engagement (involving different stakeholders), enablers (policy actions for solid enabling environment), and guardrails (policy levers for responsible and trustworthy AI use).


Evidence

The framework includes stakeholder engagement from public, private, academia, and users; enablers covering governance, capabilities, collaborations and partnerships; and guardrails ranging from soft laws and guidance to legislation and oversight bodies.


Major discussion point

International Cooperation and Knowledge Sharing


Topics

Legal and regulatory | Development


Agreed with

– Marlon Avalos
– Lucia Rossi
– Moderator

Agreed on

Practical implementation tools and frameworks are needed to translate AI principles into action


Agreements

Agreement points

International collaboration is essential for AI development, especially for developing countries

Speakers

– Marlon Avalos
– Anne Rachel
– Jibu Elias

Arguments

International collaboration is essential for developing countries, requiring customization, learning, and evidence-based approaches


Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems


Summary

All three speakers from developing countries emphasized that successful AI implementation requires international cooperation, context-sensitive approaches, and community involvement rather than top-down or rushed implementations


Topics

Development | Legal and regulatory


Infrastructure and connectivity challenges are major barriers for developing countries

Speakers

– Marlon Avalos
– Anne Rachel

Arguments

Technical connectivity issues demonstrate daily challenges that developing countries face in AI implementation


Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating connectivity challenges


Summary

Both speakers highlighted infrastructure limitations as fundamental barriers to AI adoption, with Avalos experiencing connectivity issues during the session and Anne Rachel providing specific statistics about African connectivity challenges


Topics

Infrastructure | Development


Community-centered and inclusive approaches are crucial for responsible AI development

Speakers

– Jibu Elias
– Anne Rachel

Arguments

Responsible AI must be inclusive, accessible, and rooted in local values, focusing on communities most affected but least represented in AI development


AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Summary

Both speakers emphasized the importance of involving local communities, especially marginalized groups, in AI development and ensuring that solutions address real local needs and contexts


Topics

Development | Human rights principles


Data security and governance are critical for trustworthy AI in government

Speakers

– Katarina de Brisis
– Jungwook Kim
– Seong Ju Park

Arguments

Leadership competence, legal frameworks, and employee training are crucial for trustworthy AI implementation in government


Data security, system security, and agile AI governance are essential for protecting citizens’ personal data and rights


AI in government carries higher risks than private sector use, including ethical, operational, exclusion, and public resistance risks


Summary

All three speakers agreed that government AI implementation requires robust governance frameworks, data protection measures, and comprehensive risk management approaches due to the sensitive nature of government data and services


Topics

Human rights principles | Legal and regulatory


Practical implementation tools and frameworks are needed to translate AI principles into action

Speakers

– Marlon Avalos
– Lucia Rossi
– Moderator
– Seong Ju Park

Arguments

Costa Rica initiated the toolkit based on their national AI strategy experience, recognizing that developing countries need practical tools to implement OECD principles


The toolkit will provide self-assessment tools and region-specific guidance through co-creation workshops to help countries bridge AI divides


Japan’s leadership in proposing OECD AI principles in 2016 and current efforts to make comprehensive principles into practical policies


The OECD framework provides guidance on stakeholder engagement, enabling environments, and guardrails for responsible AI use


Summary

Multiple speakers agreed on the need for practical tools and frameworks to help countries implement high-level AI principles, with the OECD toolkit representing a collaborative effort to bridge the gap between principles and practice


Topics

Legal and regulatory | Development


Similar viewpoints

Both speakers emphasized the potential of young people and marginalized communities to drive AI innovation when given proper support and tools, highlighting examples of student-led innovations and the importance of capacity building for young populations

Speakers

– Jibu Elias
– Anne Rachel

Arguments

Students and marginalized communities can create global public goods when empowered with ethical frameworks and open tools


AI opportunities exist in healthcare, agriculture, and education, but require addressing infrastructure constraints and capacity building for young populations


Topics

Development | Sociocultural


Both speakers from developed countries shared experiences of successful government AI implementations with measurable benefits, emphasizing the importance of systematic approaches to AI adoption across multiple government sectors

Speakers

– Katarina de Brisis
– Jungwook Kim

Arguments

Norway has successfully implemented AI in healthcare for X-ray analysis, tax administration for fraud detection, and police transcription services, showing practical benefits


Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building


Topics

Economic | Development


Both speakers highlighted how AI systems often fail to serve non-Western populations effectively due to bias and lack of local data representation, emphasizing the need for locally developed and culturally appropriate AI solutions

Speakers

– Anne Rachel
– Jibu Elias

Arguments

Data scarcity and bias affect AI systems, with only 2% of African-generated data used locally and facial recognition systems performing poorly on African populations


Trust is earned through community co-creation rather than just end-user adoption, requiring locally rooted and people-centered ecosystems


Topics

Human rights principles | Development


Unexpected consensus

The importance of taking time for proper AI implementation rather than rushing

Speakers

– Anne Rachel
– Jungwook Kim

Arguments

Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding


Korea’s AI adoption requires innovation in data, infrastructure, and service delivery, plus inclusion through accessibility and capability building


Explanation

It was unexpected to see both a developing country representative (Anne Rachel) and a developed country representative (Jungwook Kim) agree on the importance of patient, gradual AI implementation. This consensus suggests that even advanced countries recognize AI adoption as a long-term journey requiring careful planning rather than rapid deployment


Topics

Development | Sociocultural


The universal challenge of measuring AI benefits in government

Speakers

– Katarina de Brisis
– Seong Ju Park

Arguments

70% of Norwegian state agencies use AI in daily work, but municipalities and benefit assessment tools need further development


AI use cases are unevenly distributed across government functions, with emphasis on automation and personalization of processes


Explanation

Despite Norway’s advanced AI implementation, both speakers acknowledged that even leading countries struggle with measuring AI benefits and achieving even distribution across government functions. This suggests that assessment and scaling challenges are universal, not just issues for developing countries


Topics

Economic | Legal and regulatory


Overall assessment

Summary

The speakers demonstrated strong consensus on several key areas: the need for international cooperation and practical implementation tools, the importance of inclusive and community-centered approaches, the critical role of data governance and security in government AI, and the recognition that AI implementation is a gradual process requiring patience and proper planning. There was also agreement on the challenges of infrastructure, capacity building, and the need for context-sensitive solutions.


Consensus level

High level of consensus with complementary perspectives from different regions and development stages. The agreement spans both technical and social aspects of AI implementation, suggesting a mature understanding of AI governance challenges across different contexts. This consensus provides a strong foundation for international cooperation and the development of practical tools like the OECD AI principles implementation toolkit.


Differences

Different viewpoints

Pace and approach to AI implementation

Speakers

– Anne Rachel
– Katarina de Brisis

Arguments

Taking time to develop context-appropriate solutions is more important than rushing implementation without proper understanding


Norway is implementing the EU AI Act and investing in Norwegian language foundational models and computing infrastructure


Summary

Anne Rachel advocates for a patient, time-intensive approach emphasizing the African saying ‘Europeans have watches, we have time’ and warns against rushing AI implementation without proper context understanding. In contrast, Katarina presents Norway’s very ambitious timeline with 80% of public agencies using AI by 2025 and 100% by 2030, representing a rapid deployment approach.


Topics

Development | Sociocultural


Unexpected differences

Infrastructure challenges as demonstration vs. systematic barrier

Speakers

– Marlon Avalos
– Anne Rachel

Arguments

Technical connectivity issues demonstrate daily challenges that developing countries face in AI implementation


Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating connectivity challenges


Explanation

While both speakers address infrastructure challenges, Avalos uses his technical difficulties as a real-time demonstration of connectivity issues, suggesting these are manageable obstacles that can be worked around. Anne Rachel presents infrastructure limitations as fundamental systematic barriers requiring substantial structural changes. This represents an unexpected difference in framing the same core issue – whether infrastructure challenges are symptomatic problems or foundational barriers to AI adoption.


Topics

Infrastructure | Development


Overall assessment

Summary

The discussion shows remarkably high consensus on core principles (inclusion, context-sensitivity, international cooperation) but reveals subtle yet significant differences in implementation philosophy and pace


Disagreement level

Low to moderate disagreement level with high strategic implications. While speakers largely agree on goals, their different approaches to timing, community engagement, and implementation strategies could lead to significantly different outcomes in AI policy development. The disagreements are more about methodology and pace rather than fundamental objectives, but these differences could be crucial for policy effectiveness and adoption success in different regional contexts.


Takeaways

Key takeaways

The OECD AI Principles Implementation Toolkit, initiated by Costa Rica, will provide practical self-assessment tools and region-specific guidance to help countries implement AI principles through co-creation workshops


Responsible AI development must be inclusive, locally-rooted, and community-centered, with marginalized communities serving as co-creators rather than just end-users


Developing countries face significant challenges including infrastructure limitations, connectivity issues (only 22% of Africans have broadband access), data scarcity, and fragmented policy frameworks


AI applications in government services show practical benefits, with Norway demonstrating success in healthcare, tax administration, and police services, while 70% of Norwegian state agencies already use AI


Trustworthy AI governance requires leadership competence, legal frameworks, employee training, and addressing higher risks in government use compared to private sector applications


International cooperation and knowledge sharing through regional workshops and platforms are essential for bridging AI divides and promoting inclusive AI ecosystems


AI implementation is a long journey with moving targets, requiring strategic investment in innovation, inclusion, and infrastructure development


Resolutions and action items

OECD will launch a comprehensive report on governing with AI and create a dedicated hub for AI in the public sector on oecd.ai


Regional co-creation workshops will be organized, starting with ASEAN countries in Thailand, followed by workshops with African, Central American, and South American countries


Norway aims for 80% of public agencies to use AI by 2025 and 100% by 2030, with investments in Norwegian language foundational models and computing infrastructure


Norway will implement the EU AI Act and establish AI Norway as an arena for sharing experience and regulatory sandbox testing


OECD will conduct a global data collection exercise on AI policies and use cases to be presented through the OECD AI Policy Observatory


Unresolved issues

Many AI use cases remain at piloting stage with governments struggling to scale pilots into wider systems or services


Governments need better tools and methodologies to assess the costs and benefits of AI implementation in the public sector


Inadequate data, skills, and infrastructure in the public sector continue to constrain AI adoption


The need for more actionable guidelines and navigation of rigid regulatory environments remains challenging


Capacity building and workforce development cannot keep pace with the rapid advancement of AI technology


Data bias issues persist, with facial recognition systems performing poorly on African populations and only 2% of African-generated data being used locally


Suggested compromises

Taking time to develop context-appropriate solutions rather than rushing implementation without proper understanding of local needs


Balancing ambitious AI adoption goals with the need for proper training, legal frameworks, and safety measures


Using modular and flexible guidance approaches that can adapt to different resource settings and local contexts


Combining international best practices with local innovation and community-led initiatives


Establishing public-private partnerships to share the burden of AI development and implementation costs


Thought provoking comments

If even a country like Costa Rica, politically stable, technically skilled and internationally connected, faces these challenges, then surely other countries like ours will face them too.

Speaker

Marlon Avalos


Reason

This comment was particularly insightful because it reframed the AI development challenge from a Global South perspective. Rather than positioning Costa Rica as disadvantaged, Avalos acknowledged their relative strengths while emphasizing that if even well-positioned countries struggle, the challenges are systemic rather than just resource-based. This created a foundation for genuine international collaboration rather than a donor-recipient dynamic.


Impact

This comment established the legitimacy and urgency of the OECD AI Principles Implementation Toolkit initiative. It shifted the discussion from theoretical policy frameworks to practical, experience-based solutions and set the tone for other speakers to share their ground-level challenges and innovations.


Don’t just ask who builds AI, ask whose future is it building? Because in countries like ours, trust is not a given, it’s earned. And when communities are trusted as co-creators, not just end users, they don’t just adopt technology, they transform it.

Speaker

Jibu Elias


Reason

This comment was profoundly thought-provoking because it challenged the fundamental approach to AI development and deployment. It shifted focus from technical capabilities to human agency and democratic participation in technology design. The distinction between ‘end users’ and ‘co-creators’ reframes the entire AI governance conversation around empowerment rather than consumption.


Impact

This comment elevated the entire discussion by introducing a philosophical framework that connected all subsequent speakers’ examples. It provided a lens through which the audience could evaluate all AI initiatives – whether they truly involve communities as co-creators or merely as beneficiaries.


We do say Europeans have watches, we have time. So I’m just saying this to plead for, you know, taking the time to do things, because rushing into doing things that are not geared to the context just keeps us behind more than anything, because people do not understand what it is we’re trying to do or where is it that we’re trying to get to.

Speaker

Anne Rachel Ng


Reason

This culturally grounded metaphor was exceptionally insightful because it challenged the prevailing narrative of ‘catching up’ in AI development. It reframed the perceived disadvantage of slower adoption as potentially advantageous, emphasizing that contextual appropriateness and community understanding are more valuable than speed. This perspective counters the technology determinism often present in AI discussions.


Impact

This comment provided a powerful counter-narrative to the urgency often associated with AI adoption. It influenced the discussion by validating deliberate, community-centered approaches and gave other speakers permission to discuss the importance of local context and inclusive processes over rapid deployment.


We found that some functions face particular barriers or complexities, such as stricter rules on data access and sharing, and stricter requirements for thorough audit trails in public integrity.

Speaker

Seong Ju Park


Reason

This observation was insightful because it revealed that the uneven distribution of AI use cases in government isn’t just about technical capacity or resources, but about institutional and regulatory complexity. It highlighted how governance structures themselves can create barriers to AI adoption, suggesting that policy reform may be as important as technical development.


Impact

This comment shifted the second segment’s focus from success stories to implementation challenges, preparing the ground for more nuanced discussions about the barriers governments face and the need for adaptive governance frameworks.


AI has changed many aspects of our lives, how we communicate, how we seek information. And this is affecting governments as well. This is accelerating the digital transformation of the public sector, changing how governments work, how governments design and deliver policies and services. And it has also changed the expectations and needs of the citizens and businesses that they serve.

Speaker

Seong Ju Park


Reason

This comment was thought-provoking because it positioned AI not just as a tool for government efficiency, but as a transformative force that changes the fundamental relationship between governments and citizens. It suggested that AI adoption creates new expectations and needs, implying that governments must evolve not just their tools but their entire approach to public service.


Impact

This framing influenced the entire second segment by establishing that AI in government isn’t just about automation or efficiency gains, but about fundamental transformation of governance relationships. It set up the subsequent discussions about trust, accountability, and citizen engagement.


So it’s moving targets. Then we need agile measures to take care of the AI safety issues… those ones should be narrated clearly in the AI safety and governance in one specific country.

Speaker

Jungwook Kim


Reason

This comment was insightful because it acknowledged the fundamental challenge of governing rapidly evolving technology while emphasizing the need for country-specific approaches. The ‘moving targets’ metaphor captured the dynamic nature of AI governance challenges, while the emphasis on national narratives recognized that governance solutions must be culturally and institutionally grounded.


Impact

This comment reinforced the toolkit approach discussed in the first segment by validating the need for flexible, adaptive governance frameworks rather than one-size-fits-all solutions. It connected the theoretical framework discussions with practical implementation challenges.


Overall assessment

These key comments fundamentally shaped the discussion by challenging conventional narratives about AI development and governance. Rather than focusing solely on technical capabilities or resource gaps, the speakers introduced themes of community agency, cultural context, institutional complexity, and adaptive governance. The comments created a progression from recognizing shared challenges (Avalos) to reimagining development approaches (Elias, Ng) to understanding implementation complexities (Park, Kim). This elevated the conversation beyond typical policy discussions to address fundamental questions about power, participation, and the purpose of AI in society. The speakers’ insights collectively argued for a more democratic, contextual, and deliberate approach to AI governance that prioritizes community needs and local contexts over rapid technological adoption.


Follow-up questions

How can we better measure the cost and benefits of AI implementation in the public sector?

Speaker

Katarina de Brisis


Explanation

Many governments struggle to make business cases for scaling up AI efforts due to unknown costs and benefits, making it difficult for policymakers to justify investments


How can we develop better tools and methodologies to assess benefits from AI across various sectors and government levels?

Speaker

Katarina de Brisis


Explanation

While there are documented cases of AI benefits, there’s a need for systematic methodological frameworks to evaluate AI impact across different government functions


How can governments scale AI pilots into wider systems and services?

Speaker

Seong Ju Park


Explanation

Many AI use cases in government remain at piloting stage and struggle to scale up, representing a significant implementation challenge


How can we ensure AI systems work effectively for diverse populations, particularly addressing bias in facial recognition and medical devices for people of different ethnicities?

Speaker

Anne Rachel Ng


Explanation

Current AI systems often perform poorly on African populations due to training on non-representative data, as demonstrated by the oximeter example during COVID-19


How can we develop more actionable guidelines for AI implementation in government?

Speaker

Seong Ju Park


Explanation

There is considerable room for improvement in providing practical, implementable guidance rather than high-level principles


How can we address the infrastructure challenges, particularly for landlocked countries with limited broadband access?

Speaker

Anne Rachel Ng


Explanation

Only 22% of Africans have broadband access, and 16 African countries are landlocked, creating significant connectivity barriers for AI adoption


How can we better coordinate cross-ministerial collaboration for AI policy implementation?

Speaker

Anne Rachel Ng


Explanation

AI implementation requires coordination across multiple government ministries (finance, interior, defense, data protection) but this coordination is often lacking


How can we develop AI governance frameworks that are agile enough to keep pace with rapidly evolving AI technology?

Speaker

Jungwook Kim


Explanation

AI is a moving target requiring real-time and proactive governance measures, but current governance structures may not be agile enough


How can we ensure inclusive AI development that truly involves marginalized communities as co-creators rather than just end users?

Speaker

Jibu Elias


Explanation

Trust in AI systems requires involving communities in the development process, not just as recipients of the technology


How can we address the capacity building challenge when the pace of AI development exceeds the speed at which human capacity can be developed?

Speaker

Anne Rachel Ng


Explanation

With very young populations in developing countries, there’s a mismatch between the speed of AI advancement and the time needed to build adequate workforce capacity


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.