Open Forum #70: The Future of DPI – Unpacking the Open Source AI Model

25 Jun 2025 10:45h - 11:45h

Open Forum #70: The Future of DPI – Unpacking the Open Source AI Model

Session at a glance

Summary

This discussion at the 2025 Internet Governance Forum in Oslo focused on the future of digital public infrastructure (DPI) and artificial intelligence, examining how open source AI can enhance global DPI protocols. The session was moderated by Judith Vega from the World Economic Forum and featured panelists from PayPal, Meta, and Emisi3Dear, representing perspectives from financial services, social media platforms, and African immersive technology development.


The conversation established that DPI has become fundamental to modern society through digital identity, payment systems, and data exchange, with most innovations coming from the private sector. Meta’s Melinda Claybaugh explained how their open source Llama AI models enable developers worldwide to create customized solutions for local communities, making cutting-edge technology freely accessible. She highlighted practical applications like AI-powered glasses that can translate languages and provide real-time information, demonstrating AI’s integration into daily life.


PayPal’s Larry Wade emphasized AI’s role as an optimization layer for financial services, particularly in customer onboarding, fraud prevention, and enhancing transaction security. He stressed the importance of open source protocols for attracting talent, avoiding winner-picking, and building trust with regulators through transparency. Judith Okonkwo from Nigeria discussed how open source AI enables experimentation and localized solutions, citing examples like VR applications for autism awareness and educational tools for resource-constrained environments.


Key challenges identified included the need for localized datasets, digital literacy, infrastructure development, and skills capacity building. The panelists emphasized that successful implementation requires strong public-private partnerships, with private companies taking responsibility to educate regulators about complex technologies. The discussion concluded with recognition that while AI offers tremendous potential for enhancing DPI globally, achieving trustworthy, explainable, and inclusive deployment requires continued collaboration between all stakeholders to ensure these technologies serve the public good.


Key points

## Major Discussion Points:


– **Open Source AI Integration with Digital Public Infrastructure (DPI)**: The panel explored how open source AI can enhance the three core components of DPI – digital identity, digital payments, and data exchange – with emphasis on making these systems globally scalable, interoperable, and secure.


– **Private Sector Applications and Innovation**: Discussion of real-world implementations, including Meta’s Llama open source language models being used for scientific research and local language applications, PayPal’s use of AI for fraud detection and customer onboarding, and the development of AI-integrated hardware like smart glasses.


– **Public-Private Partnership Requirements**: Strong emphasis on the need for collaboration between private companies and regulators, with private sector taking responsibility to educate policymakers about complex technologies while governments provide appropriate regulatory frameworks that enable innovation.


– **Regional Barriers and Localization Challenges**: Examination of obstacles to AI adoption across different regions, particularly in Africa, including infrastructure limitations, skills gaps, need for localized datasets, and the importance of digital literacy for broader public participation.


– **Trust and Explainability in AI Systems**: Discussion of the tension between AI’s pattern recognition capabilities and the need for transparent, explainable decision-making, especially in government applications and financial services where accountability to citizens is paramount.


## Overall Purpose:


The discussion aimed to explore how open source AI can be leveraged to improve digital public infrastructure globally, examining the roles of both private and public sectors in ensuring these technologies are accessible, trustworthy, and beneficial for society while addressing implementation challenges across different regions and jurisdictions.


## Overall Tone:


The discussion maintained a collaborative and optimistic tone throughout, with panelists demonstrating mutual respect and building on each other’s points. The conversation was technical yet accessible, with speakers acknowledging both the exciting possibilities and serious challenges of AI integration. The tone remained constructive even when addressing complex regulatory and ethical concerns, emphasizing shared responsibility and the need for continued cooperation between all stakeholders.


Speakers

**Speakers from the provided list:**


– **Judith Vega** – Moderator, Specialist at the World Economic Forum working on governance and policy for technologies


– **Larry Wade** – Global Head of Compliance for PayPal’s Blockchain, Crypto, and Digital Currencies, offering expertise in financial innovation and regulatory frameworks


– **Melinda Claybaugh** – Policy Privacy Director at Meta, brings experience in privacy and platform governance


– **Judith Okonkwo** – Founder of Emisi3Dear, pioneer in immersive technologies and open innovation, especially across the African continent (joined remotely)


– **Agustina Callegari** – Lead for the Global Coalition of Digital Safety at the World Economic Forum, serving as online moderator


– **Audience** – Various audience members who asked questions during the Q&A session


**Additional speakers:**


– **Marin** – Researcher at IT4Change, an NGO that works at the intersections of digital technology and social justice


– **Haidel Alvestram** – (Role/expertise not specified)


– **Satish** – Has a long background in open source; presently part of ICANN and the DotAsia organization


– **Knut Vatne** – Representative from the Norwegian Tax Administration


– **Daniel Dobrowolski** – Head of governance and trust at the World Economic Forum (mentioned as being present at the table)


Full session report

# Comprehensive Report: Open Source AI and Digital Public Infrastructure – Internet Governance Forum 2025


## Executive Summary


This discussion at the 2025 Internet Governance Forum in Oslo examined the intersection of open source artificial intelligence and digital public infrastructure (DPI), exploring how these technologies can enhance global digital systems whilst addressing implementation challenges across different regions and sectors. The session brought together diverse perspectives from technology companies, policy experts, and government representatives to discuss the future of AI-enabled public digital services.


The conversation established that DPI has become fundamental to modern society through three core components: digital identity, payment systems, and data exchange. The panelists discussed both opportunities and challenges in implementing open source AI solutions, with particular attention to regional barriers, public-private partnerships, and the ongoing tension between AI capabilities and accountability requirements.


## Participants and Perspectives


The discussion was moderated by **Judith Vega**, a specialist at the World Economic Forum working on governance and policy for technologies, with **Agustina Callegari** serving as online moderator. The panel featured three primary speakers:


**Flavia Alvez**, representing Meta, provided insights into how large technology platforms are approaching open source AI development. She explained Meta’s strategy with their Llama AI models, which are made available to developers worldwide to create customized solutions for local communities.


**Larry Wade**, Global Head of Compliance for PayPal’s Blockchain, Crypto, and Digital Currencies, offered a financial services perspective on AI integration. He emphasized AI’s role as an optimization layer for customer onboarding, fraud prevention, and transaction security enhancement.


**Judith Okonkwo**, Founder of Emisi3Dear and a pioneer in immersive technologies across the African continent, participated remotely to discuss regional implementation challenges and opportunities for open source AI in resource-constrained environments.


The audience included several participants who contributed to the discussion, including **Marin** from IT4Change, **Satish** from ICANN and DotAsia organisation, **Knut Vatne** from the Norwegian Tax Administration, and **Haidel Alvestram**.


## Core Discussion Themes


### Open Source AI and Accessibility


**Flavia Alvez** explained Meta’s approach to open source AI, describing how their Llama models enable developers worldwide to create customized solutions without requiring massive computational resources. She highlighted practical applications including AI-powered smart glasses that provide real-time translation services and scientific research applications where open source models are accelerating discoveries in health and education.
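To make concrete what "building on an open-weight model" can look like in practice, the minimal sketch below (not presented at the session) shows the general pattern of loading an openly licensed, instruction-tuned Llama-family checkpoint and prompting it for a locally useful task. It assumes the Hugging Face `transformers` library; the model ID and the translation prompt are illustrative placeholders rather than details from the discussion.

```python
# Illustrative sketch only: assumes the Hugging Face `transformers` library is
# installed and that the (gated) open-weight checkpoint below has been licensed
# and downloaded. Any other openly licensed instruction-tuned model would work
# the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder choice of open-weight model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# A locally relevant task of the kind mentioned in the session: translating a sign.
messages = [{"role": "user",
             "content": "Translate this Norwegian sign into English: 'Inngang kun for ansatte'"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```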


However, the definition of “open source AI” became a point of contention. **Marin** from IT4Change challenged whether AI is genuinely democratized when foundational models remain controlled by a few major actors, questioning the broader definitions of “open source” used by AI companies. **Satish** noted that open source AI encompasses different components—code, model weights, and datasets—each with varying levels of openness.


### Regional Implementation and Barriers


**Judith Okonkwo** provided crucial insights into practical challenges of implementing AI technologies across different regions, particularly in Africa. She identified four major barriers: skills gaps, capacity constraints, infrastructure limitations, and the critical need for localized datasets.


She shared specific examples of her work, including Autism VR initiatives and VR for Schools programs that demonstrate how open source AI can be adapted to address regional challenges despite infrastructure limitations. The need for localized datasets emerged as particularly critical, as AI models trained on datasets from one region may not perform effectively in different cultural, linguistic, or economic contexts.
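To illustrate what folding a localized dataset into an open-weight model can involve, the sketch below shows one common pattern: parameter-efficient (LoRA) fine-tuning with the Hugging Face `transformers`, `peft`, and `datasets` libraries. It is a minimal illustration only; the model ID, the two example records, and the hyperparameters are placeholders and do not come from the session.

```python
# Minimal LoRA fine-tuning sketch (illustrative placeholders throughout).
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# A tiny stand-in for a locally collected question-answer dataset in a local language.
records = [
    {"text": "Question (in local language): How do I reset my digital ID password? Answer: ..."},
    {"text": "Question (in local language): How do I register a small business? Answer: ..."},
]
dataset = Dataset.from_list(records).map(
    lambda example: tokenizer(example["text"], truncation=True, max_length=256)
)

# Attach small trainable LoRA adapters instead of updating all model weights.
peft_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="local-adapter", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                          # adapts the model to the local data
model.save_pretrained("local-adapter")   # the small adapter can then be shared openly
```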


### AI Integration in Financial Services


**Larry Wade** described PayPal’s approach to AI integration, positioning AI as an “optimization layer” rather than a replacement for existing systems. This approach maintains traditional controls and security measures whilst enhancing customer experience through improved pattern recognition and risk assessment.


Wade explained how AI enables financial services to reach previously underserved populations, particularly the unbanked and underbanked globally. He also discussed PayPal’s PYUSD stablecoin as an example of how blockchain technology combined with AI can create new financial infrastructure, noting the regulatory significance of having a regulated stablecoin backed by US treasuries.


He emphasized that it would be “irresponsible for private companies to create these world-changing technologies and not lean into educating those that have to regulate them,” advocating for proactive engagement between companies and regulators.


### Trust and Explainability Challenges


A significant tension emerged around AI’s pattern recognition capabilities versus the need for transparent, explainable decision-making in public sector applications. **Knut Vatne** from the Norwegian Tax Administration raised concerns about government agencies’ ability to use AI for citizen-affecting decisions when they cannot adequately explain the results.


**Haidel Alvestram** identified a “fundamental conflict in payment systems” between the need for accurate, auditable systems and AI’s typical inability to explain how it achieves its results. This represents a significant barrier to AI adoption in critical applications where accountability and transparency are regulatory requirements.


The panelists acknowledged this as an ongoing challenge requiring continued research and development, with no clear resolution offered during the discussion.


## Areas of Agreement and Disagreement


The participants showed agreement on several key points: the potential value of open source AI for democratizing access to technology, the importance of localized datasets for AI effectiveness, and the necessity of public-private partnerships for successful implementation.


However, significant disagreements emerged around the definition of “open source AI,” with traditional open source advocates questioning whether current AI company practices truly constitute openness. There were also different perspectives on the appropriate level of AI integration in government decision-making, with government representatives expressing stronger reservations than private sector participants.


## Practical Applications


The discussion was grounded in concrete examples of current AI applications. **Flavia Alvez** described Meta’s models being used for scientific discoveries and educational tools in local languages. **Judith Okonkwo** shared examples of VR applications combined with AI for autism awareness and educational support in resource-constrained environments. **Larry Wade** explained how AI enables asset provenance verification and digital identity authentication in financial services.


## Unresolved Challenges


Several significant challenges remain unresolved, including the fundamental tension between AI capabilities and explainability requirements, questions about genuine democratization of AI technology, and practical mechanisms for scaling localized implementations across diverse regions and regulatory environments.


## Conclusion


This Internet Governance Forum discussion highlighted both the potential and challenges of integrating open source AI into digital public infrastructure. While participants agreed on the importance of collaboration and localization, significant questions remain about implementation approaches, governance frameworks, and ensuring that AI benefits reach underserved communities.


The conversation emphasized that successful AI integration requires sustained collaboration between private companies, government entities, and civil society organizations, with continued attention to equity, accountability, and public interest considerations. The path forward requires ongoing dialogue and experimentation to address the technical and policy challenges identified during the session.


Session transcript

Judith Vega: Hi, good morning everyone. Thank you so much for joining us. I'm going to give us a couple more minutes to get settled. I'm going to invite everyone to come up and take a seat up here on this round table, just so we're all a bit closer. Okay, great. I think we can go ahead and get started. So, good morning everyone again and thank you so much for joining us here at the 2025 Internet Governance Forum in Oslo, and a very warm welcome to everyone tuning in via the live stream. My name is Judith Vega and I'm a specialist at the World Economic Forum working on governance and policy for technologies. It is my sincere pleasure to be your moderator for today's session focusing on the future of digital public infrastructure and artificial intelligence. As we all get settled, I want to start today by making a bold claim. So, I'm going to ask us a question here. Most of us on a daily basis interact with DPI protocols and tools. And I'll get to prove my point in a second. By show of hands, can I ask how many of you here in this room have a smartphone with Face ID? All right. That's not bad for people at the Internet Governance Forum. That's pretty good. All right. How many of you have social media accounts that require login information and a password? It doesn't have to be Facebook. It can be LinkedIn. It can be whatever your choice is. Good. That's all right. That's most of us again. And how many of you use digital payment systems? Ah, that's more of us. There we go. So there's a reason that most of us in the room raised our hands. And it's that over the past decade, DPI has become the cornerstone that allows all of us to navigate and participate in society through its core components, which are digital identity, digital payment systems, and data exchange. And there's a wide range of ways in which we can do that. And most of the innovations in those three areas have really come from the private sector in the last couple of years. So the question that we ask today is not, does DPI work? Or how does it work? But rather, how do we get it to work well? How do we get it to work in the future in a way that is globally scaled, interoperable, and secure? And we pose that perhaps the answer lies in AI, in open source AI very specifically. And if so, then what are the roles of the public and private sector? What roles can they both play to make sure that this comes to fruition? To answer these questions today, I'm thrilled to be joined by three outstanding panelists. To my left, to my right, excuse me, I have Larry Wade, Global Head of Compliance for PayPal's Blockchain, Crypto, and Digital Currencies, offering a critical perspective on financial innovation and regulatory frameworks. Thank you for being here, Larry. And to his right, we have Melinda Claybaugh, Policy Privacy Director at Meta, who brings a wealth of experience in privacy and platform governance. And then, joining remotely, we have Judith Okonkwo. I'm not sure if you can see. She's the founder of Emisi3Dear. She's a pioneer in immersive technologies and open innovation, especially across the African continent. I remind you that this is an open forum, so we invite your questions and your reflections throughout the entire session. Whether you're here in person or joining us online, your voice is essential to this dialogue. And with that, I begin with a question for Melinda. Melinda, Meta has broken ground with its open source AI model. Can I ask you, how does Meta view AI? What does it feel the future of AI is?
What does AI integration look like across regions and across jurisdictions? And how do you see open source AI contributing to the development of DPI protocols globally?


Melinda Claybaugh: I think that was three questions or so, at least. Hello, everyone. Flavia Alvez from Meta. Really happy to be here. Thanks for organizing. Yeah, so just to level set for a minute about AI at Meta. We are both a developer and provider of a large language model that we call Llama. We’ve produced multiple versions of Llama at this point. And we also build services on top of our large language model. So just for a minute, our open source approach to building our large language model Llama really means that we make a very powerful large language model available for free to anyone to build on it. This is an incredible advantage to anyone who wants to have access to cutting edge technology. And it allows a really impressive level of customization for developers who want to provide bespoke solutions for their companies, their constituents, their stakeholders, their countries and regions. And so we think that open source is an incredibly powerful tool to accelerate the adoption and use and implementation of AI, but most importantly, to make it as useful as possible. for as cheap as possible to people. We also are very focused on building and incorporating AI into our existing services and developing new services based on AI. So if you’re a user of our apps, you will have seen that we’ve already added generative AI features into our apps that let you do fun things, of course, but also to ask questions and get information and answers. We’ve also recently launched a standalone app that you can have ongoing conversations with, that you can talk to, ask for recommendations, that kind of thing. And so we really see the future of AI as a personalized experience, a personalized assistant for you in your day-to-day life. And I think we are getting increasingly closer to that being a reality recently. For those of you who may have seen our booth, our meta booth has our glasses. Our meta AI assistant has been integrated with Ray-Ban Meta’s eyeglasses. So that means you can wear these glasses and walk around and talk to the glasses and ask the glasses, hey, I’m in Oslo, what am I looking at? Or what does this sign say in Norwegian? Can you translate it for me? And so these are just really concrete, easy, fun examples of the way that AI and AI powered by open source technology is really coming into our daily lives and providing a lot of value.


Judith Vega: Thank you, Melinda. I have a follow-up. You said that this is providing daily value, which I think is very, very true. With AI providing value to these technologies and these new products, where do you see these being integrated the most? Where do you find that people are using this sort of open source AI the most?


Melinda Claybaugh: Yeah, so when you think about our open source models, which have been downloaded millions and millions of times, we're tracking a lot of uses in really groundbreaking ways. Our Llama models are being deployed to make scientific discoveries for advances in health research. They're being deployed in small communities around the world to help kids with their homework in a local language. They're just being deployed in really creative, interesting ways that are helping people day to day. I think we tend to think about, oh, what's the latest cute feature on this or that app, and that's fun, but I think we shouldn't lose sight of the importance of this foundational technology and the value that these models can provide. And so we've run programs where we provide impact grants to entities that have interesting pitches and ideas, and we provide technical assistance. I think we're still learning; the sky's really the limit in terms of how AI can be leveraged to solve local problems in a really inexpensive way.


Judith Vega: Thank you. And I want to stay with this topic of value. Larry, I want to turn to you. PayPal is a leader in digital payment systems. How does it see its value, or the transfer of financial value, and sort of this next era of the internet, changing with AI?


Larry Wade: Yeah, it's interesting. It's pretty much an optimization layer in a way. Our CEO Alex Chriss likes to say our goal is to revolutionize commerce. That's what we want to do, because we have this two-sided network, so we get to see consumers and merchants, you know, almost 400 million wallets in 200 countries. So it's this very robust ecosystem that we get to see. When you look at it from my vantage point, distributed technology, so that's blockchain and digital assets, AI is going to be essential in a few ways, right? So think about just onboarding customers. Believe it or not, depending on where you are in the world, it's extremely challenging. So the customer identification process, KYC, KYB, so know your customer, know your business, and especially on the small business side, it can be challenging. Being able to utilize tools such as AI to be able to say, all right, there are additional attributes that we can look at in order to gain comfort with onboarding this customer segment means we can now facilitate providing different services that we couldn't in the past. Sounds very simple, but again, when you're talking about compliance or just risk management globally, that's essential. Blocking and tackling on fraud and financial crimes. Just making sure that people, when you're dealing with money... I like to say the internet, kind of 2.0, democratized information. Beautiful. This Web3 is democratizing value, and when you're democratizing value, the stakes are even higher because everyone needs to transact. So being able to enable a safer environment to enhance and improve the velocity of transactions, that's going to be essential there. So again, fraud, BSA, AML, sanctions, etc. And then also just the overall experience. What are people doing and how? AI allows us to see patterns that we typically are unable to see. So we launched the first stablecoin by a major financial institution that is regulated, PYUSD, a fiat-backed stablecoin with, again, many benefits to it. Deploying AI right now allows us to start seeing, okay, where is it being used? How is it being used? What are the potential use cases? How can we allow this tool that allows for faster, cheaper, programmable value transfer with instant finality? And then I'll say lastly, kind of tying the blockchain and AI together as well, is this notion of asset provenance, right? So I will talk to merchants, and let's just say you're Nike and you have the physical good and you have the digital representation as well, right? When you start getting into physical and digital, knowing what is valid is going to be extremely important. It's kind of almost that blue check. Well, think about when you have your digital twin, when you have AI-generated outputs, being able to utilize kind of NFT technology to be able to put that stamp to say this is the real one, this is that digital one of one. That's also something that's going to be important. So the digital identity component and also the asset provenance component, as well as optimizing the overall experience for value transfer, that's kind of where AI is being integrated right now. And it's actually still early days, but I do think we're going to hit a point on the curve where we're going to see exponential change. On that exponential change that you talk about, looking into the future, do you think that AI is going to be foundational for seamless integration between public digital wallets and then private digital wallets and services? 100%. And the reason why is, again, when you're dealing with value transfer, trust, compliance, it's essential and you can't get it wrong.
So when you can improve those kind of core tenets of how we’re going to integrate with these wallets, which are going to hold value, not only just fiat value or ties to the banking system, it’ll be also just their assets that hold value that you want to just keep yourself in self-custody. So I think it’ll be essential. And what they’ll do is those experiences from the onboarding to the continuous monitoring, understanding what are the preventative and detective controls around this ecosystem, it’ll enhance that. Again, it’ll also allow us to improve pattern recognition. Just being able to lower the likelihood of bad things happening, improve the experience to make sure that activities feel more seamless, that’s what AI is good for. So I can’t see it not being critical.


Judith Vega: Thank you, Larry. I want to turn to Judith now, if we can get her on the screen.


Judith Okonkwo: Hi, Judith, can you hear us? Yes, I can. Hi, thank you so much for joining us. I want to bring you in on this. We’re talking about AI enhancing across the DPI protocols and services. And you’ve done a lot of work on adoption of AI and integration. And I want to ask you, is there any particular area that you feel AI integration is particularly important? Where is it critical? And are there any barriers to integrating open source AI across different regions and across different jurisdictions? Yes, thank you very much for the question. So I’ll probably start with sort of like AI integration from an open source perspective. And what it enables for context. The work that we do at MSC3D over sort of like the last decade has been centered on ecosystem development for the immersive technology. So augmented virtual and mixed reality across Africa. And as you can imagine, there have been lots of barriers, right? From the perspective of access, infrastructure, all of that, for people to engage with these technologies. But even more importantly, to be able to build with them for society. And I think one of the exciting things about the integration of open source AI is that it allows us to. start to benefit from the convergence of these technologies, because really, I think it isn’t either or when it comes to these technologies. We can see them coming together to really create products and services that can have a real benefit for society. And so when I think about what AI is now making possible, especially open source AI, it’s driving experimentation. It’s allowing people to build, but not have to start from scratch, for example, which is really, really important. And to give you some context about what that looks like in practice for things happening at the immersive technology end of things, with our work in some of our communities, we have worked on a number of projects, which I will mention, which are now really benefiting from the availability of open source AI tools. One project is a product called Autism VR, which was designed as a voice driven virtual reality game. And the idea with this was creating something that would really start to kind of like shift the needle when it comes to the lack of awareness about neurodiversity, particularly among children. Because, of course, we’re coming from a context where mental health is severely under-resourced and where the lack of information about neurodiversity has really caused a lot of not just discrimination, but exclusion for children that should not happen. With the advances that we now have, the availability of open source AI tools, it’s now possible to say, not only are we going to integrate that voice driven component, which makes it a much more engaging tool for the general population to engage with, but that we can also leverage language capability, right? Because we’re coming from, you know, communities where. several different languages are spoken on a daily basis, and building solely for English, for example, has really limited the public ability to engage. 
Another example I wanted to cite, kind of related to that, is VR for Schools initiative that we have, and this is now looking at deploying this technology in really resource-constrained learning environments, so how can we go into a situation where, for example, you have a school without, you know, infrastructure for things like science experiments, right, and that kind of resource-constrained environment, can you bridge the gap with immersive tools, right, can you create a VR lab where students are then able to do simulations, and now that’s one step there, but then imagine the ability to have agents who can act as guides built on top of these, you know, open source AI tools that can then provide the support within the immersive environments for those students learning, and what that does is then make this a tool that you can deploy, not just in a classroom setting, you know, but also in much more informal contexts, and I think when we think about situations where you have, you know, children who unfortunately are out of school, you then get to the concept of almost taking the school to the child where they are, if you’re able to have this combination, but, you know, moving from that to your question about, you know, the barriers to integration of open source, I think we see much of the same constraints that we’ve seen from the immersive technology side, I think a major one to talk about is skills, the sort of like capacity gap, and what needs to be done about that, I think to be able to leverage open source AI, we need to invest significantly in educating people and making sure that we have the knowledge and skills locally to build. I’m doing this across the board, right? And I think alongside that, there’s definitely even just sort of like a link to the awareness piece, right? A lot of work that needs to be done from a digital literacy perspective. Other barriers to this integration, I would definitely talk about infrastructure. Much of the same sort of like handicaps we have with immersive also existing with open source AI, particularly when it comes to the Internet. And I think in a country like Nigeria, for example, it’s really great to see the investment that’s happening now in that space to make significant changes and get as many people online as possible. And then data, you know, we need localized data sets. We need to be able to train models so that they’re relevant for us. And I know that that’s work that’s currently ongoing. There are lots of fantastic initiatives. Masakani is one. So there are barriers, but the work has begun, although there is a lot more to be done.


Judith Vega: Thank you so much, Judith. I love when panelists also share my first name, so it's lovely to call on you. But I want to stay with this idea of AI being useful to build upon and to get us ready for sort of this next phase of tools that are really being deployed and used for public good. And I want to turn to Melinda now. Part of what's critical to building DPI is this idea of hardware, and Meta has begun to produce good, valuable hardware. What do you think is important to be able to scale that hardware? What is the role of AI there?


Melinda Claybaugh: So I think we want to make products that are useful to people, and part of this is we launched the glasses a few years ago and we continue to roll them out to more and more countries. We continue to add more functionality from the AI perspective as the AI gets better and more useful. Part of this is an iterative process, right? These are new concepts: wearing AI on your face is new, and so I think what we have to do is test things out and see how people use them. What are the use cases? How do people find them useful? And then we bake that back into the development process for our products, and so I think it's a learning process over time. Obviously there are constraints in terms of how to actually build something that fits your face and has a battery that works, and there are questions around processing and all of that, but I think the biggest challenges are really around adoption: how are people planning to use these, and making them available in as many countries as possible, making the AI as useful to as many places as possible. And so part of that, as Judith, the prior panelist, was talking about, is making local data available, and I think that is really crucial to unlocking the power of AI. We train our AI on a wide variety of data, but we don't have access to a lot of data that would make the models most useful to local communities, and so that's again why the open source component is so important, because local developers can build on top of our model by adding datasets that are relevant for that country, community, or region. So I think all of this has to work together to figure out what is most useful in terms of having AI available to you. Is it in your app? Is it on your face? Is it all of it? And I think it's exciting. We'll see a lot of different approaches from different companies in how to make AI products as relevant and as useful as possible for people's day-to-day lives.


Judith Vega: Thank you very much. I'm going to actually open this question up now to any of our panelists. We recognize the importance of localized data and datasets and of integration and harmonization of these technologies. I want to ask: from the private sector, we know that there's development here. What would help, or would be beneficial, from the public sector to be able to achieve these goals?


Larry Wade: Yeah, I can take that one. And just before I dive in there, something Judith said, and you said it as well, kind of going, she said bring it to where the kids are. There’s a reason why there are so many unbanked or underbanked people in the world. A lot of that has to do with just the overall risk tolerance of institutions that are serving them, whether it’s their own policies, or again, restrictions placed on them from whatever kind of local regime from a regulatory perspective. So just wanted to hit on that same thing. Once you kind of can use AI to solve that more localization, additional attributes, hey, here’s additional data that can actually de-risk this customer, again, opens up things. But to answer your question, and this is something I have to deal with all the time, it’s being able to bring the regulators and governments along the journey with you. And it has to be a public-private partnership. Again, when you’re dealing with these very complex topics that impact society in such systemic ways, you have to make sure that those who are making the policies are not making them in silos, that you’re knowledge sharing. And hopefully that governing body, wherever they are, they’re kind of giving you the ability to experiment. So there’s this constant push and pull of, there’s rulemaking, here’s why, that sounds great, but it’s not feasible. We do need rules because we need to be able to ensure that we all have a kind of general set of parameters to play with. So, I think that back-and-forth relationship, the kind of minimum expectations, guiding principles, minimum requirements, and also just being comfortable with when information changes, both sides being able to kind of change with it, I think is really, really important for all of this to flourish.


Judith Vega: Thank you, Larry. On your point, I want to open this up now to the rest of the room, and also I’m joined by Agustina Callegari, who’s the lead for the Global Coalition of Digital Safety at the World Economic Forum, who’s serving as our online moderator. So please, if you have any questions for our panel, either online or in person, this is your time. Please raise your hand and join the conversation.


Agustina Callegari: Agustina, do we have anyone online? I have a question here for Judith, who is online. The question is whether there are any examples of south-to-south cooperation for open source AI sharing. Yeah, that's the question for you, Judith, if you can hear us.


Judith Vega: Thank you. Hello. Thank you very much for the question.


Judith Okonkwo: So, any examples of south-to-south open source sharing? Sort of like the examples that I'm most familiar with at the moment are around community. So one of the things that has really driven the concept of open source on the continent that I know about is the open source community, the African version, and they have collaborated across the board with communities in other South-South countries. And I want to highlight this because the concept of open source has given people pause several times on the continent, because there's this sort of like reaction of, you want me to make it freely available, then how are we going to make money, how are we going to benefit economically, that sort of thing. And so there's been a real need for education around open source and kind of like all the affordances that it then provides for everyone, including the people who are building in the first instance. So that's what I would mention, but I'm not aware of any other sort of like core examples and I'll definitely


Judith Vega: look that up. Thank you. Thank you. There is another question that is related to what Larry was saying about working with policy makers. So how do you ensure a continuous sharing of knowledge with policy makers? Yep. So I think one, it’s having a respectful, honest relationship with the regulators that you are working with for your particular business and ensuring that


Larry Wade: you’re having engagement with the actual kind of government officials, again, not only in your jurisdiction, but in those jurisdictions that you are interacting with. So a couple of things. Take the digital asset business and PayPal. So again, two-sided network. How do you integrate this technology into rails to enable faster, cheaper? more programmable, just value transfer within that ecosystem. One, I wanted, would you just mention also on open source, the reason why we chose to use open source protocols was one, how do you attract the best talent to work on protocols? Needs to be open source. Two, how do you not pick winners and losers open source? So like just for anyone who’s kind of asking that as well, we really thought about that. Two, even with PYUSD, they’re on open source blockchains. Now there’s obviously a need for private at times, but again if we’re gonna allow this, these technologies to grow, open source tends to be the best approach. But again, just making sure that you have those regular cadences. It sounds really simple, but it’s challenging. Who are those regulators? Who are those policy makers? What are the regular cadences? How are we bringing value to them? How are we kind of self-reporting when things are going right or wrong before they ask? A lot of this is about trust. There are brilliant people working on these things, right? Engineering is not really the issue right now. If you think about it, take all these amazing technologies we have right now, whether it’s AI or quantum computing or blockchain digital assets, there are brilliant minds working on them. The real gaps are around all the people who are going to help facilitate the introduction of these technologies into society. And that’s on the policy side, and again, that’s in the businesses. So having just that respectful, honest, transparent relationship and knowledge sharing on a frequent basis goes a long way. I’m gonna take the liberty to interrupt our Q&A segment for a bit. I want to follow up on this. Trust is earned, right? It’s something that requires a sustained period of time of interaction. Do you, does PayPal find that it becomes more trustworthy in using open source? I would say yes, and it’s interesting because it’s not only just trust, so take a step back. We have a stablecoin with our name on it. We work with other institutions. Being able to say, hey, yes, this is PayPal stablecoin, but it’s on an open source blockchain, allows that institution to feel like they have more skin in the game. When we’re working with regulators, and I’m fortunate that I get to speak to regulators all around the world. You know, I was in Singapore a couple weeks ago meeting with the MAS, and then in the UK meeting the FCA, I deal with the New York Department of Financial Services literally every other week, and just all the different alphabet soup. These open source protocols also allow them to have a little bit more agency on how they evaluate. So I found it to be beneficial. With that said, I do think there is the need for some walled gardens, and that’s where this whole notion of interoperability is going to come into play, because there are times where you need an intranet, or you need a closed ecosystem, but then how do you ensure that there’s an interoperability protocol to interact outside when the time is needed? I think that’s also part of that open source story and how you’ll see both of those playing out. Can I ask when those times are that you need a closed garden or an intranet? Sure. 
Let’s say you’re a big bank and you just did a syndicated wind farm deal in Canada, and the arranging bank now, via some smart contract, it’s determined that whatever threshold is met, now we can disperse out payments. Does everybody need to see that? No. Does everyone need to see how, you know, the Visa MasterCard Network, how participants fit? No. Right? Do you want to see, would you want people to see all of your PayPal transactions? No. So, I think that it’s fine to have a little bit of privacy. I think privacy is going to actually be really important. And it’s funny because we’re talking open source, but then now we’re going to privacy. And again, this is why this is all so complicated, but also why it’s so fun. Because we are solving new problems that I don’t think humanity has ever had to think about on this scale because these technologies are so revolutionary. So I think there definitely are times where it needs to be between us. But ultimately, you know, both are needed. Oh, I couldn’t agree more. And I certainly don’t want everyone seeing my transactions on PayPal. But with that, I open it up again to the floor once more. Yes, there’s a question in the back.


Audience: Hi. Can you hear me? Yes. Please go ahead. Hi. My name is Marin. I am a researcher at IT4Change, which is an NGO that works at the intersections of digital technology and social justice. So my question, it’s a two-part question. So one is a more basic question on, I want to understand better what you think or how do you see, how do you define an open source AI? So the issue is, one issue, concern that we have is even when we talk about open source and the possibilities of innovation that it allows for it, it seems that the foundational models are still being controlled by few actors. It’s not really democratized. So what is, for me, open source AI is something that’s also equivalent to a democratized access and development of AI. So if the core foundational models are still controlled by a few actors, then how do you define open source AI? And secondly, I think… You mentioned in one of your interventions that open source, when you integrate open source AI into DPI, it also allows agency to the regulators to evaluate it. So I want to understand what are the benefits of open source when it is integrated with DPI. What are the, like, how does it allow the public actors to evaluate? Because when DPI is essentially used for various governance, core governance aspects, and it can impinge on the rights of the citizens. So how does it, how does open source allow in the regulators to have more oversight over the DPI applications that are being used for governance structures? Yeah, I’ll give you kind of my thought on it. That’s a great question, by the way. Thank you.


Larry Wade: So let’s kind of go back to, I’ll use this kind of Internet 2.0, 3.0 example again. So in Internet 2.0, you had these brilliant engineers that created this infrastructure. Who extracted value from that infrastructure layer? None of the infrastructure builders. All the value was at the application layer, pretty much. So I also think that’s why we have some of the issues we have now, right? But again, there was tons of innovation. We’re moving forward. The way I think about open source is that infrastructure layer is open where developers can work and build, and there will be times that they build applications that are open source themselves. And then there will be times where applications do need to be a little bit closed. But ultimately, if you don’t have the open source infrastructure layer, now you also just have that problem at the application layer again. And being able to have value transfer mechanisms align to the infrastructure layer is a really important idea because it incentivizes brilliant minds to work on them because they have some upside now and then also it allows for a little bit more just competition on what’s going to win because ultimately if I can extract value from various infrastructure layers what’s going to make me pick one over the other? Maybe it’s just better. So that’s kind of how I think about that and then your question on the regulator side again, you’re dealing with people who you have a lot more expertise than they do because they have such wide scopes and you’re living it every day. So if you have a starting point where there’s an understanding of what the kind of infrastructure is as you’re building more complex products on top of it the discussions are a little easier. So, I mean, it happens all the time just with what I have to do just again in the digital asset and distributed technology space, right? So if I come in and say, hey, we want to build this new product that allows for X but we’re building it on this open source blockchain that you are familiar with at a minimum there’s a little bit of comfort when we come to them with what we’re trying to pitch and then now the complication is on that actual innovation on top of that. I don’t know if that helps a little bit but it’s kind of like this beautiful dance in a way.


Judith Vega: Thank you, Larry. I know we have a question here but the gentleman was up first. So if you’d like to go ahead. Thank you.


Audience: My name is Haidel Alvestram. There's a fundamental conflict in payment systems in that payment systems have to be accurate to the cent and they have to be auditable. Rules have to be followed. Well, AI is typically good at detecting interesting patterns, coming up with surprising answers, and absolutely hopeless at explaining how it achieves them. Can you talk a little bit about how you mitigate that conflict when you embed AI in payment systems? Excellent question.


Larry Wade: This is fun. So the way I like to think about things is, let’s take payments here, 80% of what we need to do, we can leverage best practices and just tried and true, hey, we know this works. So overlaying AI, again, it’s about optimization. We wouldn’t throw out all of the policies, procedures, and controls that are already developed to make sure that we can adhere, even though, by the way, there is a lot of friction and a lot of errors, even in the existing system, right? And there’s a lot of true ups and things of that nature. But the way I see AI integrating into payments is not saying, we’re just going to rely on AI for this, it’s, we’ve been doing X, Y, Z. So PayPal, we’ve been moving money for 20 years, we’ve been doing it well. Okay, overlaying AI now can give a better customer experience, could actually now find those tail situations, and can just better refine us making sure that we meet any obligations to customers, regulators, etc. So again, optimization, rather than pure reliance. That’s kind of how I see that. So it’s a partnership. It’s a good question.


Judith Vega: Thank you, Larry. And then we have a gentleman here to the right. Thank you.


Audience: My name is Satish and I have a long background in open source. I am presently part of ICANN and DotAsia organization. I sense a little bit of uncertainty when you refer to open source AI, because open source from the last 20 plus years of working with open source usually means code. Code means the stuff that you write in C or Java or whatever. And these days, open source model of code is kind of free. I mean, most organizations, including Microsoft, release code under open source. This is nothing extraordinary. When we started out 25 years ago, it was very extraordinary. Today, it isn’t. The second part is open source model weights. Now, that is new to AI. When you take a raw model, you train it, and you come out with weights, that is what decides how the model is going to respond to questions. Open sourcing, that is not a very well-articulated concept. But the third thing is open source datasets. Now, I’d like to know what precisely you mean when you refer to open source AI.


Judith Vega: Thank you. Okay, thank you so much for the question. We can come in a bit, and I'll give a bit of background. So, we've had these extensive conversations at the forum now, and in the work that we do, about what open source means, particularly as it pertains to digital public infrastructure. Normally and typically, when you talk about DPI, the P in public means different things to different people. What we've landed on is that DPI doesn't necessarily mean public as in public sector, but rather public as in common, or generally available to the public. And that's also the definition that we leverage, or the common consensus that we leverage, for open source. Not that it is, again, public sector and public-driven, but rather that it's ubiquitous and it can be commonly found, that it's something that can be found and used, leveraged, adopted across various jurisdictions, across various regions, regardless of its source, and then can be built upon by different actors across different sectors. So, really grounding ourselves in that open source, whether it be a trained model or just the code itself, means that it is common, that it is open, that it is free and it can be accessible; that is what we mean when we refer to sort of open source. And I think it's what Melinda was referring to. These are just protocols that are available, right? Anyone can download them. If you have the right hardware, you can download them, you can train them, you can build upon them, you can deploy them, and you can integrate them into different models or different technologies. So, that's what we mean. The P for us is common, right? Not publicly available, not necessarily publicly driven.


Larry Wade: And just to add to that, that notion of common is going to be really important. I keep going back to regulation and minimum requirements. If you can kind of start with the same ingredients, for lack of better words, how your cake comes out is going to be dependent upon how you mix those ingredients, manipulate them, and how you bake it. But ultimately, we are all kind of starting with the same ingredients. If you can start with the same ingredients, it allows those that are governing to have a better starting point for sensible, reasonable regulations and requirements. So again, that's why I lean towards this: if each model were bespoke, and those governments and regulators had to start with something net new every single time, that'd be quite challenging. But if we all kind of have minimal requirements, or there are certain known protocols that have been adopted to start with, I think it'll be a little bit easier to manage some of this, because it's going to be quite challenging, honestly. Because here's something I run into: what matters to one regulator in one part of the world can be dramatically different from what matters to another. I'll give you an example. It's very clear that when you're doing business in the UK, consumer protection is front and center. So you have financial promotions, and consumer duty, and things like that. Yes, it matters in the US, but not as much as it matters in the UK. Are you saying that we're unprotected in the US? I'm not saying that; I've just gotten to see that each kind of government and regime has their own thing. So for example, when you look at the EU and MiCA, yes, it's great, they put out a digital asset regulation. But if you kind of backdoor it, they're saying, but we really want EU-denominated stablecoins. So, everybody: you're dealing with regulators who are trying to learn, and then, depending on what their priorities are, they're trying to force those as well. So it's going to be complicated no matter what you do. So the more that we can have a common nomenclature and a common starting point to at least negotiate with, I think it's going to make things easier, because it's going to be challenging regardless. This is global adoption for all of these technologies.


Judith Vega: Thank you, I know we had a question online, Agustina, I’ll turn it to you


Agustina Callegari: Yeah, there is another question. It's: how do we see private technology companies playing a role in the DPI and AI for social good landscape? Within DPI now, the fundamental topics like digital ID and payments have been solved. What was the last part, on solved? Within DPI now, that's the way it's framed: the fundamental topics like digital ID and payments have basically been solved. Yeah, it's asking about that framing, that digital ID and payments have been solved, but


Judith Okonkwo: Yeah, I’m gonna let Judith come in for for a minute Judith are you you’re still with us Yes, I am So sort of like to jump in that last bit about The digital ID and payments having been solved if that’s the question Then I think yeah, it definitely should still be a question because I would say not solved in Very many parts of the world. I’m still the question but to the first part which is Around how private technology companies can come in and I particularly want to talk about AI for social good I’m not sure if Melinda is still there with us, but One of the initiatives that META, for example, is driving on the continent, linked to its large language model, LLAMA, are these LLAMA Impact Accelerators. And the initiatives where they are incentivizing communities, developers, to build on top of their large language models and create products that will, in some positive way, impact society. The Impact Fund has been going on for a couple of years, I believe, but the current iteration, and applications are still open for that, what we are seeing is a handshake with governments. So, for example, in Nigeria, it’s in partnership with the ministry that oversees the digital economy. And I think what’s interesting about that is we’re starting to see the multi-stakeholder approach to driving AI for social good, right? I know when we talk about DPIs, we talk a lot about public-private partnerships and the role that they have in accelerating things. And I think we start to see that with initiatives like this. And there are a number of others, I mean, in country, in Nigeria, which I’ll reference for my examples, alongside things like the Impact Grant, there are other initiatives from, say, the Gates Foundation, where they’re currently investing with the government to create an AI scaling hub that will then allow more people in country to be able to do a number of things, build on models, work on data sets, all of the things that will advance a national AI strategy. So there’s, yeah, a huge role for private technology companies alongside that for all of the people that will engage with them. And I think particularly from the regulatory side of things, so government, there is a real need to determine what that engagement looks like and how it will impact people, how it will impact citizens. And of course, there’s the citizens piece where people then have to have a voice in saying how these things will affect them. And I think when we start to talk about that voice, we have to think about the digital literacy that’s required to enable that.


Larry Wade: Yeah, I totally agree with you. Digital identity and payments have not been solved. Yes, there have been enhancements and we're moving in that direction, but no, it's actually a great opportunity for people to try to tackle. And just one thing to add to what Judith said: I think it's irresponsible for private companies to create these world-changing technologies and not lean into educating those that have to regulate them. And again, that's just my own personal view; thankfully, for PayPal, I get to lean in on the regulated side, so that's how me and my team go about it. But to say, hey, we're going to create something that is complex, that can be disruptive, that can be beneficial, and here you go, you figure it out on your own, I think that'll just cause more confusion and angst for everyone involved. So I think it's building and leaning in, but then also educating, communicating, and understanding that, even though it's frustrating, you need to bring governments and regulators along for the journey, because that's the society portion of this whole thing.


Judith Vega: Thank you so much. I want to give a final opportunity to any other guests.


Audience: Yeah, hello. Mr. Knut Vatne here from the Norwegian Tax Administration, so I'm representing a large public sector agency in Norway. We're using a lot of basic machine learning and gen AI tools as co-pilots and for productivity, but we are rather reserved about using advanced AI, like deep-learning-based AI and generative AI, for decision-making that affects citizens, because we basically can't explain the results at a satisfactory level. So I wonder, to what degree do you view open source as helping us realize explainable AI? I mean, open weights or open source code can provide trust on a formal level, but in my view, it does little in the way of actually explaining the results and the decisions at a level that's understandable to the citizen. Well, one, thank you for that. I would say that this is where, again, we all take a step back and are humbled that these are very interesting and challenging questions. So having that lower-level, hey, we're kind of playing and experimenting in this, I think that would just be great. And then also leaning into those partners who you do work with on the tech side and being able to share your results and see if they can help as well. But I think it's going to be important to make sure that government agencies are, along the way, mirroring the private sector. If not, that bifurcation will be so great in the long run that we could end up having problems down the road. So, to your point, you have a responsibility to the citizens to make sure we get this right, and that's what we're doing now. But then also experimenting here to make sure that you're keeping up with the technology. That way, when it's ready for prime time, which a lot of this is not yet, you can kind of do the cutover. That's kind of how I see that. And Judith, I don't know if you have or…


Judith Vega: Judith and Judith, I don't know if you guys have anything. I will let panelist Judith come in if she can, and then we only have about a minute left, so I'll go ahead and wrap up. No? Okay, I will go ahead and offer some thoughts on this question and also give us some reflections. We talk a lot about trust at the forum, and I'm very happy to be joined by Daniel Dobrowolski at the table, who's the head of governance and trust at the World Economic Forum, and we talk a lot about trustworthy decision-making, and that's centrally important to decision-makers and regulators that, as Larry put it, have sort of an obligation to the public. And I think you're right: when we talk about AI, we have the luxury of spending all of our days talking about AI and decisions and models and how to play with them in private-public cooperation, but there's a large number of people, or groups of people, disaggregated throughout the world, that maybe don't have the luxury of doing that every day. And to your point, it then becomes necessary to be able to explain and communicate these things in simplified terms, so that the user is not only protected, but protected through being well-informed, so that the user can also take action, make better decisions themselves, or demonstrate their preferences somehow. And that takes, again, cooperation that's both private and public; these efforts need to be driven jointly. And I want to wrap up by inviting all of us to think about the future. You know, AI isn't this abstract thing anymore that's talked about every so often on large news outlets. Rather, it's a technology that's being deployed every single day, and it's being used by both public and private sectors to improve and enhance DPI and the technologies that we all use, whether we're sending money across PayPal or Venmo or Zelle to someone abroad, in a different country, or to our friends after dinner. It's something that we're using to access civic participation and public life, in some countries even voting and other forms of essential civic participation. It's the way that we express our citizenry and that we express our autonomy. So as we venture into this future together, I invite all of us to think about what kind of future it is that we want, and to remember that we're not passive users of this technology, right? We can think about these things every day, we can make decisions every day, and especially the people here in this room can continue to be well informed and advocate for the sort of technologies that we want deployed in the bedrock of our everyday lives. So thank you again. I thank our lovely panelists and our online moderator, and thank you so much for joining us. And if you have any questions, we're here for the next 10 minutes. Please stick around. Thank you again. Have a lovely day.



Melinda Claybaugh

Speech speed

151 words per minute

Speech length

978 words

Speech time

386 seconds

Open source AI provides free access to cutting-edge technology and enables customization for local solutions

Explanation

Meta's open source approach with their Llama large language model makes powerful AI technology available for free for anyone to build on. This allows developers to provide customized solutions for their companies, constituents, stakeholders, countries and regions, making AI as useful as possible at as low a cost as possible.


Evidence

Meta’s Llama models have been downloaded millions of times and are being deployed for scientific discoveries, health research, and helping kids with homework in local languages


Major discussion point

Open Source AI and Its Role in Digital Public Infrastructure


Topics

Development | Infrastructure | Economic


Agreed with

– Judith Okonkwo
– Larry Wade

Agreed on

Open source AI enables broader access and customization for local solutions


Disagreed with

– Audience
– Judith Vega

Disagreed on

Definition and true openness of open source AI


Local developers need access to relevant datasets to make AI models useful for their communities and regions

Explanation

While Meta trains their AI on a wide variety of data, they don’t have access to data that would make models most useful to local communities. The open-source component is crucial because local developers can build on top of Meta’s model by adding datasets relevant for their specific country, community, or region.


Evidence

Meta’s AI assistant integrated with Ray-Ban glasses can translate Norwegian signs and provide location-specific information in Oslo


Major discussion point

Barriers to AI Adoption and Regional Implementation


Topics

Development | Sociocultural | Infrastructure


Agreed with

– Judith Okonkwo

Agreed on

Localized datasets and regional customization are essential for AI effectiveness



Judith Okonkwo

Speech speed

143 words per minute

Speech length

1631 words

Speech time

679 seconds

Open source AI drives experimentation and allows building without starting from scratch, particularly beneficial for resource-constrained environments

Explanation

Open source AI enables people to build and experiment without having to start from scratch, which is especially important in resource-constrained contexts. It allows for the convergence of technologies and creates products that can benefit society, particularly in communities with limited access and infrastructure barriers.


Evidence

Examples include Autism VR (a voice-driven VR game for neurodiversity awareness) and VR for Schools initiative (deploying VR labs in resource-constrained learning environments where students can do science simulations)


Major discussion point

Open Source AI and Its Role in Digital Public Infrastructure


Topics

Development | Sociocultural | Infrastructure


Agreed with

– Melinda Claybaugh
– Larry Wade

Agreed on

Open source AI enables broader access and customization for local solutions


Open source AI enables convergence of technologies like immersive reality and AI to create beneficial societal products

Explanation

The integration of open source AI allows for the convergence of immersive technologies (AR/VR/MR) with AI to create products and services that have real societal benefits. This convergence is particularly valuable when technologies work together rather than in isolation.


Evidence

Autism VR project now benefits from AI voice integration and language capabilities, and VR for Schools can have AI agents acting as guides in immersive learning environments


Major discussion point

Open Source AI and Its Role in Digital Public Infrastructure


Topics

Development | Sociocultural | Infrastructure


Major barriers include skills gaps, capacity constraints, infrastructure limitations, and need for localized datasets

Explanation

Key barriers to open source AI integration include the capacity gap and need for significant investment in education and local skills development. Infrastructure constraints, particularly internet access, and the need for localized datasets to train relevant models are also major challenges.


Evidence

In Nigeria, there’s investment happening in internet infrastructure, and initiatives like Masakhane are working on localized datasets


Major discussion point

Barriers to AI Adoption and Regional Implementation


Topics

Development | Infrastructure | Sociocultural


Agreed with

– Melinda Claybaugh

Agreed on

Localized datasets and regional customization are essential for AI effectiveness


Multi-stakeholder approaches involving government partnerships are essential for driving AI for social good initiatives

Explanation

Private technology companies can play a crucial role in AI for social good through partnerships with governments. These multi-stakeholder approaches accelerate development and ensure proper engagement with regulatory bodies and citizens who need a voice in how these technologies affect them.


Evidence

Meta’s LLAMA Impact Accelerators in Nigeria partner with the ministry overseeing digital economy, and Gates Foundation is investing with the government to create an AI scaling hub


Major discussion point

Public-Private Partnerships and Regulatory Cooperation


Topics

Legal and regulatory | Development | Economic


Agreed with

– Larry Wade

Agreed on

Digital ID and payments have not been fully solved globally



Larry Wade

Speech speed

159 words per minute

Speech length

3079 words

Speech time

1159 seconds

Open source protocols attract the best talent and avoid picking winners and losers in technology development

Explanation

PayPal chose to use open source protocols because it attracts the best talent to work on protocols and avoids the problem of picking winners and losers in technology development. This approach allows for broader participation and innovation in the ecosystem.


Evidence

PayPal’s stablecoin PYUSD is built on open source blockchains, and they use open source protocols in their two-sided network serving 400 million wallets in 200 countries


Major discussion point

Open Source AI and Its Role in Digital Public Infrastructure


Topics

Economic | Infrastructure | Development


Agreed with

– Melinda Claybaugh
– Judith Okonkwo

Agreed on

Open source AI enables broader access and customization for local solutions


AI serves as an optimization layer for customer onboarding, fraud prevention, and enhancing transaction velocity in payment systems

Explanation

AI functions as an optimization layer in PayPal’s payment systems, particularly helping with customer identification processes (KYC/KYB), fraud prevention, and improving transaction velocity. It allows them to look at additional attributes to gain comfort with onboarding customer segments they couldn’t serve before.


Evidence

PayPal uses AI for pattern recognition in their 400 million wallet ecosystem across 200 countries, and for monitoring their regulated stablecoin PYUSD to understand usage patterns and use cases


Major discussion point

AI Integration in Financial Services and Payment Systems


Topics

Economic | Cybersecurity | Infrastructure


AI enables better pattern recognition and risk assessment, allowing services to previously underserved customer segments

Explanation

AI allows financial institutions to identify additional attributes and patterns that help de-risk customer segments that were previously considered too risky to serve. This is particularly important for addressing the unbanked and underbanked populations globally.


Evidence

PayPal’s experience serving unbanked/underbanked populations where risk tolerance of institutions and regulatory restrictions were barriers, now addressable through AI-enhanced risk assessment


Major discussion point

AI Integration in Financial Services and Payment Systems


Topics

Economic | Development | Legal and regulatory


Agreed with

– Judith Okonkwo

Agreed on

Digital ID and payments have not been fully solved globally


AI will be foundational for seamless integration between public and private digital wallets due to trust and compliance requirements

Explanation

AI will be essential for integrating public and private digital wallets because when dealing with value transfer, trust and compliance are critical and you can’t get it wrong. AI improves the core tenets of integration including onboarding, continuous monitoring, and preventative/detective controls.


Evidence

AI enhances pattern recognition and lowers the likelihood of bad things happening while improving user experience for wallets holding both fiat value and digital assets in self-custody


Major discussion point

AI Integration in Financial Services and Payment Systems


Topics

Economic | Cybersecurity | Legal and regulatory


AI integration in payments focuses on optimization rather than pure reliance, maintaining existing controls while improving customer experience

Explanation

Rather than replacing existing payment system controls, AI serves as an optimization layer that enhances tried-and-true practices. PayPal wouldn’t throw out existing policies and procedures but uses AI to refine processes, find edge cases, and improve customer experience while meeting regulatory obligations.


Evidence

PayPal has 20 years of experience moving money and uses AI to overlay on existing systems rather than pure reliance, addressing the fundamental conflict between AI’s pattern detection and need for auditable payment systems


Major discussion point

Challenges in AI Explainability and Trust


Topics

Economic | Cybersecurity | Legal and regulatory


Disagreed with

– Audience

Disagreed on

Appropriate level of AI integration in government decision-making


AI enables asset provenance and digital identity verification, particularly important for physical-digital asset integration

Explanation

AI combined with blockchain technology enables asset provenance verification, which becomes crucial when dealing with both physical goods and their digital representations. This is like a ‘blue check’ for digital assets, ensuring authenticity of digital twins and AI-generated outputs.


Evidence

Example of Nike having both physical goods and digital representations, where NFT technology can provide a stamp of authenticity for ‘the real one’ or ‘digital one of one’


Major discussion point

AI Integration in Financial Services and Payment Systems


Topics

Economic | Legal and regulatory | Infrastructure


Successful AI implementation requires bringing regulators and governments along the journey through knowledge sharing and experimentation

Explanation

When dealing with complex technologies that impact society systemically, there must be public-private partnerships with knowledge sharing between companies and regulators. Governing bodies need to provide the ability to experiment while maintaining appropriate oversight and rule-making.


Evidence

Larry’s regular engagement with regulators globally including Singapore’s MAS, UK’s FCA, and New York Department of Financial Services, demonstrating the need for frequent, transparent relationships


Major discussion point

Public-Private Partnerships and Regulatory Cooperation


Topics

Legal and regulatory | Economic | Development


Agreed with

– Judith Okonkwo

Agreed on

Public-private partnerships are crucial for successful AI implementation


Private companies have a responsibility to educate regulators about complex technologies they create rather than leaving them to figure it out alone

Explanation

It’s irresponsible for private companies to create world-changing technologies and not help educate those who have to regulate them. Companies should engage in building, educating, communicating, and understanding that they need to bring governments and regulators along for the journey.


Evidence

Larry’s role at PayPal involves regular engagement with regulators and self-reporting when things go right or wrong before regulators ask, emphasizing that trust-building requires sustained interaction


Major discussion point

Public-Private Partnerships and Regulatory Cooperation


Topics

Legal and regulatory | Economic | Development


Regular engagement and transparent relationships with regulators across different jurisdictions are crucial for trust-building

Explanation

Building trust with regulators requires sustained periods of respectful, honest, transparent relationships and knowledge sharing on a frequent basis. This involves identifying the right regulators, establishing regular cadences, and self-reporting both successes and failures.


Evidence

Different regulatory priorities across jurisdictions – UK focuses on consumer protection, EU wants EU-denominated stablecoins, each regime has different priorities requiring tailored engagement approaches


Major discussion point

Public-Private Partnerships and Regulatory Cooperation


Topics

Legal and regulatory | Economic | Human rights



Audience

Speech speed

161 words per minute

Speech length

897 words

Speech time

333 seconds

Government agencies face challenges using advanced AI for citizen-affecting decisions due to inability to explain results satisfactorily

Explanation

Public sector agencies like the Norwegian Tax Administration use basic machine learning and AI as productivity tools but are reserved about using advanced AI for decision-making that affects citizens. The main concern is the inability to explain AI results at a satisfactory level to citizens.


Evidence

Norwegian Tax Administration uses AI for co-pilots and productivity but avoids it for citizen-affecting decisions due to explainability concerns


Major discussion point

Challenges in AI Explainability and Trust


Topics

Legal and regulatory | Human rights | Sociocultural


Disagreed with

– Larry Wade

Disagreed on

Appropriate level of AI integration in government decision-making


Open source provides formal trust but doesn’t necessarily solve the problem of explaining AI decisions to citizens

Explanation

While open source code or open weights can provide trust on a formal level, they do little to actually explain AI results and decisions at a level that’s understandable to citizens. This is particularly important when DPI is used for core governance aspects that can impact citizen rights.


Evidence

Question about how open source allows regulators to have more oversight over DPI applications used for governance structures


Major discussion point

Challenges in AI Explainability and Trust


Topics

Legal and regulatory | Human rights | Infrastructure


There’s a fundamental conflict between AI’s pattern detection capabilities and the need for auditable, explainable payment systems

Explanation

Payment systems must be accurate and auditable with clear rule-following, while AI is typically good at detecting patterns and providing surprising answers but is poor at explaining how it achieves results. This creates a fundamental tension in embedding AI in payment systems.


Evidence

Payment systems require accuracy to the cent and auditability, while AI excels at pattern detection but lacks explainability


Major discussion point

Challenges in AI Explainability and Trust


Topics

Economic | Cybersecurity | Legal and regulatory


Concerns exist about whether AI is truly democratized when foundational models are still controlled by few actors

Explanation

Even when discussing open source AI and its innovation possibilities, there are concerns that foundational models remain controlled by a few actors, questioning whether this truly represents democratized access and development of AI. The questioner seeks clarification on how open source AI is defined when core models aren’t democratized.


Evidence

Question about defining open source AI when foundational models are controlled by few actors, challenging the notion of democratized AI


Major discussion point

Questions and Clarifications on Open Source AI Definition


Topics

Economic | Legal and regulatory | Development


Disagreed with

– Melinda Claybaugh
– Judith Vega

Disagreed on

Definition and true openness of open source AI


Open source AI encompasses different components: code, model weights, and datasets, each with varying levels of openness

Explanation

Open source in AI context involves three distinct components: code (which is now commonly open sourced), model weights (a newer concept specific to AI), and datasets. The questioner seeks clarification on which specific aspect is meant when referring to ‘open source AI’ since each has different implications.


Evidence

Distinction between open source code (now common), open source model weights (new to AI), and open source datasets, noting that open sourcing model weights is not a well-articulated concept


Major discussion point

Questions and Clarifications on Open Source AI Definition


Topics

Infrastructure | Legal and regulatory | Economic



Agustina Callegari

Speech speed

96 words per minute

Speech length

129 words

Speech time

79 seconds

Online questions addressed south-to-south cooperation examples and the role of private companies in AI for social good

Explanation

As the online moderator, Agustina facilitated questions from remote participants about examples of south-to-south cooperation for open source AI sharing and how private technology companies can play a role in the DPI and AI for social good landscape.


Evidence

Questions about south-to-south cooperation examples and private companies’ role in AI for social good, with follow-up about whether digital ID and payments have been solved


Major discussion point

Questions and Clarifications on Open Source AI Definition


Topics

Development | Economic | Legal and regulatory



Judith Vega

Speech speed

161 words per minute

Speech length

2101 words

Speech time

778 seconds

The definition of ‘public’ in DPI means common and generally available rather than government-controlled

Explanation

When discussing Digital Public Infrastructure, the ‘P’ for public doesn’t necessarily mean public sector-driven, but rather refers to something that is common, ubiquitous, and generally available to the public. This applies to open source as well – meaning it’s accessible, free, and can be leveraged across various jurisdictions and sectors regardless of its source.


Evidence

Clarification that DPI protocols and tools should be commonly found, usable, and buildable upon by different actors across different sectors


Major discussion point

Questions and Clarifications on Open Source AI Definition


Topics

Infrastructure | Legal and regulatory | Development


Disagreed with

– Audience
– Melinda Claybaugh

Disagreed on

Definition and true openness of open source AI


Trustworthy decision-making requires cooperation between private and public sectors to ensure informed user protection

Explanation

Trustworthy AI decision-making is centrally important to regulators who have obligations to the public. Since many people don’t have the luxury of daily AI expertise, it becomes necessary to explain and communicate AI systems in simplified terms so users are protected through being well-informed and can make better decisions themselves.


Evidence

Reference to World Economic Forum’s work on governance and trust, and the need for joint private-public efforts to ensure user protection through informed decision-making


Major discussion point

Challenges in AI Explainability and Trust


Topics

Human rights | Legal and regulatory | Sociocultural


Agreements

Agreement points

Open source AI enables broader access and customization for local solutions

Speakers

– Melinda Claybaugh
– Judith Okonkwo
– Larry Wade

Arguments

Open source AI provides free access to cutting-edge technology and enables customization for local solutions


Open source AI drives experimentation and allows building without starting from scratch, particularly beneficial for resource-constrained environments


Open source protocols attract the best talent and avoid picking winners and losers in technology development


Summary

All three main panelists agreed that open source AI democratizes access to advanced technology, allows for local customization, and enables innovation without requiring developers to start from scratch. They emphasized how this approach benefits underserved communities and attracts talent.


Topics

Development | Infrastructure | Economic


Localized datasets and regional customization are essential for AI effectiveness

Speakers

– Melinda Claybaugh
– Judith Okonkwo

Arguments

Local developers need access to relevant datasets to make AI models useful for their communities and regions


Major barriers include skills gaps, capacity constraints, infrastructure limitations, and need for localized datasets


Summary

Both speakers emphasized that AI models need localized datasets and regional customization to be truly useful for specific communities, highlighting this as both an opportunity and a barrier to implementation.


Topics

Development | Infrastructure | Sociocultural


Public-private partnerships are crucial for successful AI implementation

Speakers

– Larry Wade
– Judith Okonkwo

Arguments

Successful AI implementation requires bringing regulators and governments along the journey through knowledge sharing and experimentation


Multi-stakeholder approaches involving government partnerships are essential for driving AI for social good initiatives


Summary

Both speakers strongly advocated for collaborative approaches between private companies and government entities, emphasizing the need for education, knowledge sharing, and joint initiatives to ensure responsible AI deployment.


Topics

Legal and regulatory | Development | Economic


Digital ID and payments have not been fully solved globally

Speakers

– Larry Wade
– Judith Okonkwo

Arguments

AI enables better pattern recognition and risk assessment, allowing services to previously underserved customer segments


Multi-stakeholder approaches involving government partnerships are essential for driving AI for social good initiatives


Summary

Both speakers agreed that despite technological advances, digital identity and payment systems still face significant challenges globally, particularly in serving underbanked populations and resource-constrained environments.


Topics

Development | Economic | Infrastructure


Similar viewpoints

Both speakers emphasized how open source AI particularly benefits underserved and resource-constrained communities by providing free access to advanced technology and enabling local innovation without requiring extensive initial investment.

Speakers

– Melinda Claybaugh
– Judith Okonkwo

Arguments

Open source AI provides free access to cutting-edge technology and enables customization for local solutions


Open source AI drives experimentation and allows building without starting from scratch, particularly beneficial for resource-constrained environments


Topics

Development | Infrastructure | Sociocultural


Both speakers emphasized the responsibility of private technology companies to actively engage with and educate government entities and regulators, rather than developing technologies in isolation from policy makers.

Speakers

– Larry Wade
– Judith Okonkwo

Arguments

Private companies have a responsibility to educate regulators about complex technologies they create rather than leaving them to figure it out alone


Multi-stakeholder approaches involving government partnerships are essential for driving AI for social good initiatives


Topics

Legal and regulatory | Development | Economic


Both speakers emphasized that trust in AI systems requires sustained, transparent relationships between private companies and regulators, with a focus on protecting and informing users through collaborative approaches.

Speakers

– Larry Wade
– Judith Vega

Arguments

Regular engagement and transparent relationships with regulators across different jurisdictions are crucial for trust-building


Trustworthy decision-making requires cooperation between private and public sectors to ensure informed user protection


Topics

Legal and regulatory | Human rights | Economic


Unexpected consensus

AI should serve as optimization rather than replacement for existing systems

Speakers

– Larry Wade
– Audience

Arguments

AI integration in payments focuses on optimization rather than pure reliance, maintaining existing controls while improving customer experience


There’s a fundamental conflict between AI’s pattern detection capabilities and the need for auditable, explainable payment systems


Explanation

Despite coming from different perspectives (industry vs. government), both the PayPal representative and the Norwegian Tax Administration representative agreed that AI should enhance rather than replace existing systems, particularly in areas requiring accountability and explainability. This consensus was unexpected given their different roles but shows shared concerns about AI reliability in critical systems.


Topics

Economic | Legal and regulatory | Cybersecurity


Private companies have educational responsibilities toward regulators

Speakers

– Larry Wade
– Judith Okonkwo

Arguments

Private companies have a responsibility to educate regulators about complex technologies they create rather than leaving them to figure it out alone


Multi-stakeholder approaches involving government partnerships are essential for driving AI for social good initiatives


Explanation

It was unexpected to see such strong consensus from private sector representatives about their responsibility to educate and collaborate with regulators, rather than viewing regulation as an obstacle. This suggests a mature understanding of the need for responsible innovation in AI.


Topics

Legal and regulatory | Development | Economic


Overall assessment

Summary

The speakers demonstrated remarkable consensus on key issues including the value of open source AI for democratizing access, the critical importance of public-private partnerships, the need for localized solutions, and the responsibility of private companies to engage constructively with regulators. There was also agreement that current DPI solutions are not yet fully adequate globally.


Consensus level

High level of consensus with significant implications for AI governance and DPI development. The agreement suggests a mature understanding among stakeholders about the need for collaborative, responsible approaches to AI implementation. This consensus could facilitate more effective policy development and technology deployment, particularly in addressing global digital divides and ensuring AI benefits reach underserved communities.


Differences

Different viewpoints

Definition and true openness of open source AI

Speakers

– Audience
– Melinda Claybaugh
– Judith Vega

Arguments

Concerns exist about whether AI is truly democratized when foundational models are still controlled by few actors


Open source AI provides free access to cutting-edge technology and enables customization for local solutions


The definition of ‘public’ in DPI means common and generally available rather than government-controlled


Summary

Audience members questioned whether open source AI is truly democratized when foundational models remain controlled by few actors, while Meta’s representative emphasized the benefits of their open approach and the moderator defended a broader definition of ‘open’ as commonly available rather than fully democratized.


Topics

Economic | Legal and regulatory | Development


Appropriate level of AI integration in government decision-making

Speakers

– Audience
– Larry Wade

Arguments

Government agencies face challenges using advanced AI for citizen-affecting decisions due to inability to explain results satisfactorily


AI integration in payments focuses on optimization rather than pure reliance, maintaining existing controls while improving customer experience


Summary

Government representatives expressed strong reservations about using AI for citizen-affecting decisions due to explainability concerns, while private sector representatives advocated for AI integration as an optimization layer, suggesting different risk tolerances between public and private sectors.


Topics

Legal and regulatory | Human rights | Cybersecurity


Unexpected differences

Technical definition of open source in AI context

Speakers

– Audience
– Judith Vega
– Melinda Claybaugh

Arguments

Open source AI encompasses different components: code, model weights, and datasets, each with varying levels of openness


The definition of ‘public’ in DPI means common and generally available rather than government-controlled


Open source AI provides free access to cutting-edge technology and enables customization for local solutions


Explanation

Unexpected technical disagreement emerged about what constitutes ‘open source’ in AI, with audience members with open source expertise challenging the panelists’ broader definitions. This revealed a gap between traditional open source community understanding and how AI companies define openness.


Topics

Infrastructure | Legal and regulatory | Economic


Overall assessment

Summary

The discussion revealed moderate disagreements primarily around definitions of openness, appropriate levels of AI integration in government, and specific mechanisms for public-private cooperation. Most fundamental disagreements centered on risk tolerance and accountability standards between public and private sectors.


Disagreement level

Moderate disagreement with significant implications for AI governance. The definitional disputes about ‘open source’ could impact policy development, while differing risk tolerances between sectors may slow adoption of AI in critical public services. However, broad consensus on the need for cooperation provides a foundation for progress.


Takeaways

Key takeaways

Open source AI is becoming foundational for Digital Public Infrastructure (DPI) development, providing free access to cutting-edge technology and enabling local customization without starting from scratch


AI integration in financial services serves as an optimization layer rather than replacement, enhancing customer onboarding, fraud prevention, and transaction velocity while maintaining existing controls


Successful AI implementation requires strong public-private partnerships where private companies actively educate regulators and governments rather than leaving them to figure out complex technologies alone


Major barriers to AI adoption include skills gaps, infrastructure limitations, need for localized datasets, and digital literacy challenges, particularly in resource-constrained environments


Open source approaches attract better talent, avoid picking technology winners and losers, and provide regulators with common starting points for developing sensible regulations


AI applications are already delivering real-world value through scientific discoveries, educational support in local languages, immersive learning environments, and enhanced payment security


The definition of ‘public’ in DPI means commonly available and accessible across jurisdictions rather than government-controlled, emphasizing interoperability and widespread adoption


Resolutions and action items

Participants agreed on the need for regular engagement cadences between private companies and regulators across different jurisdictions to build trust and share knowledge


Recognition that private technology companies should take responsibility for educating regulators about world-changing technologies they create


Consensus that government agencies should experiment with AI at lower levels to keep pace with private sector developments while maintaining citizen protection standards


Agreement that multi-stakeholder approaches involving government partnerships are essential for AI for social good initiatives


Unresolved issues

How to achieve truly explainable AI that can satisfy government requirements for citizen-affecting decisions while maintaining AI’s pattern recognition capabilities


Whether AI is genuinely democratized when foundational models remain controlled by few major actors despite open source availability


The fundamental conflict between AI’s inability to explain decision-making processes and the need for auditable, transparent systems in critical applications like payments and government services


How to effectively scale localized datasets and digital literacy programs across different regions and jurisdictions


The balance between open source protocols and necessary privacy protections in financial and personal data systems


Specific mechanisms for ensuring continuous knowledge sharing between private companies and policy makers across diverse regulatory environments


Suggested compromises

Using AI as an optimization layer alongside existing proven systems rather than complete replacement, maintaining traditional controls while enhancing performance


Implementing a hybrid approach where infrastructure layers remain open source while allowing some applications to be closed when privacy or security requires it


Starting with lower-risk AI experimentation in government agencies while building toward more advanced applications as explainability improves


Developing common nomenclature and starting points for AI regulation while allowing regional customization based on local priorities and values


Balancing open source benefits with necessary walled gardens through interoperability protocols that enable secure interaction when needed


Thought provoking comments

There’s a reason that there are so many unbanked or underbanked people in the world. A lot of that has to do with just the overall risk tolerance of institutions that are serving them, whether it’s their own policies, or again, restrictions placed on them from whatever kind of local regime from a regulatory perspective… Once you can use AI to solve that, with more localization and additional attributes, hey, here’s additional data that can actually de-risk this customer, it again opens up things.

Speaker

Larry Wade


Reason

This comment reframes AI not just as a technological advancement but as a tool for financial inclusion. It identifies the core problem (risk assessment limitations) and proposes AI as a solution to expand access to financial services for underserved populations.


Impact

This shifted the discussion from technical capabilities to social impact, establishing AI’s role in addressing systemic inequalities. It connected the technical discussion to real-world consequences and set up the framework for discussing public-private partnerships in solving societal challenges.


We need localized data sets. We need to be able to train models so that they’re relevant for us… Much the same sort of handicaps we have with immersive also exist with open source AI, particularly when it comes to the Internet.

Speaker

Judith Okonkwo


Reason

This comment exposed a critical gap in the open source AI narrative – that true democratization requires not just access to models, but relevant, localized data and infrastructure. It challenged the assumption that open source automatically equals equitable access.


Impact

This comment introduced crucial nuance to the discussion about AI democratization, leading other panelists to acknowledge the importance of local data and spurring discussion about how private companies can support localized AI development through partnerships and grants.


There’s a fundamental conflict in payment systems in that payment systems have to be accurate to the cent and they have to be auditable. Rules have to be followed. Well, AI is typically good at detecting interesting patterns, coming up with surprising answers, and being absolutely hopeless at explaining how it achieves them.

Speaker

Haidel Alvestram (Audience)


Reason

This comment identified a core tension between AI’s pattern recognition capabilities and the transparency requirements of financial systems. It challenged the panelists to address the ‘black box’ problem in high-stakes applications.


Impact

This question forced a more nuanced discussion about AI implementation, leading Larry Wade to clarify that AI should be used for optimization rather than replacement of existing systems. It introduced the concept of AI as a partnership tool rather than a standalone solution.


I think it’s irresponsible for private companies to create these world-changing technologies and not lean into educating those that have to regulate them… to say, hey, we’re going to create something that is complex, that can be disruptive, that can be beneficial. Here you go, you figure it out on your own. I think that’ll just cause more confusion, angst, just for everyone involved.

Speaker

Larry Wade


Reason

This comment addressed corporate responsibility in technology development and regulation, arguing that companies have an obligation to educate regulators about technologies they create. It challenged the traditional separation between innovation and regulation.


Impact

This comment elevated the discussion to questions of corporate ethics and responsibility, reinforcing the theme of public-private cooperation and establishing that successful AI integration requires active collaboration rather than passive compliance.


Even when we talk about open source and the possibilities of innovation that it allows for, it seems that the foundational models are still being controlled by few actors. It’s not really democratized… if the core foundational models are still controlled by a few actors, then how do you define open source AI?

Speaker

Marin (Audience)


Reason

This comment challenged the fundamental premise of the discussion by questioning whether ‘open source AI’ truly democratizes access when foundational models remain controlled by major tech companies. It exposed potential contradictions in the open source narrative.


Impact

This forced the panelists to more precisely define what they meant by ‘open source’ and ‘democratization,’ leading to important clarifications about the difference between ‘public’ as government-controlled versus ‘public’ as commonly accessible. It deepened the analytical rigor of the discussion.


We basically can’t really explain the results at a satisfactory level… to what degree do you view open source as helping us realize explainable AI? Open weights or open source code can provide trust on a formal level, but in my view, it does little in the way of actually explaining the results and the decisions at a level that’s understandable to the citizen.

Speaker

Knut Vatne (Norwegian Tax Administration)


Reason

This comment from a government official highlighted the practical challenges of implementing AI in public sector decision-making, distinguishing between technical transparency and citizen-understandable explanations.


Impact

This brought the discussion full circle to questions of public accountability and trust, forcing consideration of how technical solutions must ultimately serve democratic principles. It emphasized the gap between technical capabilities and public sector requirements.


Overall assessment

These key comments transformed what could have been a purely technical discussion about AI and DPI into a nuanced exploration of power, equity, and responsibility in technology deployment. The most impactful comments consistently challenged assumptions – about democratization, accessibility, and the relationship between technical capability and social benefit. They forced the panelists to move beyond promotional narratives to address fundamental tensions: between innovation and regulation, between technical transparency and public understanding, and between global solutions and local needs. The discussion evolved from describing what AI can do to grappling with how it should be deployed responsibly, ultimately emphasizing that successful AI integration requires not just technical solutions but sustained collaboration, education, and attention to equity and accountability.


Follow-up questions

Examples of south-to-south cooperation for open source AI sharing

Speaker

Agustina Callegari (relaying online question)


Explanation

This question seeks to understand how developing countries are collaborating on open source AI initiatives, which is important for understanding global cooperation patterns and knowledge sharing mechanisms outside of traditional North-South partnerships


How to ensure continuous sharing of knowledge with policy makers

Speaker

Agustina Callegari (relaying online question)


Explanation

This addresses the critical challenge of maintaining ongoing dialogue between technology companies and regulators, which is essential for effective governance of emerging technologies


How to mitigate the conflict between AI’s pattern detection capabilities and the need for accurate, auditable payment systems

Speaker

Haidel Alvestram


Explanation

This highlights a fundamental technical challenge in integrating AI into financial systems where transparency and explainability are regulatory requirements


Precise definition of ‘open source AI’ – whether it refers to code, model weights, or datasets

Speaker

Satish


Explanation

This definitional question is crucial for establishing common understanding and standards in discussions about open source AI development and deployment


How private technology companies should play a role in DPI and AI for social good landscape

Speaker

Agustina Callegari (relaying online question)


Explanation

This explores the appropriate boundaries and responsibilities of private sector involvement in public digital infrastructure, which is essential for effective public-private partnerships


To what degree open source helps realize explainable AI for government decision-making affecting citizens

Speaker

Knut Vatne (Norwegian Tax Administration)


Explanation

This addresses a critical governance challenge where public agencies need to explain AI-driven decisions to citizens while maintaining transparency and accountability standards


Need for localized datasets to train AI models for regional relevance

Speaker

Judith Okonkwo and Melinda Claybaugh


Explanation

This research area is essential for ensuring AI systems work effectively across different cultural, linguistic, and regional contexts, particularly in underserved markets


Skills and capacity gap for leveraging open source AI in developing regions

Speaker

Judith Okonkwo


Explanation

This identifies a critical barrier to AI adoption that requires targeted educational and training interventions to ensure equitable access to AI technologies


Infrastructure constraints for AI deployment, particularly internet connectivity

Speaker

Judith Okonkwo


Explanation

This highlights the foundational infrastructure requirements that must be addressed before AI technologies can be effectively deployed in many regions


Digital literacy requirements for citizen participation in AI governance decisions

Speaker

Judith Okonkwo


Explanation

This addresses the need for public education to enable meaningful citizen engagement in decisions about AI systems that affect their lives


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.