WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance

17 Dec 2024 08:30h - 09:30h

Session at a Glance

Summary

This panel discussion focused on the role of AI in enhancing trust and improving governance, particularly in the public sector. The panelists, representing Meta, OECD, and Oracle, explored how AI can reshape government services and build public trust. They emphasized the importance of open-source AI approaches to democratize access and foster innovation, especially in developing countries. The discussion highlighted the potential of AI to streamline government processes, from passport renewals to tax services, while also addressing concerns about data sovereignty and privacy.


The panelists stressed the need for harmonized global regulations to avoid fragmentation and ensure interoperability across jurisdictions. They discussed various regulatory approaches, including the EU AI Act and more principle-based frameworks in other regions. The importance of public-private partnerships was underscored, with examples of how governments can leverage private sector expertise and startup innovation to implement AI solutions effectively.


Key challenges addressed included building trust in AI technologies, ensuring data protection, and balancing innovation with regulation. The panelists shared examples of AI applications in healthcare, agriculture, and public safety, demonstrating the transformative potential of AI in improving public services. They also touched on the importance of education and transparency in AI adoption to build public trust.


The discussion concluded with an emphasis on the critical role of partnerships between governments, private sector companies, and startups in driving responsible AI innovation and implementation in the public sector. Overall, the panel highlighted the significant potential of AI to enhance government efficiency and public trust, while acknowledging the need for careful consideration of ethical and regulatory frameworks.


Key points

Major discussion points:


– The role of AI in improving government services and public trust


– The importance of open source AI and data sovereignty


– Regulatory approaches to AI, including the EU AI Act


– Public-private partnerships and startup involvement in AI innovation


– Challenges around data sharing and trust in AI implementation


Overall purpose:


The discussion aimed to explore how AI can be leveraged responsibly by governments to improve public services and build trust, while addressing challenges around regulation, data privacy, and partnerships.


Tone:


The tone was largely optimistic and solution-oriented, with panelists highlighting the potential benefits of AI for government services while acknowledging challenges. There was a collaborative spirit, with panelists building on each other’s points. The tone remained consistent throughout, maintaining a balance of enthusiasm for AI’s potential and pragmatism about implementation challenges.


Speakers

– Brandon Soloski: Center for Corporate Diplomacy at Meridian International


– Sarim Aziz: Director for Public Policy for South and Central Asia at Meta


– Lucia Russo: Economist and Policy Analyst at the OECD, focused on the digital economy and policy division


– Pellerin Matis: Vice President of Global Government Affairs at Oracle


Additional speakers:


– Anil Pura: Audience member from Nepal


Full session report

AI for Responsible Governance: Enhancing Trust and Improving Public Services


This panel discussion, moderated by Brandon Soloski of the Center for Corporate Diplomacy at Meridian International, featured representatives from Meta, OECD, and Oracle exploring the role of artificial intelligence (AI) in enhancing trust and improving governance, with a particular focus on the public sector. The panelists were Sarim Aziz, Director for Public Policy for South and Central Asia at Meta; Lucia Russo, Economist and Policy Analyst at the OECD; and Pellerin Matis, Vice President of Global Government Affairs at Oracle.


Setting the Context: Trust in Government and Technology


Brandon Soloski opened the discussion by referencing the Edelman Trust Barometer, which showed a significant trust deficit in government institutions globally, with 66% of global respondents believing that governments are purposely trying to mislead them. He also cited an IBM Institute for Business Value survey finding that government leaders often overestimate the public's trust in them, and that while the public remains worried about new technologies like AI, most people favor government adoption of generative AI.


Key Themes and Discussions


1. AI’s Potential to Enhance Government Efficiency and Services


The speakers unanimously agreed on AI’s significant potential to improve government efficiency and service delivery. Lucia Russo from the OECD emphasized AI’s capacity to enhance government responsiveness, while Pellerin Matis of Oracle provided concrete examples of AI applications:


– Healthcare: Improving delivery, hospital management, and patient care


– Citizen Services: More efficient passport renewal and document processing


– Agriculture: Enhanced monitoring and resource management


– Public Safety: Improved surveillance and emergency response systems


– Legal and Legislative Work: AI-assisted document processing and analysis


Sarim Aziz from Meta highlighted AI’s potential in streamlining government operations across multiple domains.


2. Building Trust in AI for Government Use


A central theme was the importance of building public trust in AI technologies for government applications. The speakers proposed several approaches:


– Sovereign AI and Cloud Infrastructure: Pellerin Matis advocated for dedicated cloud infrastructure to protect government data.


– Open Source AI: Sarim Aziz argued that open source AI increases transparency and accessibility, potentially fostering greater trust and ensuring global participation.


– Public-Private Partnerships: Lucia Russo stressed the importance of collaboration between governments and the private sector, citing Egypt as an example of successful partnership.


– Education and Transparency: Brandon Soloski highlighted the need for AI education and clear communication of benefits to increase public trust.


3. AI Regulation and Policy Approaches


The discussion touched on various regulatory approaches to AI:


– Global Harmonization: Pellerin Matis emphasized the need for harmonized global AI regulations to avoid fragmentation.


– Risk-Based Approaches: Lucia Russo advocated for risk-based and evidence-based regulatory approaches, mentioning the OECD’s work on AI governance principles.


– Principle-Based Regulation: Sarim Aziz noted that many Asia-Pacific countries are adopting principle-based rather than prescriptive AI regulations.


– G7 Hiroshima Process: The panel discussed the ongoing efforts to develop international AI governance frameworks.


4. Challenges in Government AI Adoption


Key challenges identified included:


– Legacy IT Systems: Pellerin Matis pointed out that outdated infrastructure and data silos hinder government AI adoption, citing Singapore’s efforts to overcome these challenges.


– Data Privacy and Security: Audience members raised concerns about data protection impacting trust in AI implementation.


– Digital Divide: The need to ensure equitable access to AI benefits across countries was highlighted as an unresolved issue.


5. The Evolving Nature of AI


Pellerin Matis provided perspective on the current AI landscape, noting the significant leap forward represented by technologies like ChatGPT:


“What’s new with ChatGPT and generative AI is not AI itself… What’s new is that it’s now accessible to everyone.”


6. Open Source AI and Global Accessibility


Sarim Aziz made a strong case for open source AI as a means to ensure global participation and accessibility:


“We need to fundamentally change the way, the path forward needs to be an open source one that has wide acceptance, that is accessible to all countries… to ensure that nobody gets left behind, to ensure that people in this part of the world and other parts of the world have a part in the conversation.”


Conclusion and Future Considerations


The panel discussion highlighted AI’s potential to enhance government efficiency and public trust while acknowledging the need for careful consideration of ethical and regulatory frameworks. The speakers emphasized the critical role of partnerships between governments, private sector companies, and startups in driving responsible AI innovation and implementation in the public sector.


Several areas for further exploration were identified:


1. Strategies for overcoming trust issues in data sharing between government and private sector


2. Balancing innovation with data privacy and security concerns in government AI adoption


3. Addressing the digital divide and ensuring equitable access to AI benefits across countries


4. Exploring AI applications in emerging and frontier markets


5. Ensuring interoperability across various AI regulatory frameworks


As governments continue to explore and implement AI solutions, addressing these challenges will be crucial for realizing the full potential of AI in improving public services and building trust in governance.


Session Transcript

Brandon Soloski: Okay, that’s interesting. I hear a little bit of a delay. Good idea. All right. Good afternoon, early afternoon, everyone. I’m not sure if folks in the room are able to hear me. Welcome, everyone. My name is Brandon Soloski. Welcome again to our session today on revitalizing trust, harnessing AI for responsible governance. Again, my name is Brandon Soloski. I am with the Center for Corporate Diplomacy at Meridian International. It’s a pleasure to be at the intersection that we are at right now. And I’m very fortunate to be joined by some distinguished panelists who will be with me today to talk about this pressing issue. To my left is Sarim Aziz, Director for Public Policy for South and Central Asia at Meta. Across from me is Lucia Russo, Economist and Policy Analyst at the OECD, focused on the digital economy and policy division. And across the way, we have Matis Pellerin, Vice President of Global Government Affairs at Oracle. Before we begin, we’ll provide some quick introductions to our work and our companies, just to give you a little bit of a flavor of where we’re coming from as we dive into the subject today. I’ll turn it over to Sarim.


Sarim Aziz: Thank you, Brandon. And thanks, everybody, for being here for this really important discussion. So yeah, my name is Sarim. I’ve been at Meta for over eight years. I actually did not start on the policy side; I’ve been on the technology side, working on AI and mobile applications for most of my career. I’ve only worked in tech. But increasingly, you know, we found that even though Meta has been working on AI for over 10 years, this conversation has definitely, you know, gone up to the next level. So I’m excited to be here and, you know, add to the discussion.


Brandon Soloski: Lucia?


Lucia Russo: Thank you, and good morning, good afternoon. It’s a pleasure to be here. Thank you for the invitation. I’m Lucia Russo, as it was said, at the OECD. At the OECD, we have a division that works on international AI governance. So it started years ago with the adoption of the OECD principles that are basically a guide for policymakers and stakeholders on how to foster trustworthy, innovative AI. And since then, we’ve been working to advance this work with our member states and beyond. We have also work that touches upon different sectors. And today we’ll talk about the public sector and, yeah, and then other domains. But I’ll stop here.


Pellerin Matis: Hi, good morning, everyone. It’s a pleasure to be with you today in Riyadh.


Pellerin Matis: Thank you very much for the invitation. So I’m Matis Pellerin. I’m the Global Vice President for Government Affairs at Oracle. I joined Oracle in 2019, so almost six years ago. My main job is to manage government affairs for Oracle outside the U.S. In that job, I work a lot with government officials to see how technologies can help them be more efficient and support public services around the world. For those who don’t really know Oracle: you probably know the brand, you know the logo, but you don’t know what we do. It’s very common. So Oracle is a cloud infrastructure and cloud application company. We provide technology for the private sector and for governments to manage their daily operations. It can go from HR and payroll to customer experience, and you will find our technologies in lots of sectors, including health care, e-government, financial services, and much more. So we have a very large portfolio, and we are very happy to join this discussion, because AI is something very important now and we are investing a lot in that field. In addition to our cloud infrastructure, AI technology is becoming much more important now.


Brandon Soloski: Thank you again, Sarim, Lucia, and Matis. Really excited to dig into our topic today, but before I go ahead and begin, there’s a couple of things I wanted to talk about in terms of trust. I work at the Center for Corporate Diplomacy at Meridian International. At the Center for Corporate Diplomacy, we are trying to provide the private sector with the experience, the tools, and the insights to navigate geopolitical issues, to understand matters related to trade, over-the-horizon policy matters that impact business. We do that by providing insights to our partners almost on a weekly basis, whether that be a visiting foreign minister or ambassador. The relationships that the private sector now has with the foreign diplomatic community, with governments, are now more important than ever. The private sector are the new diplomats. They are part of the diplomatic community, and this is a new age that we are in at Meridian that we often refer to as open diplomacy. And one of the reasons this is so pertinent right now is when it comes to trust. One of the things I wanted to highlight, and I don’t know if anyone follows the Edelman Trust Barometer Index. Edelman, a global public relations advisory firm, every year puts out a trust barometer index. They survey over 30 countries, thousands of participants all over the world. And what they found is quite relevant to this conversation as well. The private sector is now the most trusted institution in the world, followed by the nonprofit sector, followed by governments, followed by media. There have been times in my working life when I know that’s been completely reversed, when the private sector was not the most trusted institution. But we’ve seen quite an uptick over the past years, and that’s starting to ebb a little bit in terms of how high trust in the private sector is, but it’s still front and center. 
The private sector really is leading the way with diplomacy when talking about AI, when talking about governments, when talking about the possibilities that exist within this new infrastructure that we are now building out. So one of the things we’re going to talk about today, just on that topic of trust, there’s so much potential with AI, from potholes, navigating taxes, to getting your passport renewed, some of the most tedious things that we all deal with. The ability and the opportunity that AI presents is truly tremendous. But at the same time, I refer to that element of trust, 66% of global respondents right now actively believe that governments are purposely trying to mislead them. When you look at that stat right now in trust and governance, it is quite low, and there’s a lot to be done when it comes to AI and when it comes to this topic, and the possibilities are truly tremendous. So one of the things I wanted to start talking about was a survey that was done just recently conducted by the IBM Institute for Business Value, and they found that respondents believe government leaders are often overestimating public’s trust in them. They also found that while the public is still worried about new technologies like AI, most people are in favor of government adoption of generative AI. So I’d like to open this up a little bit to my panel. So how can AI reshape this frustrating process often linked to the distrust of government and mitigate these touch points to build faith towards ethical, fair, and trustworthy AI solutions? Okay.


Lucia Russo: Okay. Thank you. I can start with that. As I mentioned, we have at the OECD the Public Governance Directorate, which is doing tremendous work in this field. And I believe that, if used correctly, AI can indeed strengthen trust in the public sector. If you look at the components of government that influence citizens’ trust, these include, for instance, responsiveness and reliability. So where can AI improve those two components? If you look at reliability, as you were mentioning, there are a number of tasks that can be done with AI: for instance, enhancing the internal efficiency of processes, so speeding up routine processes and freeing up civil servants’ time for tasks that are more useful to citizens, and also improving the effectiveness of policymaking, for instance by understanding, through large amounts of data, what the user needs are. And when it comes to responsiveness, also being able to anticipate societal trends and user needs. There is a report that was recently issued called Governing with AI: Are We Ready?, and it has interesting statistics about how OECD countries have been using AI for the three key tasks that I just described. We found that 70% of OECD countries used AI to enhance efficiency in internal operations, 67% to improve responsiveness of services, but only 30% to enhance the effectiveness of public policy. So we see that this trend is ongoing, but of course it’s still not fully at scale. Here an important consideration is, of course, that the public sector also has a huge responsibility to implement AI in a way that is accountable, transparent, and ultimately trustworthy for citizens, and especially to minimize harms when it comes to sensitive areas like immigration law or law enforcement, or even welfare benefits or fraud prevention. 
So here I would recall, as I mentioned, the OECD principles, which define the key values that should be embedded in any development and deployment of AI. I mentioned some of them: transparency, accountability, fairness, and respect for privacy. And I’ll end with a final note on how the public sector should build the enablers, that is, the skills, infrastructure, and data needed for trustworthy innovation to actually flourish.


Brandon Soloski: Thank you so much. Matis?


Pellerin Matis: If I can add a comment, I fully agree that education is very important. If you want to promote trust in technology, especially in AI, you really need to make sure people understand what the technology is and what AI is, how it is built, and how the data is managed. That’s probably the first pillar of building trust. As a tech company, of course, our role is to support that, and we are working a lot with our customers to provide them digital trainings and specific sessions to help them understand how AI is used in our solutions, how AI is built, how we can fight bias in AI, and how you can manage your data and make sure it is safe. Because you don’t use an AI tool the same way if it’s a GPT or if it’s a government AI tool. It’s not the same; it doesn’t build on the same technology. Another angle is, of course, transparency and explaining how our AI solutions are built, which will improve confidence in this technology. However, I think education is the first layer, but it’s not the only one. There is also probably a more technical discussion to have about AI, and that’s why understanding the technology is important for going to the second layer, because if you have a more technical discussion, you need to make sure people really understand. So that brings me to the topic of sovereign AI. I think sovereign AI is becoming more and more important, especially for the private sector, because it ensures the data is secure and safe. If you’re a government or a private company, you’re not going to use the same AI technology as me or the people in the audience here, who connect to ChatGPT and use AI for their personal activities, or go to X, formerly Twitter, and use a new model which was just released last week. 
If you’re a private company or a government, you need to make sure that you are able to train AI models on infrastructure that is safe, and that your data is not going to be used by someone else, especially if you put in very confidential data. So for sovereign AI, at least as I define it, there are two things you need to check. First, what AI models are you using, and are you able to train the models with your own data? Actually, when you are a government, being able to train an AI solution using government data is super important. But you can only do that if you are able to get access to the models and train them with your own data. You cannot do that with ChatGPT, for instance. And sorry, Microsoft is not here; I’m just bashing ChatGPT, but I love ChatGPT, by the way. But I will not put confidential data from Oracle in ChatGPT, because Microsoft is my competitor. So I cannot use this model for work; I need to have my own. So being able to get access to LLMs, that is, large language models, and train them with your own data is super important. That’s what we try to do at Oracle. We have lots of customers that are involved in very critical operations; if you’re a nuclear plant or a health care company, you need to be able to get access to these LLMs and use your own data. So we work with OpenAI, we work with Cohere, et cetera, and we give our customers the ability to use these technologies with their own data. So that’s the first thing. The second thing is where your data is hosted and where your data is going. Because if you are a research institute or a university or an academic institution doing good research on a specific topic, maybe you don’t want your AI trainings to go to the US or to China. So that’s another point: where you’re going to put your AI data. 
And if you want to build sovereign AI, you need sovereign infrastructure. So what is in the back office of AI, so to speak? It’s cloud. It’s very simple: cloud technology is the first layer of AI. So you need to have a sovereign cloud which is going to host your data, to make sure your data is not going to leave the country and is going to stay in the country where you’re based. That’s very important, and it’s even more important for government. And just to finish on that, an example is what we are currently doing here in Saudi Arabia. Oracle is building cloud infrastructure in Saudi Arabia, and we already operate a few data centers in Jeddah, in Riyadh, and very soon in Neom. However, we know that government entities here in Saudi Arabia want the benefits of cloud and AI technology, but they don’t want to put all their data in a public cloud. They want a sovereign cloud. So what we are doing here in Saudi Arabia, in addition to a public cloud, is building a sovereign cloud with STC. STC is Saudi Telecom, a telecom company here in Saudi Arabia. And STC is building, with Oracle, a sovereign cloud where we are going to be able to train and host critical data from the Saudi government, and make sure that when they use AI technology, when they embed AI technology into public services in Saudi Arabia, they will be able to use government data and make sure it’s safe and not going back elsewhere. It’s not going back to the US, it’s not going back to the UAE; it’s based here in Saudi Arabia. That’s very important.


Brandon Soloski: Thank you so much. And there are more questions to follow up on that, related to some of the work here, as well as making sure data remains sovereign and that we have interoperability. So there is quite a lot on this subject, but I want to turn it over to Mr. Aziz very quickly.


Sarim Aziz: Thank you, Brandon, and thank you to my fellow panelists. As Lucia set the scene on the principles, and Matis talked about some of the considerations for deployment, I think it’s important to emphasize that AI is not a new thing, right? I think sometimes we forget that. It’s important to differentiate and reframe the discussion around why the trust deficit you mentioned is increasing. What is the difference between the AI that we were using five years ago and the AI of today? AI has been used in any computer system that helps analyze and perform functions on existing data; that’s been happening for a while. But what’s so exciting about this new age of AI, so to speak, is its ability to not just perform tasks on existing data, but to create new data. And it’s multimodal: it can take text, it can take images, it can take video and audio. So that’s the exciting part. And that speaks to what Matis was saying, that this technology is so important. We do believe at Meta that it has transformative potential, to the point that it’s so important that it shouldn’t be in the hands of a few, which is actually exacerbating the trust deficit. You can’t have a few big companies based in the United States be the only place we get this technology from, right? Especially in the developing world. So I think it’s really important to understand that the current model, especially as Matis highlighted, of closed proprietary systems owned by a few companies is just not going to get us there. So we need to fundamentally change the way, the path forward needs to be an open source one that has wide acceptance, that is accessible to all countries. 
And I think that’s why Meta, our CEO, wrote this letter about open source AI being the way forward: to ensure that nobody gets left behind, to ensure that people in this part of the world and other parts of the world have a part in the conversation. They can test the models, they can understand them, they can look under the hood and see how it’s done. They can take it and fine-tune it, as Matis said, to their local cultural context and languages. So I just want to be clear: I think that is going to be fundamental in terms of governments adopting and supporting the open innovation approach, to ensure they don’t get left behind and that they’re part of that conversation. I have lots more to say on that, but I just wanted to seed that idea.


Brandon Soloski: No, that’s an absolutely great point. And that brings me to my next question as well. We’re not quite there yet, but it’s not far on the horizon: one might want to ask AI about an evaluation one received, or think of the patient who was denied service as a result of AI, or other mix-ups that might happen, and the powerlessness one might feel as a result. So I would be very curious to follow up on that, and I would love for Matis and Lucia to comment as well. But with Meta, can you talk to me a little bit about how Meta is leveraging AI and working with government to improve public services and enhance trust in AI?


Sarim Aziz: Thanks, Brandon. So yeah, I think in our conversations with government, we do see with our open-source approach that we see amazing adoption with startups. They love open-source technology. I mean, Meta, again, as I said, they’re not new to open-source. If you are familiar with web technologies, Meta has done plenty of open-source work around that, around React and many other technologies. In AI itself, we have a thousand different libraries prior to these LLMs that we’ve open-sourced. So I think the main consideration with government is, one, trying to tell them that, you know, if you are already doing an open data approach, that an open AI approach, open innovation approach is going to be an extension of that, right? So the first is, like, are your data sets open in terms of, like, allowing the public sector and the startups and private sector that works with those data sets? I mean, it’s becoming ubiquitous in terms of data sets. Yes, like, you need to control where it’s at. You need to have full control over it, be able to customize it. But I think it’s just, like, really about democratizing the access to that, to the data sets, but also, like, the models. And it’s about, you know, telling them that there’s this conception that, well, you know, open-source is not safe or secure. And that’s actually absolutely not true. In fact, the cybersecurity industry will tell you, including the DoD, that it’s not helpful in the cybersecurity space when signals and data are not shared. In fact, you have to share with third parties to ensure that you’re able to respond to the threats and bad actors. So from our perspective, it’s educating governments around the fact that open-source AI can accelerate innovation. It can increase access within public sector. and the fact that you control your destiny. 
It gives you flexibility in where you want to deploy it: whether you want to do it in the cloud or on-premise, how much data you want to fine-tune on, and what you want to use RAG, retrieval-augmented generation, for. And it increases accountability. There has been this idea that you need to go with a proprietary approach in order to hold people accountable; actually, governments can have more control and customization with an open-source approach. And so that’s been the discussion, and a lot of it has been about being able to prototype. We have plenty of great examples from France, where parliamentarians actually use our Llama model to make legal documents and legislation simpler for other agencies to understand. So they use Llama already; it’s deployed. There are plenty of great examples in health care as well, with Mayo Clinic, one of the largest medical nonprofits, which is using it for diagnostics in radiation oncology. Huge potential there. For education in the public sector, we’ve seen places in Africa where Fundamate is using WhatsApp as a study assistant. So there are amazing things you can do with that. And so I think there’s an opportunity for more public-private partnership, for governments to see what the private sector has done. As you mentioned, they’re already pushing the boundaries. With the support of the government, I think we could do amazing work in the public sector. That’s been our focus at Meta.
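The RAG (retrieval-augmented generation) approach mentioned above can be sketched minimally. The snippets, query, and bag-of-words scoring below are illustrative assumptions, not any system discussed in the session; a real deployment would use vector embeddings and pass the retrieved context to a locally hosted open model (for example, a fine-tuned Llama) for the generation step.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# DOCUMENTS, the query, and the scoring scheme are hypothetical.
import math
import re
from collections import Counter

# Hypothetical government knowledge snippets, one per service area.
DOCUMENTS = {
    "passports": "Passport renewal applications are processed within ten working days.",
    "tax": "Income below the declared threshold is exempt from filing a tax return.",
    "benefits": "Social benefits are recalculated when declared revenue changes.",
}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query: str, doc: str) -> float:
    """Cosine similarity between bag-of-words term counts."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Pick the best-matching document to ground the model's answer."""
    return max(DOCUMENTS, key=lambda key: score(query, DOCUMENTS[key]))

# The retrieved snippet would be prepended to the prompt sent to the
# language model, so the authoritative data stays under the deployer's
# control rather than being baked into the model weights.
context = DOCUMENTS[retrieve("Do I need to file a tax return?")]
```

The design point this illustrates is the one made in the discussion: the knowledge the model answers from lives in a data set you control and can host where you choose, while the model only sees the retrieved context at query time.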


Brandon Soloski: Thank you. Matis?


Pellerin Matis: Yeah, I mean, AI is a top priority for governments, as you said. But we need to be realistic, because unfortunately, governments are still lagging behind the private sector in terms of AI adoption. Lots has been done in the private sector, but most governments are still running on very old technology. If we look at what they are doing, lots of governments have technology from the 90s or the year 2000. These systems are not very user-friendly, they are very expensive to keep operational, and they are not even very secure. So there is lots of work to do, but I think there is a good understanding now from world leaders and government officials that they need to modernize their public services and public administration, to bring the best tools into the country to support economic growth, to support better jobs, and also to improve the quality of public services. So lots of governments right now are making huge investments to bring in these new technologies; cloud and AI are the top two priorities. One of the big differences between the private sector and government is that government is sitting on a huge amount of data. I mean, it’s a gold mine; a government has plenty of data. And usually, they don’t really know how to use this data, because the ministries usually work in silos. The health ministry has its own data; it doesn’t connect with the finance ministry or with home affairs. It means they are not talking to each other, and they are not able to really leverage the power of AI. So the first thing they need to do is to connect this data, and then to use new technology like AI to really analyze the data and make decisions which are based on facts. And it gives insights to the politicians. 
It gives insights to the various heads of administration about what decisions they should take, through this analysis of big data and the data analytics they can use. I'm convinced that AI technology is really going to improve the quality of public services. As Lucia said just before, there has been a change in how AI technology can be used. AI is not new; for a very long time, we have been using AI to manage non-complex operations with very low value added. But now with GenAI, there is a switch in how the technology can be used, because GenAI can manage very complex requests and give you personalized answers, which is very valuable for a government. If you're a government, it means you can use GenAI to automate a lot of the tasks that were previously done by your civil servants because they were complex. Now you can make them autonomous, or at least reduce the time needed to manage and operate them on a daily basis. So AI will for sure make government more efficient. For instance, you can use AI to manage the relationship with your citizens. Instead of having to send an email to a public administration to ask a question. I don't know if some people in the audience have already tried to send a request to their tax authority, for instance, to find out whether you're subject to a regulation or whether you need to declare some revenue. It may take a long time to get an answer from the tax authorities. If you can embed an AI chatbot that is connected to your tax regulation, the dataset of your tax regulation, and also connected to the revenue declared by your employer to the finance ministry, the chatbot can give you the answer to your request in a few seconds. So you went from two months to a few seconds, with the same exact answer. Faster service.
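The tax-chatbot flow described here amounts to joining two data sources: the regulation dataset and the revenue declared to the finance ministry. A hypothetical sketch (all citizen IDs, figures, and the threshold rule are invented for illustration) might look like this:

```python
# Hypothetical sketch of the tax chatbot: join the citizen's declared
# revenue (finance-ministry data) with the applicable rule (tax-regulation
# data) to answer a filing question instantly instead of in months.

# Invented sample data for illustration only.
declared_revenue = {"citizen-123": 18_000, "citizen-456": 55_000}
filing_threshold = 25_000  # hypothetical rule from the regulation dataset

def answer_filing_question(citizen_id: str) -> str:
    """Answer 'do I need to file?' from already-declared data."""
    revenue = declared_revenue[citizen_id]
    if revenue < filing_threshold:
        return f"Your declared revenue ({revenue}) is below {filing_threshold}: no filing required."
    return f"Your declared revenue ({revenue}) is at or above {filing_threshold}: filing required."

print(answer_filing_question("citizen-123"))
print(answer_filing_question("citizen-456"))
```

In a real deployment the rule lookup would come from the regulation corpus via retrieval rather than a hard-coded threshold, but the speed-up comes from the same place: the answer is computed from data the government already holds.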
The second thing is better optimizing public expenditure. Through AI tools, you can detect tax fraud, and you can better calculate social benefits. In Europe, for instance, we are working with a lot of governments to use AI to make sure social benefits are correctly calculated, and it can save billions of euros every year, because in lots of countries social benefits are not very well calculated. It's not optimized, because the social ministry is not talking with the other ministries and doesn't really know how much revenue you have. So it gives you some money that, in the end, you were not supposed to get. So there are plenty of use cases. At Oracle, what we try to do is make AI easy to adopt, by embedding AI technology directly into our own applications so that it's easy to use and easy to implement when you're a government. It also applies to the private sector, by the way. Another important point about AI is that you need to use the right data. If you don't use the right data when you train your models, the answers are probably not going to be very good. Going back to my first example about ChatGPT: ChatGPT is very good if you ask it to draft some content, a keynote, or a briefing document, because it's based on a lot of public data available on the internet. However, if I ask ChatGPT for a specific answer about a health care situation or a tax regulation, it probably will not give me a very relevant answer. So contextualization of data is very important. For a government, it means you need to bring in specific datasets from your own domain to train your model and make sure it gives relevant answers to your citizens. You mentioned passport renewal just before.
And I think it's a very good example, because how can we use AI for passport renewal? Well, it's quite simple. You can have a solution on the government's website: an AI chatbot connected to various government databases that helps you prepare your passport application. Usually, when you do a passport application, you need to gather lots of different documents: a birth certificate, proof of address, your former documents, and so on. This AI technology can gather all these documents for you by connecting to the various ministries and datasets. It can automatically generate the form you need to prepare, and it can give you the next available meeting in the agenda. And when you arrive for the meeting, the civil officer reviewing your application will have a much easier job, because the AI will have avoided the usual human errors: the application will be correctly filled in, the documents will be correct, and none will be missing, because the AI gathered them all automatically. In the end, it also improves how the civil officer works, because he won't waste time telling you to come back with missing documents. So that's a small example of how we can use AI and why it generates real benefits in terms of productivity, efficiency, and cost savings for the government. But just to finish on that: AI for the public sector is growing, but it's still very new, and governments are still a bit cautious about using it. Adoption is clearly accelerating, though, and we now see lots of use cases that are already live and generating real benefits for citizens and governments.
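The passport workflow described above is essentially a document-gathering pipeline across ministries. As a minimal sketch (the registries, citizen IDs, and document names are all invented for illustration, not any real government schema), the assistant collects what each ministry already holds and reports what the citizen must still supply:

```python
# Hypothetical sketch of the passport-renewal assistant: gather the
# required documents from each ministry's registry and report anything
# still missing before the appointment is booked.

REQUIRED = ["birth_certificate", "proof_of_address", "previous_passport"]

# Invented registries standing in for per-ministry databases.
registries = {
    "civil_registry": {"citizen-123": {"birth_certificate": "doc-bc-123"}},
    "housing_registry": {"citizen-123": {"proof_of_address": "doc-pa-123"}},
}

def gather_documents(citizen_id: str) -> tuple[dict, list]:
    """Collect required documents across registries; list the missing ones."""
    found = {}
    for registry in registries.values():
        found.update(registry.get(citizen_id, {}))
    missing = [doc for doc in REQUIRED if doc not in found]
    return found, missing

found, missing = gather_documents("citizen-123")
print(found)    # documents located automatically across ministries
print(missing)  # documents the citizen must still provide
```

The efficiency gain the speaker describes comes from this pre-validation step: the civil officer only ever sees applications where `missing` is empty.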


Brandon Soloski: Thank you so much. My apologies for the coughing attack I seem to be going through right at the moment; I should have brought a little water on stage. One of the things I wanted to talk about, and you were just mentioning this, is the interoperability aspect, given the proliferation this past year of new regulations, policies, and laws attempting to regulate AI, as countries and even regions position themselves for this new sector. It's now been a full year since the EU announced the world's first major AI regulation, the EU AI Act, and I've been following this closely. I'm quite intrigued to hear your thoughts: as governments around the world draw on the EU's regulatory approach to AI while shaping their own AI policies, what lessons might they want to take into consideration? Any thoughts or observations on these new laws or regulations?


Lucia Russo: Maybe I'll go first. Yes, you're totally right, we are seeing many policies and regulations emerging. And of course, the EU AI Act is, one may say, the pioneering regulatory approach, in that it establishes comprehensive, overarching legislation across sectors that aims at regulating AI systems entering the EU market. Like the EU, we are seeing regulatory frameworks emerging, for instance, in Canada and Brazil, that also follow a similar risk-based or impact-based approach, though these proposals are still being discussed before parliaments. On the other hand, we also see different approaches, such as those taken by the US, which you mentioned, but also the United Kingdom or Israel, where instead of a cross-sectoral approach, principles are defined first and regulations are then defined more at the sector level. That is clearly the approach the UK and Israel have taken so far. And in the US, we have seen the executive order, which has some components on risk management, safety, and critical infrastructure, but still relies mostly on standards and voluntary commitments. So this space is evolving quite fast. What concerns the OECD most, being an international organization working on consensus building and facilitating interactions across jurisdictions, is that this can lead to regulatory fragmentation, which in turn leads to higher compliance costs for enterprises operating across borders. So our mandate is really to establish interoperability across these various regulatory frameworks. We do that at the very basic level, for instance, with the definition of an AI system, which has in fact been adopted by the EU AI Act, by the Convention of the Council of Europe, and also by the NIST framework. Having the same definition allows these frameworks to talk to each other, because they talk about the same thing.
But also, we are mapping risk management frameworks to establish their commonalities, and so, through responsible business conduct, allowing companies to see what compliance mechanisms they need in order to trade across borders. Perhaps I'll mention three things countries should look at when they look at the EU AI Act. Of course, it's the prerogative of countries to establish frameworks according to their technological ecosystems, their priorities, and their societal values. But I think the key elements from the EU AI Act would really be the importance of creating regulatory frameworks that are risk-based, according to the level of risk of the systems, and so proportionate in terms of requirements; accountability for deployers and developers; and establishing robust testing and certification systems across the life cycle. And perhaps just to conclude on the risk-based approach: it should also be evidence-based, and that's why at the OECD we also built an incident reporting framework, called the AIM. The purpose is really to see where risks actually materialize, because we talk a lot about risk in the abstract, but where does it actually cause the most harm? On that basis, regulation should be able to adapt alongside technological innovation.


Sarim Aziz: Thank you. Just to add on to what Lucia said, from an Asia-Pacific perspective: it was exactly a year ago, at the last IGF in Japan, that the G7 Hiroshima process was announced, which is actually consistent with a lot of the OECD principles. What we've seen is that most countries in Asia-Pacific are not following the EU model. They have followed more of the G7 and OECD principle-based approach, because I think they all understand this is new technology. It's evolving so quickly that by the time you regulate it, it will already have evolved. And there are great examples, including the UK example, where there is a need for harmonization and for a network of AI safety institutes around the world to assess risks. That's been a great initiative. Because of that collaboration, the UK AI Safety Institute was able to launch, almost a year ago, something called Inspect, which is basically an open-source software library that assesses models for risks like cyber, bio, and other kinds of safety risks. So there's lots of great work going on. It's still early, but I do see collaboration as the key here, not necessarily regulating something that's still evolving.


Pellerin Matis: Thank you. Maybe just to come in quickly on two points. The first is harmonization, which is very important for the private sector. Without going into the details of AI testing for the private sector, it's very important to have harmonization, and we should not see various different frameworks defined everywhere: one in Europe, one in Asia, one in South America. In South America right now, there is a lot of work in Brazil and a few other countries on AI, and they are all wondering what they should do. For us, it would be very complicated to have fragmented regulation around the world about how we use AI. So that's the first point, and I really think governments and officials working on this should try to harmonize the rules. The second point is innovation and adoption; we talked about adoption at the beginning of the panel. We should be careful not to reduce trust in these technologies. These regulations are great, and I'm not saying they are a bad thing, but in global public opinion there is sometimes a misunderstanding about this technology, and it's not helping adoption, because people think it might be dangerous or that their data is not safe. Sometimes these regulatory and policy discussions generate mistrust of the technology. In the EU, it's not only about AI: if you look at cloud and all the debates around data sovereignty, unfortunately they have drastically slowed down cloud adoption, because companies and governments are worried the cloud might be a risk to their data.
Yet we know that from a technical perspective it's usually very safe to go to the cloud, because cloud companies are cyber experts and put billions of dollars every year into securing their infrastructure. So usually, when you're in the cloud, your data is safer. But there is a misunderstanding about this, and in public opinion there is a sort of worry about data sovereignty, so adoption is very slow. I was in Singapore a few days ago, and going through customs, I was super impressed by their ability to use AI in the airport. At customs, you no longer need to take out your passport; they recognize you automatically with facial recognition. When you arrive at the boarding gate, you don't need a boarding pass, because they have embedded AI facial recognition into the process. People just go through the boarding gate, it recognizes them, it knows you're in seat 03B, and that's it, you can get on the plane. You will never see that in Europe because of GDPR, because of all the rules; it's not possible. So we need to find a compromise between data privacy and innovation, because innovation is important, and it's also through these new technologies and innovations that we can make government more efficient and easier for people.


Brandon Soloski: That's a great point. And ironically, it is very likely a European company, Idemia, that is handling a lot of what you were just talking about; but you're absolutely spot on about GDPR. One of the other things I wanted to talk about, and we've started on this already, is partnerships. You mentioned some of the large companies and the influence they have, but I'd love to get your thoughts on the role partnerships with the private sector are going to play, including startups. How is this going to evolve beyond just the big companies? I'll kick it over to you, Aziz, as I know you started talking about this already.


Sarim Aziz: Thanks, Brandon. I want to make sure others can chime in, but using Singapore as a good example: even a government as innovative as Singapore is so partly because it realizes the value of the private sector and the startup community. That's where governments can really tap into local talent, entrepreneurs, and startups who have already picked up this technology and are doing great things with it. One proof of this is an AI accelerator we ran across 13 countries in Asia-Pacific, everything from Bangladesh and Nepal all the way to Australia and New Zealand. We were blown away, and this is just the power of open source, by how these startups and nonprofits were using our technology. One of the blessings and challenges of open source is that you don't know how it's being used, and it can be used in incredible ways. It was only because we ran this competition that we found out that, for example, NetSafe in New Zealand, an organization that takes care of online harms and safety, is using our model to streamline the complaints it gets from the community about content. They're empowered by the government to send information to digital platforms, not just Meta but others. It was amazing to see those uses of AI in every sector: health care, manufacturing in Japan. We ran this regional experiment locally, with local competitions in these countries, and we brought in the local governments to say, come and see what your own local startups are doing with this technology. They're doing it in the sectors you care about: health care, manufacturing. They're doing it in Taiwan.
There was a company there that was able to use AI on blueprints to identify building code violations and whether designs comply with local laws and regulations. So it's incredible stuff; things we couldn't have thought of were being done. We engaged over 23 different government agencies across the Asia-Pacific region to show them what happens when you work with the private sector. It can be foreign big tech companies, but it can also be your local talent, who are already using all the tools available to them. That's the power of the cloud: your local talent can use whatever tools make sense for them, whether that's Oracle Cloud, Amazon, or Microsoft. And again, the power of open source is that you're not locked in. With open source, you can take your data wherever you want. You want to put it in Oracle? Great. If tomorrow you get a better deal with Microsoft, go there. It should be whatever makes sense for you, and it gives you that control and flexibility.


Lucia Russo: Maybe I'll just bring in a perspective from Egypt. We've been working with Egypt on analyzing their AI strategy, and they have a very nice example of public-private partnership: they built an applied innovation center that works as a tripartite model. You have the Ministry of Innovation, then the domain ministry, which could be health, agriculture, or the judicial system, and then the private sector. The idea is that the domain ministry comes in with a need, and the Ministry of Innovation helps gather the technological solution together with private companies that help develop and scale it. This has proved very effective, for instance, in developing health solutions like diagnosing diabetic retinopathy, or speech-to-text recognition for the judicial system. So there is the benefit of having the private sector as providers, and also of knowledge transfer, in settings where technological innovation may be lagging because of the ecosystem itself.


Pellerin Matis: I think government can really learn from the private sector, because lots of technologies and solutions that have already been implemented in the private sector can easily be replicated in government. To take the Oracle example: what we are doing for private companies to run their HR, payroll, and procurement, many of those applications can easily be implemented in a Ministry of Finance to run the public procurement system, public contracts, or the payment of civil servants. So there are a lot of applications governments can use to really leverage the power of cloud and AI. To give you an example in health care: health care is a very important topic for Oracle. We bought Cerner a few years ago, a big electronic medical records company, and since then we have made huge investments to modernize the health care sector, because we are convinced there is a lot to do. One of the main challenges in health care right now is that the data is fragmented. There are lots of actors and stakeholders in the health care space, from health agencies to public and private hospitals to private insurers, and usually their data is not connected. So what we are doing right now is building an ecosystem solution that gives governments the ability to connect all these stakeholders together and have global visibility at a national, population level, using AI to give government officials a better understanding of the national situation. We call this a data intelligence platform for health care, and it's already implemented in a few countries. Using AI, the platform provides tools to identify and detect diseases, for instance, or to predict patients' needs in a specific region, or even a specific city.
That's something we did during COVID, and we saw it was working very well. There is huge demand from governments for this type of dashboard, which helps them reduce health care costs and also improve patient outcomes. The second level is a bit lower: how we can modernize hospitals and help health professionals, like doctors, improve their quality of work and make hospitals more efficient. A few weeks ago, we released a new electronic health record which, to put it simply, is a hospital management system: software that manages doctors' appointments, drug prescriptions, the number of beds you have, everything in the hospital. Now we are embedding AI technology to automate the tasks health professionals currently have to do themselves, like drafting a report, putting the next meeting in the agenda, or writing a drug prescription; it all takes time. We are embedding voice recognition in our systems, so doctors can just record the consultation, and at the end the AI generates everything: the report draft, the next meeting in the agenda, the prescription, and so on. We are able to reduce the time practitioners and health professionals spend in front of their computers instead of talking to the patient. That's very important, and it's already live: in Saudi Arabia, the UAE, and Qatar, we are implementing these solutions in a lot of hospitals, and we see important improvements in how patients experience health care in these countries. AI can also be used in the judicial sector, to analyze evidence, to schedule cases, to predict the potential outcome of legal cases; there are lots of ways to use it. Agriculture is very important too.
We have some good cases in Africa, and even in the Philippines, where we use agriculture solutions to help governments monitor crops and the climate, so they can anticipate climate change or problems with the crops. And there is public safety, which is maybe the use case people know best: if you are a police authority or an emergency authority, you can use AI for emergency response, for video screening, and so on. So there are lots of use cases.


Brandon Soloski: Fascinating subject. We could go on for quite some time, and I have more questions about emerging and frontier markets and how AI could be applied there. I would love to continue the conversation, but we are at the bottom of the hour, so let me end on that optimistic note around partnerships. So much can get done in that space; if one could have a favorite Sustainable Development Goal, number 17, partnerships, would be mine. It has been amazing to talk about this with all of you today. Thank you again, Matisse, for joining us; Lucia, for joining us from the OECD; and Mr. Sarim Aziz, for joining us from Meta. It's really been a pleasure to have this conversation today and to understand the role the private sector plays in this space, its leadership, and how it builds trust with the public sector. Truly a fascinating subject, and it was a pleasure to join you all today. I'll be around, and I know Aziz, Lucia, and Matisse will also be around; we'd love to take some questions at the end, though I think we might be out of time. Yeah, thank you very much, it's my pleasure to be here.


Audience: My name is Anil Pura, and I'm from Nepal. In terms of the implementation of AI, there are a lot of challenges, but one of the most prevailing is the issue of trust in data sharing between government and public-private partnerships. How can that be overcome, and are there any good examples you'd like to share with us? Thank you.


Pellerin Matis: Well, quickly, about trust in data management for governments: we mentioned building sovereign infrastructure a little earlier. A few examples, close to here: we work with the government of Oman, for instance. We have built sovereign infrastructure based in Oman, because Oracle was not operating any cloud infrastructure there, but the government wanted to use our technology to modernize their government and public services and to use AI. So what we have done is build a cloud for them, a dedicated infrastructure under the control of the Omani government, with their own security, their own standards and certification, and so on. So there are solutions, as you say. For me, the cloud infrastructure layer is probably one of the most important things to check when you want to really protect your data. After that, we can also go into the protection of the datasets themselves, anonymization, and so on, but that's another aspect, and, I would say, a much easier one.


Sarim Aziz: At the risk of contradicting Matisse: yes, that's one option. But I think the answer is open source, where you're not locked in and you control your data. Llama, our model, is actually available on Oracle's cloud infrastructure, so if you want to host it there, you can. But if things are too sensitive for the government of Nepal and you'd like to host it on your own infrastructure, you're welcome to do that. You can also do both; it can be a hybrid. You're not locked into one proprietary system. I think open source is the answer that gives you maximum control and maximum sovereignty, whether it's cloud or on-prem, and you control your data, no one else. So open source is a solution for governments to look at. In fact, many governments are using it; they don't have to tell us that they are. And at some point, especially now, the next generation of these things isn't going to run only on cloud, servers, and computers. We're seeing edge devices: there are more mobile devices and sensors in the world, in places that may not even have good connections, and you need AI to run on those edge devices. Open-source models are now getting so small that you can actually deploy them on your phone or on small edge devices. So lots of interesting use cases could come out of that.
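Whether a model fits on an edge device is, to a first approximation, a matter of parameter count times bytes per weight, which is why quantization makes on-device deployment possible. A back-of-the-envelope sketch, with all figures illustrative rather than measurements of any particular model:

```python
# Rough sizing check for the "small models on edge devices" point:
# memory footprint is roughly parameters * bytes-per-weight, plus some
# runtime overhead, so the quantization level decides whether a model
# fits on-device. All numbers here are illustrative assumptions.

def footprint_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Approximate RAM (GB) needed to load the weights, with runtime overhead."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

def fits_on_device(params_billions: float, bits_per_weight: int, device_ram_gb: float) -> bool:
    """Can a model of this size, at this quantization, run on this device?"""
    return footprint_gb(params_billions, bits_per_weight) <= device_ram_gb

# An 8B-parameter model on a phone with 8 GB of RAM:
print(round(footprint_gb(8, 16), 1))  # 16-bit weights: far too big
print(fits_on_device(8, 4, 8))        # 4-bit quantized: fits
```

The same arithmetic explains the hybrid option the speaker mentions: the full-precision model stays in the cloud or on-prem, while a heavily quantized variant runs on phones and sensors at the edge.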


Brandon Soloski: All right. Well, I know we're at the bottom of the hour, and our time has come to a conclusion, but thank you again for the great question. I'm sure our panelists would love to stick around and field a few more questions if anyone else in the audience would like to speak with us. Again, thank you for joining today, to everyone online and everyone in the room; truly a pleasure. Such a fascinating topic. Matisse, Lucia, Aziz, thank you again.


L

Lucia Russo

Speech speed

122 words per minute

Speech length

1367 words

Speech time

668 seconds

AI can enhance government efficiency and responsiveness

Explanation

AI can improve internal efficiency of government processes and free up civil servants for more valuable tasks. It can also enhance responsiveness by anticipating societal trends and user needs.


Evidence

70% of OECD countries used AI to enhance efficiency in internal operations, 67% to improve responsiveness of services


Major Discussion Point

Building Trust in AI for Government Use


Agreed with

Pellerin Matis


Sarim Aziz


Agreed on

AI can enhance government efficiency and service delivery


Public-private partnerships drive AI innovation in government

Explanation

Partnerships between government and private sector can effectively develop and scale AI solutions for public services. This model allows for knowledge transfer and leveraging private sector expertise.


Evidence

Example of Egypt’s applied innovation center with tripartite model involving government ministries and private companies


Major Discussion Point

Building Trust in AI for Government Use


Risk-based and evidence-based regulatory approaches are important

Explanation

AI regulations should be based on the level of risk posed by AI systems and should be proportionate in terms of requirements. Evidence of actual harms should inform regulatory approaches.


Evidence

OECD’s AIM incident reporting framework to identify where AI risks materialize


Major Discussion Point

AI Regulation and Policy Approaches


P

Pellerin Matis

Speech speed

157 words per minute

Speech length

3996 words

Speech time

1525 seconds

Sovereign AI and cloud infrastructure protect government data

Explanation

Sovereign AI ensures government data is secure and safe. It allows governments to train AI models on their own data without sharing it with external parties.


Evidence

Oracle building sovereign cloud with STC in Saudi Arabia for government data


Major Discussion Point

Building Trust in AI for Government Use


Agreed with

Lucia Russo


Sarim Aziz


Brandon Soloski


Agreed on

Need for trust-building measures in AI adoption


Differed with

Sarim Aziz


Differed on

Approach to data protection and sovereignty


Need for harmonized global AI regulations to avoid fragmentation

Explanation

Harmonized global AI regulations matter to the private sector, which otherwise must navigate different frameworks in different regions. Fragmented regulations can slow down AI adoption and innovation.


Evidence

Example of slow cloud adoption in the EU due to data sovereignty concerns


Major Discussion Point

AI Regulation and Policy Approaches


AI can improve healthcare delivery and hospital management

Explanation

AI can help connect fragmented healthcare data and provide insights at a national level. It can also automate tasks for healthcare professionals, improving efficiency in hospitals.


Evidence

Oracle’s data intelligence platform for healthcare and AI-embedded electronic health record system


Major Discussion Point

AI Applications for Government Services


Agreed with

Lucia Russo


Sarim Aziz


Agreed on

AI can enhance government efficiency and service delivery


AI enables more efficient passport renewal and citizen services

Explanation

AI-powered chatbots can streamline passport renewal processes by automatically gathering required documents and generating forms. This can significantly reduce processing time and improve efficiency.


Evidence

Example of AI-assisted passport renewal process


Major Discussion Point

AI Applications for Government Services


Agreed with

Lucia Russo


Sarim Aziz


Agreed on

AI can enhance government efficiency and service delivery


AI enhances agricultural monitoring and public safety

Explanation

AI can be used in agriculture to monitor crops and climate, helping governments anticipate issues. In public safety, AI can be used for emergency response and video screening.


Evidence

Examples from Africa and the Philippines for agriculture, and general use cases in public safety


Major Discussion Point

AI Applications for Government Services


Agreed with

Lucia Russo


Sarim Aziz


Agreed on

AI can enhance government efficiency and service delivery


Legacy IT systems and data silos hinder government AI adoption

Explanation

Many governments are still running on outdated technology from the 90s or 2000s. These legacy systems and data silos make it difficult to implement and leverage AI effectively.


Major Discussion Point

Challenges in Government AI Adoption


Sarim Aziz

Speech speed

183 words per minute

Speech length

2192 words

Speech time

718 seconds

Open source AI increases transparency and accessibility

Explanation

Open source AI allows governments to have more control and customization over AI systems. It enables them to test models, understand how they work, and fine-tune them to local contexts.


Evidence

Examples of open source AI used in France to simplify legal documents and at the Mayo Clinic for radiation oncology diagnostics


Major Discussion Point

Building Trust in AI for Government Use


Agreed with

Lucia Russo


Pellerin Matis


Brandon Soloski


Agreed on

Need for trust-building measures in AI adoption


Differed with

Pellerin Matis


Differed on

Approach to data protection and sovereignty


Many Asia-Pacific countries adopting principle-based rather than prescriptive AI regulations

Explanation

Countries in Asia-Pacific are following a principle-based approach to AI regulation, in line with the G7 and OECD. This allows for flexibility as the technology evolves rapidly.


Evidence

G7 Hiroshima AI Process announcement at the IGF in Japan


Major Discussion Point

AI Regulation and Policy Approaches


AI assists with legal document processing and legislative work

Explanation

AI can be used to simplify legal documents and legislation, making them easier for other agencies to understand. This improves efficiency in government operations.


Evidence

Example of the Llama model being used by French parliamentarians


Major Discussion Point

AI Applications for Government Services


Agreed with

Lucia Russo


Pellerin Matis


Agreed on

AI can enhance government efficiency and service delivery


Brandon Soloski

Speech speed

166 words per minute

Speech length

1903 words

Speech time

686 seconds

Need for AI education and explaining benefits to increase public trust

Explanation

Educating the public about AI and its benefits is crucial for building trust. Many people are still worried about new technologies like AI, but most are in favor of government adoption of generative AI.


Evidence

Survey by the IBM Institute for Business Value showing public support for government AI adoption despite concerns


Major Discussion Point

Challenges in Government AI Adoption


Agreed with

Lucia Russo


Pellerin Matis


Sarim Aziz


Agreed on

Need for trust-building measures in AI adoption


Audience

Speech speed

125 words per minute

Speech length

63 words

Speech time

30 seconds

Concerns about data privacy and security impact AI trust

Explanation

One of the prevailing challenges in AI implementation is trust, particularly around data sharing within public-private partnerships. Overcoming this challenge is crucial for AI adoption.


Major Discussion Point

Challenges in Government AI Adoption


Agreements

Agreement Points

AI can enhance government efficiency and service delivery

speakers

Lucia Russo


Pellerin Matis


Sarim Aziz


arguments

AI can enhance government efficiency and responsiveness


AI can improve healthcare delivery and hospital management


AI enables more efficient passport renewal and citizen services


AI enhances agricultural monitoring and public safety


AI assists with legal document processing and legislative work


summary

All speakers agreed that AI has the potential to significantly improve government operations and services across various sectors, including healthcare, citizen services, agriculture, and legal processes.


Need for trust-building measures in AI adoption

speakers

Lucia Russo


Pellerin Matis


Sarim Aziz


Brandon Soloski


arguments

Sovereign AI and cloud infrastructure protect government data


Open source AI increases transparency and accessibility


Need for AI education and explaining benefits to increase public trust


summary

Speakers emphasized the importance of building trust in AI through measures such as sovereign infrastructure, open-source approaches, and public education about AI benefits and safeguards.


Similar Viewpoints

Both speakers advocated for flexible, principle-based approaches to AI regulation that can adapt to rapidly evolving technology, rather than rigid, prescriptive rules.

speakers

Lucia Russo


Sarim Aziz


arguments

Risk-based and evidence-based regulatory approaches are important


Many Asia-Pacific countries adopting principle-based rather than prescriptive AI regulations


Unexpected Consensus

Importance of public-private partnerships in AI innovation

speakers

Lucia Russo


Sarim Aziz


Pellerin Matis


arguments

Public-private partnerships drive AI innovation in government


Open source AI increases transparency and accessibility


AI can improve healthcare delivery and hospital management


explanation

Despite representing different sectors (international organization, tech company, and cloud infrastructure provider), all speakers unexpectedly agreed on the crucial role of collaboration between government and private sector in driving AI innovation and implementation in public services.


Overall Assessment

Summary

The main areas of agreement included the potential of AI to enhance government efficiency and service delivery, the need for trust-building measures in AI adoption, and the importance of flexible regulatory approaches. There was also unexpected consensus on the value of public-private partnerships in driving AI innovation in government.


Consensus level

The level of consensus among the speakers was relatively high, particularly on the benefits and potential applications of AI in government. This strong agreement implies a shared vision for the future of AI in public services, which could facilitate more coordinated efforts in AI development and implementation across different sectors and regions. However, some differences in approach (e.g., sovereign infrastructure vs. open-source) suggest that while the goals are aligned, the methods to achieve them may vary.


Differences

Different Viewpoints

Approach to data protection and sovereignty

speakers

Pellerin Matis


Sarim Aziz


arguments

Sovereign AI and cloud infrastructure protect government data


Open source AI increases transparency and accessibility


summary

Pellerin Matis advocates for sovereign AI and dedicated cloud infrastructure to protect government data, while Sarim Aziz argues that open source AI provides better control and transparency for governments.


Unexpected Differences

None identified

Overall Assessment

summary

The main areas of disagreement revolve around data protection strategies and regulatory approaches for AI.


difference_level

The level of disagreement among the speakers is moderate. While there are some differences in approach, particularly regarding data protection and regulatory strategies, the speakers generally agree on the potential benefits of AI for government services and the need for responsible implementation. These differences reflect the complexity of balancing innovation, security, and regulation in AI adoption for government use.


Partial Agreements

All speakers agree on the need for AI regulation, but differ on the specific approach. Lucia Russo emphasizes risk-based and evidence-based approaches, Pellerin Matis advocates for global harmonization, while Sarim Aziz highlights the principle-based approach adopted by many Asia-Pacific countries.

speakers

Lucia Russo


Pellerin Matis


Sarim Aziz


arguments

Risk-based and evidence-based regulatory approaches are important


Need for harmonized global AI regulations to avoid fragmentation


Many Asia-Pacific countries adopting principle-based rather than prescriptive AI regulations



Takeaways

Key Takeaways

AI has significant potential to improve government efficiency and services, but adoption lags behind the private sector


Building public trust is crucial for successful government AI implementation


Open source and sovereign AI approaches can help address data privacy/security concerns


Public-private partnerships and engagement with local startups are important for driving AI innovation in government


There is a need for harmonized global AI regulations to avoid fragmentation


Risk-based and evidence-based regulatory approaches are recommended for AI governance


Resolutions and Action Items

None identified


Unresolved Issues

How to overcome trust issues in data sharing between government and private sector


Balancing innovation with data privacy/security concerns in government AI adoption


Addressing the digital divide and ensuring equitable access to AI benefits across countries


Suggested Compromises

Using hybrid approaches that combine sovereign infrastructure with open source AI models to balance control and flexibility


Adopting principle-based AI regulations rather than overly prescriptive rules to allow for innovation


Thought Provoking Comments

AI is not a new thing, right? I think it’s important to differentiate and reframe the discussion around why the trust issue you mentioned is increasing. What is the difference between the AI we were using five years ago and the AI today? AI has been used in any computer system that helps analyze and perform functions on existing data; that’s been happening for a while. But what’s so exciting about this new age of AI, so to speak, is its ability not just to perform tasks on existing data, but to create new data.

speaker

Sarim Aziz


reason

This comment reframes the discussion by highlighting that AI isn’t new, but its current capabilities are what’s driving increased interest and trust concerns. It provides important context for understanding the current AI landscape.


impact

This comment shifted the conversation to focus more specifically on the unique aspects of current AI technology, particularly its ability to generate new data. It set the stage for a more nuanced discussion of AI’s potential and challenges.


We need to fundamentally change the way forward: the path forward needs to be an open source one that has wide acceptance and is accessible to all countries. I think that’s why our CEO at Meta wrote this letter about open source AI being the way forward, to ensure that nobody gets left behind, and to ensure that people in this part of the world and other parts of the world have a part in the conversation.

speaker

Sarim Aziz


reason

This comment introduces the idea of open source AI as a solution to ensure global accessibility and participation in AI development. It challenges the notion that AI should be controlled by a few large companies.


impact

This comment sparked discussion about different approaches to AI development and deployment, particularly contrasting open source models with proprietary systems. It led to further exploration of how different approaches might impact trust, innovation, and global participation in AI.


AI will for sure make government more efficient. For instance, you can use AI to manage the relationship with your citizens. Some people in the audience may already have tried to send a request to their tax authority, for instance, to find out whether they are subject to a regulation or need to declare certain revenue; it may take too long to get an answer from the tax authorities by email. If you’re able to embed an AI chatbot that is connected to your tax regulation, meaning the dataset of your tax regulation, and also connected to the revenue data declared by your employer to the finance ministry, the chatbot can give you the answer to your request in a few seconds.

speaker

Pellerin Matis


reason

This comment provides a concrete, relatable example of how AI can improve government efficiency and citizen services. It helps illustrate the practical benefits of AI in governance.


impact

This comment grounded the discussion in practical applications, moving from theoretical benefits to specific use cases. It led to further discussion of various ways AI could be applied in different government sectors.


We should be careful not to reduce trust in these technologies. These regulations are great, and I’m not saying they are a bad thing. But in global public opinion there is sometimes some misunderstanding about this technology, and it’s not helping adoption, because people think it might be dangerous or that their data are not safe. Sometimes these regulatory and policy discussions generate some mistrust about the technology.

speaker

Pellerin Matis


reason

This comment highlights the potential unintended consequences of regulation and policy discussions, suggesting they might inadvertently reduce trust in AI technologies. It introduces a complex dynamic between regulation, public perception, and technology adoption.


impact

This comment shifted the discussion towards the challenges of balancing regulation with innovation and adoption. It led to a more nuanced conversation about how to approach AI governance without stifling progress or eroding public trust.


Overall Assessment

These key comments shaped the discussion by moving it from general principles to specific applications and challenges of AI in governance. They introduced important tensions between open source and proprietary models, between regulation and innovation, and between theoretical potential and practical implementation. The discussion evolved to consider not just the benefits of AI in governance, but also the complex dynamics of public trust, global accessibility, and the potential unintended consequences of regulatory approaches. This resulted in a more nuanced and multifaceted exploration of the topic, considering both opportunities and challenges in the use of AI for responsible governance.


Follow-up Questions

How can AI be applied in emerging and frontier markets?

speaker

Brandon Soloski


explanation

This was mentioned as a topic the speaker wanted to explore further but didn’t have time for, indicating its importance in understanding the global impact of AI.


How can governments overcome trust issues in data sharing for public-private partnerships in AI implementation?

speaker

Anil Pura (audience member)


explanation

This was raised as a prevailing challenge in AI implementation, particularly for countries like Nepal, highlighting the need for strategies to build trust in data sharing.


How can countries ensure interoperability across various AI regulatory frameworks?

speaker

Lucia Russo


explanation

This was mentioned as a key concern for the OECD, as regulatory fragmentation can lead to higher compliance costs for enterprises operating across borders.


How can governments balance innovation and adoption of AI technologies with concerns about data privacy and security?

speaker

Pellerin Matis


explanation

This was raised as a crucial consideration, noting that overly strict regulations might slow down AI adoption and innovation.


How can open-source AI models be leveraged to ensure data sovereignty and control for governments?

speaker

Sarim Aziz


explanation

This was suggested as a potential solution to data trust issues, allowing governments more control and flexibility in AI implementation.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.