Welfare for All: Ensuring Equitable AI in the World's Democracies


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened with a warning that, without intervention, most of AI’s economic value could become concentrated in Western corporations and China, with estimates that up to 70% may reside there, and argued that this outcome is not inevitable: AI’s benefits can be democratized through intentional design and international collaboration [2-4][6-7].


Lee explained that the newly released International AI Safety Report highlights progress in evaluation but stresses a gap that can be narrowed by expanding standards such as ISO 42001 and by accelerating work through NIST drafts and pre-standards initiatives like the Hiroshima AI Process, while also allowing cultural and linguistic customization [33-41][44-46]. Sachin added that simply copying regulations across markets often fails, citing Google’s IndicGenBench, which supports 29 Indian languages, as an example of needed localization, and emphasized the necessity of continuous auditing to prevent drift as AI models evolve [48-56].


Building on this, participants described co-creation models in which developers and governments act as enablers rather than barriers, with Google promoting open-source frameworks, the Secure AI Framework (SAIF), tools such as SynthID, and the Coalition for Secure AI (CoSAI) to support capacity building and workforce upskilling [61-78]. Amit highlighted that excessive regulation can stifle innovation and that his company balances carrots (recognition for patents, papers, and speaking engagements) with budget allocations for training to boost productivity, reporting an increase in utilization from 73% to 83% [131-133][218-233]. Microsoft’s Amanda detailed the “Microsoft Elevate” initiative, which aims to upskill 20 million Indians by 2030 through partnerships with schools, vocational institutes, and government ministries, and stressed a holistic approach that includes infrastructure, multilingual AI, local deployment, and diffusion measurement [105-126].


The discussion then turned to trust and security, with Brad noting a U.S. fintech survey showing that less than 20% of the public trusts AI and raising concerns about prompt-injection attacks, especially in low-resource languages [241-244]. Amit explained that attackers can exploit unsupported languages to jailbreak models, and that expanding the ML Commons jailbreak benchmark to include Indic and Asian languages is a step toward mitigating such threats [250-257]. Sachin argued that AI-driven defensive agents can reverse the traditional “defender’s dilemma” by automating routine security work, thereby giving defenders an aggregate advantage over attackers [263-272].


Lee concluded that global cooperation across government, academia, industry, and civil society is essential, and that standardizing data formats and licensing will enable the regional customization needed for AI to support the UN Sustainable Development Goals [283-294]. Sachin reflected that India’s focus is on grassroots AI impact for farmers, schools, and hospitals, while Amit positioned the country as a front office for AI rather than a back office [307-313]. Amanda observed that the week’s sessions integrated governance with impact discussions, emphasizing multilingual AI, partnership, and recent Indian legislation on AI-generated content as signs of mature, responsible deployment [334-342].


The panel closed on an optimistic note, asserting that despite digital-divide challenges, collaborative AI solutions and continued public-private partnerships can address both technical and societal risks [409-414].


Keypoints


Major discussion points


International collaboration and adaptable standards are essential to prevent AI value concentration in a few Western or Chinese entities. Brad frames the risk of a 70% concentration of AI’s economic value in those regions and stresses the need for intentional design and global cooperation [2-4][6-8]. Lee highlights the role of ISO and NIST in drafting standards while warning that standards must be customizable for different languages and cultures [33-41][44-46]. Sachin adds that simply copying regulations across borders often fails, underscoring the need to localize standards for diverse markets [48-52].


Public-private capacity-building and upskilling programs are critical to bridge the AI skills gap, especially in developing economies. Amanda describes Microsoft’s “Elevate” initiative, its multi-year commitment to train millions of Indians and its partnership with schools and ministries [105-124][125-126]. Amit explains L&T’s three-pronged approach: collaborating with colleges, upskilling current staff while they remain billable, and incentivising personal research and patent work [149-158][165-174]. Sachin stresses continuous auditing of AI models because a one-time certification cannot keep pace with rapid model evolution [56-57].


A tension exists between global regulation/standards and the need to foster innovation; a co-creation, adaptive approach is advocated. Brad asks whether setting global standards may hinder innovation [79-87]. Lee warns that moving too quickly to regulation can outpace technological change, suggesting a bottom-up, science-first evaluation framework before deciding on rules [89-96]. Sachin argues that global standards should be a flexible “creative tension” that adapts to local constraints such as bandwidth and linguistic diversity [81-87]. Amit echoes the concern that over-regulation can stifle innovation and calls for careful, targeted rules [130-133].


Security, trust, and AI-specific cyber risks (e.g., prompt-injection) require immediate, scalable defenses, including multilingual robustness. Brad notes the public’s low trust in AI-driven financial services and asks for priority actions against threats like prompt injection [237-244]. Amit points out that models weak in low-resource languages become attack vectors, and he cites work on a multilingual jailbreak benchmark to harden systems [250-258]. Sachin describes the development of self-defending AI agents that act like an immune system, aiming to give defenders an aggregate advantage over attackers [263-272].


Localization (multilingual AI, culturally aware data standards, and open data frameworks) is vital for equitable AI deployment. The IndicGenBench example cited by Sachin demonstrates the need for language-specific evaluation tools [53-55]. Amit stresses that poor support for Indic languages can enable prompt-injection attacks, reinforcing the push for multilingual capabilities [250-258]. Lee calls for voluntary data-exchange foundations and standardized data licenses to reduce friction in cross-regional collaborations [286-294].


Overall purpose / goal of the discussion


The panel convened to explore how the global AI ecosystem can be democratized: preventing concentration of economic value, establishing inclusive standards, building a skilled workforce, ensuring security and trust, and tailoring AI to diverse cultural and linguistic contexts through coordinated public-private and international effort.


Overall tone


The conversation begins with a measured, forward-looking tone emphasizing collaboration and optimism about shaping AI’s future. As the dialogue progresses, it becomes more technical and urgent, addressing concrete challenges such as regulatory trade-offs, skills shortages, and security threats. By the closing remarks, the tone shifts to reflective optimism, acknowledging the rapid pace of change while expressing confidence that coordinated action can deliver equitable, trustworthy AI outcomes.


Speakers

Amit Chadha – Managing Director and CEO, L&T Technology Services – expertise in AI engineering, technology services, and industry leadership. [S1]


Sachin Kakkar – India Site Development, Privacy, Safety and Security, Google – expertise in AI privacy, safety, security, and localization for Indian markets. [S4]


Amanda Craig Deckard – Senior Director, Office of Responsible AI, Microsoft – expertise in responsible AI policy, AI governance, skilling initiatives, and digital inclusion.


Brad Staples – Panel moderator/host – expertise in AI policy discussion facilitation and moderation. [S6]


Lee Tiedrich – Inaugural AI Multidisciplinary Initiative Fellow, University of Maryland; Senior Advisor on the International AI Safety Report – expertise in AI safety standards, international collaboration, and evaluation frameworks.


Julian Waits – Chief Experience Officer, Rapid7 – expertise in cybersecurity, AI security, and AI-driven threat mitigation.


Audience – Various participants (e.g., Yuv from Senegal, Professor Charu from the Indian Institute of Public Administration, Dr. Nazar) – expertise not specified. [S13][S14][S15]


Additional speakers:


Steve – Briefly addressed by Sachin Kakkar (“Thanks, Steve”); role and expertise not identified in the transcript.


Full session report: Comprehensive analysis and detailed insights

1. Opening framing (Brad Staples) – Brad warned that, if current trends continue, roughly 70% of AI’s economic value could become concentrated in Western corporations and China [2-4]. He emphasized that this outcome is not inevitable; democratising AI will require intentional design, international collaboration, and coordinated action across research, workforce development, private-sector partnerships, and robust safety and security measures [6-8].


2. International standards & evaluation (Lee Tiedrich) – Lee presented the second International AI Safety Report, noting progress in evaluation techniques but a persistent gap [33-36]. Lee highlighted ISO 42001 as an early standard and described the NIST “zero draft” that will feed into future ISO work [38-41]. Pre-standards initiatives such as the Hiroshima AI Process were cited as venues for cross-regional stakeholder cooperation [42-46].


3. Localisation & continuous compliance (Sachin Kakkar) – Sachin argued that transplanting regulations across markets often fails, underscoring the need for localisation. He showcased Google’s IndicGenBench, which supports 29 Indian languages, 12 scripts, and four language families for fine-tuning and assessing large language models [52-55]. He warned that one-off audits are insufficient for evolving models and advocated continuous scanning pipelines to prevent temporal drift; a minimal sketch of such a loop follows [56-57].
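To make the continuous-auditing point concrete, the sketch below shows one way such a pipeline could be structured: a fixed evaluation suite re-run on a schedule, with an alert when the pass rate drifts below a threshold. This is an illustrative sketch only; the function names, thresholds, and prompt suite are hypothetical and do not represent Google's tooling or the IndicGenBench API.

```python
# Illustrative sketch of a continuous audit loop for a deployed model.
# All names here (query_model, load_prompt_suite, thresholds) are hypothetical.
import statistics

ALERT_THRESHOLD = 0.95      # minimum acceptable pass rate before flagging drift
AUDIT_INTERVAL_HOURS = 24   # a scheduler (e.g. cron) would re-run the suite this often

def query_model(prompt: str) -> str:
    """Placeholder for a call to the deployed model endpoint."""
    return "stub reply"

def load_prompt_suite() -> list[dict]:
    """Evaluation cases: a prompt, its language tag, and a pass/fail checker."""
    return [
        {"prompt": "...", "language": "hi", "passes": lambda reply: len(reply) > 0},
        {"prompt": "...", "language": "ta", "passes": lambda reply: len(reply) > 0},
        # ... one entry per language / capability being audited
    ]

def run_audit(suite: list[dict]) -> float:
    """Re-run the whole suite and return the fraction of cases that still pass."""
    results = [case["passes"](query_model(case["prompt"])) for case in suite]
    return statistics.mean(results)

if __name__ == "__main__":
    pass_rate = run_audit(load_prompt_suite())
    if pass_rate < ALERT_THRESHOLD:
        print(f"Possible temporal drift: pass rate {pass_rate:.2%}")
    else:
        print(f"Audit passed: {pass_rate:.2%}")
```

The point of the loop, as opposed to a one-time certification, is that the same suite is re-scored against whatever the model has become, so drift shows up as a falling pass rate rather than going unnoticed.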


4. Co-creation model (Sachin Kakkar) – Building on localisation, Sachin described Google’s open-source Secure AI Framework (SAIF) and tools such as SynthID, a watermarking technique that flags AI-generated content [63-66][74-76]. He also outlined the Coalition for Secure AI (CoSAI), an industry partnership expanding across APAC, and stressed capacity building through threat-intelligence sharing and workforce upskilling [69-78].


5. Global standards vs. regulation trade-off – Lee argued that regulation should follow robust, evidence-based evaluation frameworks, warning that regulators often lag behind rapid technological change [89-95]. Amit cautioned that excessive regulation can stifle innovation and must be applied judiciously [130-133]. Sachin framed the tension as a “creative tension”: global standards should be adapted to local constraints such as bandwidth and linguistic diversity, turning potential hurdles into co-creation opportunities [81-87].


6. Skills-gap & public-private upskilling


Microsoft (Amanda Craig Deckard) – The “Elevate” programme aims to upskill 20 million Indians by 2030, combining cloud-compute access, AI tools, and partnerships with schools, vocational institutes, and government ministries. A dedicated “Elevate for Educators” track trains teachers at scale. The effort sits within a five-pillar strategy: hard infrastructure, AI compute capacity, multilingual AI, local deployment, and systematic diffusion measurement [108-118][119-126].


L&T Technology Services (Amit Chadha) – L&T pursues a three-pronged approach: (i) collaborating with colleges to keep final-year curricula contextual to industry needs [149-152]; (ii) upskilling current employees while they remain billable, integrating training into project work [156-164]; and (iii) incentivising personal research time, which raised patent filings from 50 to 200 per year and increased the share of staff contributing beyond billable hours from 19% to 52%, while utilization rose from 73% to 83% [165-174][230-233].


Rapid7 (Julian Waits) – Julian noted that Rapid7 relies on talent from abroad to maintain its competitive edge and highlighted that AI can eliminate roughly 60% of the routine tasks humans currently perform in security [300-304][366-368].


7. Carrot-vs-stick discussion – Brad asked how much of workforce upskilling is driven by carrots versus sticks. Amanda described a mix of tactics, such as weekly tips and internal hackathons, to encourage adoption [89-95]. Amit said the stick is no longer viable and described appealing to individual recognition (patents, papers, speaking slots) alongside training and headcount budgets for segment heads [130-133].


8. Trust, security & multilingual vulnerabilities – Brad cited a YouGov survey showing fewer than 20% of Americans trust AI in financial services [241-244]. Amit explained that models weak in low-resource languages become attack vectors; attackers can jailbreak systems by exploiting unsupported languages such as Tamil [250-252]. To counter this, industry partners contributed to an expanded ML Commons jailbreak benchmark that now includes Indic and other Asian languages [255-257].
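The language-dependent weakness described above can be made concrete with a simple per-language comparison of refusal rates on the same adversarial prompts. The sketch below is a generic illustration, not the ML Commons benchmark itself; query_model, the refusal heuristic, and the prompt placeholders are all hypothetical.

```python
# Hypothetical per-language robustness check: compare how often a model refuses
# the same adversarial prompts in a high-resource vs. a low-resource language.
# query_model, the refusal markers, and the prompts are placeholders.

def query_model(prompt: str) -> str:
    """Placeholder for a call to the system under test."""
    return "I cannot help with that."

# Crude illustration only; real checks need language-specific refusal detection.
REFUSAL_MARKERS = ["i can't", "i cannot", "i won't"]

def is_refusal(reply: str) -> bool:
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)

def refusal_rates(prompts_by_language: dict[str, list[str]]) -> dict[str, float]:
    rates = {}
    for lang, prompts in prompts_by_language.items():
        refusals = sum(is_refusal(query_model(p)) for p in prompts)
        rates[lang] = refusals / len(prompts)
    return rates

if __name__ == "__main__":
    # The same (translated) adversarial prompts in English and Tamil.
    prompts = {
        "en": ["<adversarial prompt 1>", "<adversarial prompt 2>"],
        "ta": ["<same prompt 1, in Tamil>", "<same prompt 2, in Tamil>"],
    }
    rates = refusal_rates(prompts)
    # A large gap (e.g. rates["en"] much higher than rates["ta"]) suggests the
    # low-resource language is an easier path around the safety system.
    print(rates)
```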


9. AI-driven defensive agents (Sachin Kakkar) – Sachin described emerging AI-driven defensive agents that act like an immune system. He argued that AI can reverse the traditional “defender’s dilemma”: because roughly 80% of defenders’ time goes to routine drudgery that AI can automate, defenders gain an aggregate advantage [263-272].
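The aggregate-advantage argument hinges on automating the routine share of defensive work. The toy triage loop below illustrates the idea: low-risk alerts are auto-closed, clear-cut ones trigger a pre-approved response, and only the ambiguous remainder reaches a human analyst. The alert fields, thresholds, and scores are invented for illustration and do not describe any vendor's product.

```python
# Toy illustration of the "automate the drudgery" argument: routine alerts are
# handled automatically and only the ambiguous ones reach a human analyst.
# The alert fields, thresholds, and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    indicator: str
    risk_score: float  # 0.0 (benign) .. 1.0 (critical), from an upstream model

AUTO_CLOSE_BELOW = 0.2    # routine noise, close automatically
AUTO_CONTAIN_ABOVE = 0.9  # clearly malicious, take a pre-approved action

def triage(alerts: list[Alert]) -> dict[str, list[Alert]]:
    queues: dict[str, list[Alert]] = {"auto_closed": [], "auto_contained": [], "human_review": []}
    for alert in alerts:
        if alert.risk_score < AUTO_CLOSE_BELOW:
            queues["auto_closed"].append(alert)
        elif alert.risk_score > AUTO_CONTAIN_ABOVE:
            queues["auto_contained"].append(alert)   # e.g. isolate a host
        else:
            queues["human_review"].append(alert)     # judgement still needed
    return queues

if __name__ == "__main__":
    alerts = [
        Alert("edr", "known-benign hash", 0.05),
        Alert("waf", "sql injection attempt", 0.95),
        Alert("idp", "unusual login location", 0.50),
    ]
    for queue, items in triage(alerts).items():
        print(queue, len(items))
```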


10. Data-exchange & licensing (Lee Tiedrich) – Lee called for voluntary data-exchange foundations, standard agreements, and Creative-Commons-like licences for data to lower friction in cross-border collaborations [280-285].


11. Closing reflections – Lee stressed that global cooperation across government, academia, industry, and civil society remains vital for mitigating AI risks and achieving the UN Sustainable Development Goals [283-286]. Sachin highlighted India’s focus on grassroots AI impact for farmers, schools, and hospitals, while Amit described India’s shift from a “back-office” to a “front-office” role in AI [307-313]. Julian warned that the industry’s rapid pace could render today’s skills obsolete within five years, yet reiterated that AI can automate a large share of security tasks while still requiring human judgement [300-304][366-368]. Amanda reiterated that the week’s discussions genuinely integrated governance with impact, citing India’s new law on marking AI-generated content as evidence of mature, responsible deployment [334-342].


12. Audience Q&A – Rita Soni raised digital-divide concerns, referencing the Digital Empowerment Foundation, prompting discussion of Microsoft’s infrastructure and diffusion work [360-376]. An audience member warned of “information arbitrage” between AI creators and broader society, echoing fears that exponential AI growth could outpace up-skilling and exacerbate power polarisation [387-393]. Lee responded by advocating AI literacy, problem-solving skills, and lifelong learning as remedies [387-393].


13. Consensus pillars – The panel converged on four pillars: (1) globally coordinated yet locally adaptable AI standards; (2) evidence-based evaluation before regulation; (3) large-scale public-private capacity-building programmes to close the skills gap; and (4) multilingual AI as both an inclusion and security imperative [6-8][33-41][108-126][263-272]. Points of disagreement centred on the timing and extent of regulation (Amit warned against over-regulation while Lee urged robust technical evaluation first) and on the magnitude of imminent job displacement, with Julian optimistic about AI-assisted roles and an audience member fearing far greater displacement [89-95][130-133][300-304][360-376].


14. Action items – Expand ISO 42001 and NIST drafts to cover cultural variations; extend multilingual benchmarks; scale Microsoft Elevate; reinforce L&T’s incentive-based upskilling; develop self-defending AI agents; implement continuous audit pipelines; and establish voluntary data-sharing frameworks [38-41][250-257][119-126][165-174][263-272][280-285].


Session transcript: Complete transcript of the session
Brad Staples

by corporations, by innovators to secure that outcome. And if current trends continue, the majority of AI’s economic value risks being centered in the hands of countries and corporations in the Western economies and in China. And some estimates suggest that 70% of the value could be created and reside in those locations. And I think it’s for us in this context to think a bit about why we don’t need to accept that outcome. It’s by no means an inevitability. And to democratize the impact of AI, it requires intentional design, it takes international collaboration, and it takes societies coming together to ensure that doesn’t happen. It also takes innovation and research, workforce development, private sector partnerships, and also trust, safety, and security.

And they’re the things we’re going to talk about on the panel today. And my colleagues are extremely well-placed to share their thoughts and insights on those topics. So let me introduce the panel. We have Amit Chadha, Managing Director and CEO of L&T Technology Services. Good to see you, Amit.

Amit Chadha

Happy to be here.

Brad Staples

Great to have you with us. Amanda Craig Deckard, Senior Director, Office of Responsible AI at Microsoft. Great to have you with us. Sachin Kakkar from India Site Development, Privacy, Safety and Security at Google. Good to have you with us, Sachin. Thank you for being with us. Lee Tiedrich, Inaugural AI Multidisciplinary Initiative Fellow, University of Maryland, Senior Advisor on the International AI Safety Report. Lee, good to have you with us. And last but by no means least, Julian Waits, Chief Experience Officer with Rapid7. Good to have you with us. Okay. So without further ado, let’s take a look at international and scientific research collaboration. And, Lee, let me come to you.

Let me pose. Here’s a question. Okay. And, Lee, let me pose. And the second International AI Safety Report was released just ahead of this conference, something that you’re very much an author of. Let’s start by hearing from you and then maybe, Sachin, I’ll bring you in. What opportunities do you see, Lee, in open international standards to address the technical challenges that we face while also building trust in AI-based systems and services? Which of these, how would you characterize those challenges and which are most critical in a developing country context?

Lee Tiedrich

Yeah, thanks for the question, Brad, and there’s a lot here. So the international AI safety report that I worked on with a panel of about 100 experts was just released. And one of the key takeaways from the report is that while we have made a lot of progress over the past year in evaluations and developing evidence, there’s still a long way to go. There’s a gap. And I think, you know, internationally, international standards organizations and similar efforts are a good way to work together to try to fill some of the gaps. ISO has already released one standard, 42001, which is a good start, but we need to accelerate this, and we need to also recognize the fact that standards and evaluation metrics, you know, there’s a tension.

On the one hand, we want them to be able to apply across borders because we want to enable companies to have responsible technology flow across borders. But on the other hand, it’s really important because we all differ in terms of language and culture that we need to be able to customize them for different cultures, norms, languages. And I think, you know, the standards organizations will continue to play an important role. I spent a year working at NIST, the U.S. National Institute of Standards and Technology. One of the NIST projects is working on what we call the zero draft of trying to create a draft that we could then feed into the ISO process, and NIST is trying to collect stakeholder input into that draft.

And I think, you know, more globally, you know, efforts like the Hiroshima AI process, there are sort of all these pre-standards efforts where different stakeholders across different regions can work together. And I think that the AISIs, the AI safety institutes across different countries, and how they can coordinate. So I think there’s a lot of work to be done, but I think there’s a lot of avenues where we can collaborate together and make sure that we’re addressing the needs of everybody around the globe. Thank you.

Sachin Kakkar

Yeah, thanks, Steve. Very well covered. If I can add just a few more points. I think one of the challenges we see is copy-pasting the regulations or standards from, you know, international markets to local markets may not always work. So localizing them, understanding the needs and constraints of the local area. Google launched IndicGenBench. It’s a test bench for fine-tuning and assessing the LLM models for local languages, supporting 29 Indian languages, 12 scripts, and 4 language families. So that shows an example of how we need to localize things. The second point is a one-time audit or certification may not work as AI evolves. We need continuous scanning and auditing to make sure we avoid any temporal drift in these standards and the applications.

Brad Staples

So Sachin, let’s build on that. How do governments and developers collaborate in a way that we get the outcome that everyone desires, which is not to see the developed markets race ahead of developing countries? What does that collaboration need to look like?

Sachin Kakkar

Yeah, that’s an interesting question. I think at the highest level, the way we think to bridge the gap, the AI divide, is to move away from a traditional transfer approach to more co-creation, where developers and government come together, and the underlying goal is that standards and regulations are seen as enablers and equalizers, not as barriers or compliance hurdles. So there are three specific dimensions in which we believe developers and government can collaborate, and Google specifically focuses on: number one is open-source frameworks and interoperability and standards; second, capacity building; and third, workforce upskilling and research. I’ll quickly unpack each one of them. So starting with open-source frameworks: AI is not new to Google. We have been working on AI for past decades, remember AlphaFold, and we were the first one to share the transformer paper on which all the LLMs are built. When we were building AI, we were also focusing on best AI practices and safety practices on AI. And we have open-sourced all the best practices to keep AI safe.

SAIF, the Secure AI Framework, is something we have shared outside. And it is important to understand supply chain risk. And India’s digital transformation is characterized by DPI, the digital public infrastructure on which Aadhaar and UPI are built. So they can actually leverage some of this secure AI framework to make sure the malware attacks and the vulnerabilities in open-source components are taken care of. Now, standards is one thing. The collaboration goes beyond to adoption of them. And Google has co-built CoSAI, the Coalition for Secure AI, with various industry partners. And this is what we are expanding in APAC, including India. Now, we are also committed to capacity building with the government, which means we need to provide tools and infrastructure, not just standards.

So we are proactively sharing the threat intelligence. We are building tools like SynthID and sharing them with the community abroad. SynthID is a watermark technique which goes into the text, image, video, audio, and it can tell you whether it is AI-generated content. So some of these tools are also helping us to make sure our commitment towards standards goes into actual adoption. And finally, upskilling the workforce, digital literacy, working with government to make sure the vulnerable sections of society, like the elderly and teenagers, are aware of some of these challenges. And giving grants to institutes like the IITs to push the frontier of research, like PQC, post-quantum cryptography, are other areas of collaboration between AI developers and the government and academia.

Brad Staples

Let me just ask you both a question. Is there a trade-off between setting global standards and regulation and ensuring the right environment for innovation and collaboration?

Sachin Kakkar

Oh, yeah, that’s right. And that’s where you can start with the global regulations but then adapt them to the local constraints. Like we have bandwidth constraints in India. We have linguistic diversity. And therefore, the global standard should not become a hurdle for the young startups in India. Rather, they become co-creators in enabling the innovation that can happen and then evolve from there. So it’s a creative tension, and I think the best way is to be adaptive in this situation and eventually evolve to the international standard.

Brad Staples

How do you see this interplay, Lee?

Lee Tiedrich

Yeah, I think, I mean, kind of in my work, you know, both in government, academia, and I spent 30 years working with the private sector, I think sort of figuring out the standards that are in place and the evaluation techniques is really key. You know, how are we going to evaluate these systems so they can meet a certain threshold of safety. And then I think the question kind of comes in, you know, afterwards, once we know what it is, you know, should there be regulation or not? You know, I worry a lot of times that when we go too quickly toward the regulation, you know, the best of intentions may be there, but, you know, the technology is moving so quickly, regulators don’t necessarily know how to style the regulations to achieve the goal.

And I think sort of working from the bottom up with the science, developing the evaluation technique, taking into account that we do need to socialize, you know, customize for local markets is really important. And then we can get to the question of, well, should there be a regulation or not? And that’s where, you know, different countries may have different answers, but at least we’re working from a common technical framework and evaluation framework to assess systems. Thank you.

Brad Staples

Thank you both. Let’s make a shift to… the conversation towards more public-private… collaboration, which I think we know is at the heart of driving the success that everybody’s looking for. And Sachin was talking a little bit about capacity building. Maybe we focus on those two elements. And Amanda, I’ll come to you and then to Amit. So there’s a persistent skills gap in AI. It’s very apparent and a lot’s being done to try and bridge that here in this country. How are your, has your organization, and I’ll come to you Amit with the same question, how are your organizations grappling with that challenge and also collaborating with government to help to narrow that skills gap?

Amanda Craig Deckard

Thank you. Yes, the skills gap is really important. We see it as part of the sort of foundational infrastructure for what we need to work on together as Microsoft with other industry partners, government partners, other local partners. It’s going to take a whole community really working together to do this at scale. And just to take a step back for a moment briefly before I talk more specifically about skills, you know, we kind of see this as part of a holistic effort where you kind of need to support all of the enabling infrastructure for AI deployment, kind of from the infrastructure layer all the way through sort of realizing value in local use cases. So we actually published on Wednesday a blog from our president, Brad Smith, and our chief responsible AI officer, Natasha Crampton, where we talk about sort of five areas where we’re really focused on investing to kind of close the gaps in AI diffusion between the global north and global south.

So we talk about, like, hard infrastructure investment, right, in terms of connectivity; AI compute capacity scaling is the second part of that plan. And the third part is really thinking about multilingual, multicultural AI capability. And the fourth is really working with local partners on local AI deployment and really what we can learn and what’s going to serve local communities, also what we can learn through that process around how we need to adapt the technology so it’s ready for those local use cases. And then really measuring diffusion so that we actually understand how things are going.

And then really measuring diffusion so that we actually have really informed interventions. So that’s the kind of holistic approach that we’re thinking about for public-private partnership. And looking at skilling more specifically, we actually have a new sort of initiative that we launched last July at Microsoft called Microsoft Elevate, which is really bringing together a number of ways that we engage with a community that is going to also be part of skilling everyone at scale, so sort of nonprofit communities, schools, and actually ensuring that they’re equipped with the technology itself, so with cloud compute access and with access to AI.

And then we are coupling that with investments in skilling. So we have made some big-number commitments around how we are really trying to do this at scale. I would say specifically for India, you know, early last year we made this commitment to skill up 10 million Indians by 2030. This year, we upskilled 5.6 million Indians, and so we actually doubled that commitment to 20 million people by the end of 2030. And one of the ways that we’re doing that is we’re actually, we just announced this week a new Elevate for Educators in India program where we’re partnering with local schools, with vocational institutes, with higher education institutions to sort of teach the teachers, right? So you can actually work at scale, and we’re working with a number of Indian government ministries in this program to figure out, you know, how we can ensure that we have tailored programs for all of those different communities and that we’re thinking holistically about how.

You know, we, across those different sort of educational paths, are really meeting people where they are and equipping them to kind of do the next powerful thing with AI.

Brad Staples

Thanks, Amanda. And as a business, L&T Tech Services, I mean, part of L&T originating here in India, but now very much involved in global markets. How are you tackling this in terms of addressing the skills gap?

Amit Chadha

Sure. So thank you. So before I go to skill gap, I do want to make a point on the regulation part. I do believe that too much of regulation can stifle innovation as well. So we’ve got to be careful on how much we do and where do we take it. And then the second part, of course, is to do regulation of traffic control in Delhi for our next event that we have. I think all of us will agree. Let’s get down to skills in a second now. I had to say that because it was a mess in the last two days. I’ve got pictures of myself in an auto rickshaw as well. So if we get down to skill gap, I want to address this three ways.

So I am responsible. I run a company which is potentially India’s first engineering intelligence company, with about 25,000 employees. I’ve been CEO for five years. When I took over, we used to be about 15,000 employees. We’re about 14 now, we’re about 25,000 employees. So, we look at skill gap and I look at skill levels. Three things you have to think about. Whatever work we’re doing in engineering consulting today, I want to say 40 to 50% of that is new and built in the last five years, did not exist. I also want to say that whatever we are doing today, 60% will be gone in about three to five years’ time. That’s the rate and pace of change. So, while my colleague from Microsoft spoke about skilling school STEM as well as colleges, we’re doing two different things to stay current with the changing dynamics, or three things.

One, we are actually reaching out to colleges in the last year of their curriculum, and we are making sure that the curriculum in India is contextual to what the industry needs. So we are sending our employees to teach. We are using CSR hours. We are doing all of that to build that up. We are actually participating with NASSCOM as well to be able to do that in the skill development. The second thing we are doing is upskilling our own employees. Now, again, in a developed economy, it’s very simple that you hear these layoffs that happen all the time and they are not because people don’t have work but because the skill is redundant.

So let’s go ahead and get a new set of skills. In an Indian context, my colleague here spoke about that very nicely. You can’t cut and paste. You fire a thousand people, you will actually end up spending half your working hours plus more with the labor commissioner here locally. You can’t do that. So you have to be able to skill people up while they are in the workforce. Now, one thing is developing curriculum, developing modules for them to go through, but the second part is actually making them do it. So, normally in a consulting company you would send people to get coached and do upskilling when they are not billable; we are actually doing it while they are billable, because when they become non-billable, that’s not when you want it, you want it before that, right? So that’s a major shift in how we’ve been operating. The third thing that we are tracking as an engineering and a technology company is how much personal time the employee is spending on technology development efforts beyond billing hours to the client. So you come in and spend 40 hours, right, and that’s what you normally work. Now if you spend another three hours to write a technology paper, you file a patent, you actually go speak at a symposium, all that is towards technology effort beyond billable hours.

The percentage of the workforce within the company that did that five years ago was at 19%. Today, that number stands at 52%: 52% of our workforce spends personal time on technology beyond billable hours. And the net result of that has been, we used to file 100 patents per year. We have gone from there, sorry, we used to file 50 patents per year. We have gone to filing 200 patents per year. So the point is, again, summarizing: one, reach out to the local ecosystem and spend the last year with them. That’s the hook in. Second, upskill the workforce within. And third, beyond just money, find a bigger purpose, like technology or betterment of the human race with technology, to motivate your workforce to actually spend time on that.

And I think that’s what we’ve been doing and we think will be helpful. One last thing, and we keep discussing India. But if I look at the US today, and I’ve lived there for 27 years now, we will need schools to start mandating a certain level of STEM education that has to be done. Today, both my boys went to public schools in Virginia. I can tell you that in some schools, it’s broken. And we don’t do that in the US. We don’t do it in parts of Europe. We will continue to look at different countries for skills. And that is not where we want to be in 20 years’ time. I’m sorry. Jump in. Jump in, Julian.

Julian Waits

I was going to agree with what you just said. Because Rapid7, like your company, of course, we’re a software company. We’ve basically mandated the use of agentic technologies by our employees, especially the ones in developing countries or countries that aren’t as developed as the United States. What I would tell you also on the education system, which is unique to the US, which is what makes India special, and that’s why we’re in such a wonderful place, it’s because of the technology. We’re so far behind, we’re forced to use labor in other societies that appreciate the use of STEM technology and where it’s embedded in the way that they learn. We have no choice. If we didn’t have foreign workers in the U.S., we would fall behind the rest of the world.

You don’t hear that too often.

Brad Staples

Let me just probe a little bit on this. How much is carrot and how much is stick when you’re looking to upskill the workforce and bring them into more of an AI mindset? You’ve got a very bold program at Microsoft reaching across colleges, but you’re also active, I know, in creating the capabilities within the workplace. How much of this, to both of you, is carrot or stick? I was at a dinner in D.C. a few weeks ago where the head of a large media group had told his team they had to be two times more productive by the end of ’25 using AI to stay in their roles and 10 times more productive by the end of ’26.

That was an expectation. But it was set very much as a minimum standard and goal. They were putting training programs in place, but there was a clear metric to achieve. What’s your perspective based on how you’ve seen this work?

Amanda Craig Deckard

You mean internally?

Brad Staples

Either within Microsoft or within the companies that you collaborate with in training.

Amanda Craig Deckard

In our experience, I think we are much more leaning in the direction of using carrots. So we have a lot of programs internally that are a mix, I think, in terms of tactics, which is important. Both kind of like, here’s a day-long training or a week-long training program, right? Which I think is really valuable. It gives you an opportunity to really dig in. But also really difficult. Difficult to find the time for. And so we actually have weekly tips for how colleagues that are in similar roles are using Copilot, for example, internally to have more efficiency in their work. And I feel like that’s the kind of thing where, you know, is that skilling, is that training?

I don’t know, but it certainly is helpful because that’s the kind of thing that in my day-to-day job I can look to and integrate much more easily. And the other thing that we’ve started doing is hackathon-type exercises internally that are not just oriented towards engineering communities, but actually our corporate external legal affairs group, which is not just lawyers, but is a lot of lawyers, for example, having a hackathon that’s really meeting that community where we are and building a Copilot to serve our kind of day-to-day work. And so a lot of, like, different kind of carrot approaches is what we’re doing internally and where we see, I personally can say, like, I feel especially the latter two, it’s just hard to find, like, time to do a deep training program.

But if you integrate sort of into your day-to-day work, make it easy with these kind of carrots, you can really start seeing the impact, and that motivates you to use the technology more.

Amit Chadha

So, stick is out of the window, you can’t do that anymore, right? But we use carrots and budgets. Okay? When I say carrots, it’s basically appealing to the individual now and their glorification. So if it’s a patent, you’re filing it. The company doesn’t own it, you own it, right? If there’s a paper, you’re writing it. If you’re speaking at a symposium, you’re doing it, right? And that allows them to think. And then we’ve actually spent a lot of time through HR to try and explain that with the pace of change of technology, if you don’t upskill, you don’t change, you actually are facing extinction in about five, ten years’ time. Gone are the days where you can be there on the same technology for 30 years, will not work, right?

So we home in on the message, provide that, and then provide the push. We glorify people that file patents, we glorify them within the company, so that’s one. Second, when I come to budgets, we actually leverage budgets with our segment heads. So they’re given budgets, they’re given training budgets, we also provide them headcount budgets and say you can’t exceed them. So we’ve been able to actually improve productivity with AI. We used to run at a utilization, the productivity metric all service companies track, of about 73% five years ago. We’re already at 83%, and I think I can push this up another 2% in terms of productivity levels in the company, again leveraging AI, and that’s the budget approach that we use, but with the seniors.

So it’s a mix of both if I may to be able to manage this and motivate this. But it’s an ongoing exercise.

Brad Staples

It’s fascinating, maybe we’ll come back to it as we draw to a close. Let me shift gears a little bit and talk a bit more about security and trust and come to you, Julian, if I can. So I think we’ve recognized, and we’ve heard it in different conversations this week, that there’s a trust deficit around the use of AI, certainly in a public context. There is some fear, suspicion, and anxiety in a global context. I’m not talking just about India. YouGov carried out a survey in the U.S. last month, and in the context of fintech, they found that less than 20% of Americans trust AI in financial services. And they’re also sort of struggling, I think, with some of the cybersecurity questions and issues, which you’re very well placed to address.

So if public trust in AI remains fragile and AI-specific cyber risks are growing, which they clearly are, what are the immediate steps that industry should prioritize to counter those threats? And things like prompt injection attacks. How can these solutions be scaled, particularly for developing countries? Thank you.

Amit Chadha

of seven. So other than the incentives that we’re giving you to learn these technologies, which of course is to the company’s benefit, it’s to your benefit because these skills that you’re learning and that you’re going to be using will translate to the next thing that you do, and it makes you that much better. If we do enough of that, not only are we helping the employees, but we’re helping the societies and the ecosystem that they live in, including in India. I wanted to add one additional area that we’re really focused on to address the kind of AI cyber threats, particularly relevant in India and other areas in the global south. I mentioned that one of the areas that we’re focused on is multilingual and multicultural AI capabilities, and one of the most important foundational reasons for doing that, of course, is that you have an AI that works well.

and in different languages and cultural contexts, is reliable, performs well. Another reason is also that AI that is not robust in its multilingual and multicultural capabilities does have additional security weaknesses. You mentioned prompt injection attacks, and you know, one way in which you can think about a prompt injection attack is basically if you have an AI system and you have the sort of safety system around that, someone who is misusing the technology can sort of try to break that safety system or get around it, and one of the ways that attackers do that is by using languages that are not well supported in that model or system, right? So if a model or system is primarily prepared to perform well in high resource languages, but not in low resource languages.

Tamil, for example, or some other sort of language that is not really built in to how the model performs, if companies aren’t attuned to that, then an attacker could use that language and jailbreak the system, basically get around the safety system. And so it’s just another reason why it’s really important from our perspective, and we’re partnering with a lot of others in industry and government, so this comes back to a public-private partnership opportunity, to really work on multilingual and multicultural AI capabilities. One of the things that we announced this week is actually there’s a benchmark from an organization called ML Commons, which is a jailbreak benchmark. It’s actually measuring how robust systems are against that kind of prompt injection attack technique.

And we worked with a number of others to really build out the current version of that, which is really English-specific, to include multiple Indic languages and Asian languages in terms of its capability. It’s not going to solve the problem. It’s one step of what we see in the right direction. But I just want to draw that sort of really specific area of focus in India and other areas for thinking about the kind of AI and cyber threats.

Brad Staples

That’s wonderful. Thank you.

Sachin Kakkar

Can I add a point?

Brad Staples

Sure.

Sachin Kakkar

So this is about the rise in prominence of AI agents. And we have been constantly investing in self-defending systems, just like a human immune system. As agents grow and they can – the scale and speed at which they can attack infrastructure, the hospitals, the energy grids, we need agents on the other side. And this becomes an AI versus AI story, where we are smartly inventing agents. And we believe, for the first time with AI, we can reverse the defender’s dilemma. So the dilemma, many of you might already know: attackers have to find just one open wallet in this crowd, but defenders have to protect all the wallets all the time. And for the first time, AI will give an aggregate advantage to defenders, because the majority of defenders’ time, 80%, goes in drudgery and skunk work.

And AI can actually automate and uplift that work. So the entire stack of defenders can improve and uplift with AI. And we believe that we’ll be able to build a self-defending adaptive system which can protect us from various vulnerabilities.

Brad Staples

Wonderful. Thank you. Well, we’re drawing towards the close of the session, and it’s been a very rich conversation. I just wanted to take a step back and ask you all, you’ve been – most of you have been here all week. And you’ve heard a whole host of different interventions and some very significant investments and initiatives. What are your conclusions? What’s changed? changed in your perspective when you look at AI for the future from your own vantage point? What’s this event given you a new perspective on or crystallized in your minds? Maybe, let me go back to Lee. Do you want to share your thoughts?

Lee Tiedrich

It’s reinforced for me, you know, something I’ve seen through a lot of my international work with OECD, with global partnership on AI, just the need for the global cooperation, and not just at the government level, but among all different types of stakeholders, you know, within academia, within industry, within civil society, and working together. And I think, you know, we can sort of pause at this moment and say, you know, if you look at the safety report, we’ve made a lot of progress over the last few years, but we need to continue to work together and not just focus on the harms and the risks that AI can have, but think about the benefits. You know, if we are able to leverage AI, we might be able to, you know, help achieve some of the UN Sustainable Development Goals.

I think one other thing I want to just kind of enter into the mix, you know, the customization of AI for different regions also depends upon data. And a lot of my work has focused on, you know, how do we create voluntary foundations so we can exchange data more easily? Like right now, we don’t have data standardization. So if I want to exchange my data with any of you, my data may be in a different format. As a former lawyer, a lot of my work is also focused on the fact that we don’t even have standard agreements. So if we want to exchange data, how can we easily transact and not have all that friction and transaction costs?

You know, we don’t have the Creative Commons licenses right now for data. And if we’re ever going to get to that localization and that ideal point where we’re customizing for different cultures, we’re going to have to have a lot of different tools. We’re going to have to figure out ways where we can voluntarily and responsibly share data. And this has been part of the discussion, but hearing the conversations over the past week kind of underscores the need to continue to advance that work while we work on some of the other topics that we’ve been discussing.

Brad Staples

Great. Julian?

Julian Waits

More than anything, what this week has taught me is I’m old and this industry is moving.

Brad Staples

Okay, so stop saying you’re old. You don’t look old. You look great.

Julian Waits

This industry is moving so quickly. Again, skills that are needed and considered to be important today will no longer be necessary in five years. And if the workforce and if the users of the technologies aren’t evolving with it, we all fall behind. So while there is a great advantage and opportunity in using AI, the danger is it can also make us obsolete at the same time. And we need to be very careful of that and how we use it, and then how we help, hopefully, to promote this throughout the world in a way that makes it equitable for everyone.

Brad Staples

Great. Thank you. Amit, Sachin, any reflections?

Sachin Kakkar

Yeah, I think one of my big takeaways from this week was some parts of the world are focused on AI as an influence. Some parts of the world are focused on governance of AI. I think India is focused on the impact of AI at the grassroots level. Thinking about how AI will impact a farmer or a small school or an NGO or a small hospital has been the focus. And it resonates with me because the mission of my team is to keep everyone safe at scale. And when I say everyone, it’s not just about Google or Alphabet or not just about our billions of users, but the entire society, everyone at scale, and how to make sure we become the architect and not just the consumer of AI and make sure it reaches the grassroots level is one area to think about.

Amit Chadha

I agree with that. So, of course, outside of the traffic bit, right? What you learn, if you ask me, in the whole week that I’ve seen is that if I, and I’ve been in this business for, I don’t want to date myself, so say a couple of decades and we leave it there. But people used to say India is a back office. That’s how it started in 90s. People said India is a back office. Y2K happened and they said the IT industry will be over, right? Because Y2K, that’s all there is. Today, the IT industry, engineering industry together is $600 billion. We move forward. People said, are you going to take data? And are you going to?

Is data going to get leaked? And then COVID came and India proved yet again there was not a single data leakage that happened from India Inc anywhere. There are some draconian rules. We don’t allow our employees to use USBs, blah, blah, blue, blue. Net result: zero data leakage, absolute privacy, and the government comes down very heavily if they get something like this. So they’ve been able to create a safe environment. Move forward. People used to say, is India a market? This last week, and forget technology companies, if you just walk the floors, you see people like Schneider, you see people like Vertiv, you see others, they are developing products for India. In India, you’re developing products for the world from India, and it’s no longer just a cost base.

So if I was to say there’s one thing that I’ve learned in the last week, it is that India is no longer the back office for AI. It is actually the front office for AI for the world, and that’s the net summary that I would draw from the entire week that I’ve been here.

Brad Staples

Thank you, that’s very funny Bill

Amanda Craig Deckard

And I, you know, zooming out to the sort of highest level, one of the things that I really genuinely felt this week that has been very exciting to me is that there is a lot of energy around how to deploy this technology, how to have impact. It’s been actually really fun to be in a lot of sessions with students and entrepreneurs where you can really feel the energy, and I feel that the conversation around governance has come along and felt integrated in a really genuine way as well. If we look at the kind of summit series that kicked off a few years ago at Bletchley, I think it’s fair to say early on the emphasis of the conversation felt very safety and security heavy. Last year in France, there was a big pivot to trying to think about the opportunity.

And what I see in India this week is a genuine integration of those conversations and a deepening of those conversations. So really, what do we mean when we say impact? What really do we want to see in deploying this technology? And then sort of not taking for granted that, of course, governance actually has to come along with that. You have to really do the deep, hard work around things like multilingual AI. And there’s a real need for a partnership in moving those things forward. And there’s a real need to think about governance steps so that you can have trust in this technology. India actually just passing a law last week thinking about how to mark AI -generated content.

There’s a real sort of recognition that some of those steps are going to be important. And you don’t want to stop or have those steps sort of prevent deployment of the technology or realization of the benefits. But, like, you know, we have to do the deep work together to sort of move forward on impact and governance together.

Brad Staples

Thank you. Thanks, Amanda. We’ve got a few minutes. If anyone would like to chip in. Great. Hands are going up. The room’s filled, by the way, while we’ve been going along, and it’s been a great conversation. Let’s hand one or two mics out to colleagues around the room, if we can, to the lady here on the front.

Audience

Hello? Hello? Okay. Right. Thanks, and I appreciate the comments and the traffic. I think we’ve all got a traffic story. Now, I hear a lot of talk about upskilling, co-creation, which are all very important things. I agree. But what I’m also hearing a lot from, and I’m sure you all are too, is the issue of the speed of this technology, which could potentially outpace some of this. So my question to you is, you know, what do we, and this goes to anyone who might want to answer or has some real thoughts on it, what do you think might be the gaps that we would need to address in a transition process between upskilling and real economic displacement?

Brad Staples

Who can grab that? Yeah, you’ve got the mic, Julian, you’re going to give it a go.

Julian Waits

It’s a real problem, right, meaning technology is moving so quickly. As I said, years ago I would tell young people in technology, learn to be the best programmer you can. Now with agentic AI, especially with the usage of MCP, where you can have multiple agents talking to each other, sharing information, it’s now learning to be the best user and prompter of the technology, understanding the outcomes. But there’s going to be some displacement. You know, right now I would tell you AI, especially in the security context, can probably eliminate 60% of the things that humans have to look at today, but there’s still the 40% where a human has to be involved to make a determination around risk to an entity, whether it’s a government, whether it’s defense, whether it’s a business.

And so it’s really helping them evolve to this next level of user, this next level of programmer, if you want to call it that. And there probably will be some displacement that we just can’t get around.

Brad Staples

Gentleman in the front.

Audience

I actually have an extension of the same concern that the lady shared. The speed is one aspect, but also I think there’s a whole information arbitrage between the people who are creating and pioneering in the AI space versus the others to whom the information is reaching. And the impact of that on the power polarization and even the democracies. You know, that possibility I sense. And a lot of the conversation that I hear today is assuming that, you know, AI is moving linearly, but I see it moving exponentially. I agree. With a polarizing effect. Yes. Yes. Both. Both the polarizing effect and the effect, you know, like I think the 40% that Serge just spoke about. For me, that 40% is not really 40%.

It’s just that we want to be very, very careful. But if we were not to care so much about how accurate we are and how strong our data standards are, it could be 100 %. You know, it’s very large. I think the displacement can happen very fast. So I’m really concerned about how things are moving. I’m not sure if my concern is being shared by people on the panel.

Brad Staples

Anyone want to respond?

Lee Tiedrich

I mean, I think we need to focus on AI literacy because, you know, again, the technology is moving so fast. How do we make sure people in their everyday lives and people in the workforce have access to education so they can continue to upskill? And I also think, being in academia after having been in the private sector for, we won’t go into how many decades, it comes down to teaching students how to think. When you’re looking at your career trajectory, it’s not just coming out of college with a set of skills; it’s teaching them how to think and how to problem-solve. And I think the public-private partnerships that Amanda mentioned, with academia, are really important, because a lot of times the tenured faculty don’t know how to teach that to students, and it takes bringing people in to tell them: this is how you adapt, this is what you’re going to expect in your career. And I say this not only from the perspective of being in academia but from having two children of my own in their 20s who are just starting their careers. Sort of expect the unexpected, but learn how to be on your toes. I think a lot of it is just having good analytic skills and good communication skills, and if you have those core skills you’re going to be able to adapt, and it will carry forward in the future.

Brad Staples

Great. I think we’ve got time for one more question. Okay, gentle. Oh, the lady who, sorry, the lady who has the mic. She has the mic.

Audience

Thank you so much. My name is Rita Soni. I work with a company that’s operating in small-town India, delivering all these tech services that many of these companies are doing. And my question is actually for Amanda, because I think she was the only one who really brought up the digital divide that continues to exist, both in India and across the globe. I actually didn’t feel like I heard very much about how to actually bridge that. Yesterday I didn’t have one of those special passes to go to the events on the 19th, so instead I visited a local nonprofit called the Digital Empowerment Foundation, which has been around for more than 20 years, doing incredible work in rural India.

And they’re simply talking about last-mile Internet connectivity, let alone the enablement or ease in the critical thinking that Lee just mentioned. So just a few more words on how it is that we can bridge this digital divide and make it more equitable, because I think the more folks are excluded, the more different kinds of problems we’re going to have.

Amanda Craig Deckard

Yeah, and I think you may have come in after we talked briefly about some of the work that we’re doing to address the digital divide. For a lot more words, I would point you to a blog we published on Wednesday, where we talked about investments in five areas that we’re thinking about to close the gaps that we see. We actually point to the work that we’ve done using our own telemetry to track these gaps and their trajectory, and we really lifted up our own concerns about that trajectory. Among the areas of investment, infrastructure is really foundational. And we do talk in the blog about infrastructure in terms of AI compute capacity, but also about the fundamentals beyond that, connectivity and energy access, as really important as well.

And then we talked about scaling multilingual and multicultural AI capabilities, and about really working with local communities on local use cases, the kind of deep work that we can do to help bring the technology to people. Even in agriculture, for example, we at Microsoft Research have done a lot of projects in close collaboration with local communities, to ask how this could serve you, and then also to learn how the technology needs to evolve in order to do so better. And then, basically, taking a step back and continuing to study diffusion so we understand: are our interventions working?

Are they not? If so, what can we learn and how can we improve how we’re intervening?

Brad Staples

Okay, so time’s up, everyone. Thank you so much for your contributions and for joining us at different points during the conversation. Thanks to the panelists for a really rich and diverse conversation. It’s been a real pleasure to have you with us. And I think we end with a sense of optimism that no matter what the challenges of the digital divide and those other elements, there’s probably an AI solution to the AI challenges that we’re creating. Thanks. Thank you. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (19)
Factual Notes
Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Roughly 70 % of AI’s economic value could become concentrated in Western corporations and China if current trends continue.”

The knowledge base notes that some estimates suggest 70 % of AI’s economic value risks being concentrated in Western economies and China under current trends [S1].

Additional Context (medium)

“Democratising AI will require intentional design, international collaboration, and coordinated action across research, workforce development, private‑sector partnerships, and robust safety and security measures.”

Additional sources highlight that global AI governance must involve inclusive participation and address concentration of power in a few companies and countries, underscoring the need for coordinated, democratic action [S94].

Confirmed (high)

“Lee Tiedrich presented the second International AI Safety Report, noting progress in evaluation techniques but a persistent gap.”

Lee Tiedrich is cited as emphasizing the need for global collaboration to develop common evaluation standards, indicating awareness of both progress and remaining gaps in AI safety assessment [S10].

Confirmed (medium)

“ISO 42001 is an early AI safety standard and the NIST “zero draft” will feed into future ISO work.”

The knowledge base reports ongoing work to incorporate AI safety assessments into the ISO process, with expectations that drafts will be accepted within ISO standards [S99].

Confirmed (medium)

“Regional pre‑standard initiatives such as the Hiroshima AI process foster cross‑regional stakeholder cooperation.”

The Hiroshima process is identified as an instrument to promote collaboration among regional stakeholders on AI governance [S101].

Confirmed (high)

“Regulators often lag behind the rapid pace of AI development, so regulation should follow robust, evidence‑based evaluation frameworks.”

The rapid development of AI is described as presenting unprecedented challenges for slower-moving regulatory frameworks, confirming the lag noted in the claim [S10].

Additional Context (medium)

“International agreements and verification technologies will be needed for AI safety at the global level.”

The knowledge base stresses that future AI governance will require international agreements and technical means for verification, adding nuance to the discussion of regulation and standards [S96].

External Sources (102)
S1
Welfare for All Ensuring Equitable AI in the Worlds Democracies — -Amit Chadha- Managing Director and CEO of L&T Technology Services
S2
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And they’re the things we’re going to talk about on the panel today. And my colleagues are extremely well -placed. to sh…
S3
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Great to have you with us. Amanda Craig -Dekard, Senior Director, Office of Responsible AI at Microsoft. Great to have y…
S4
Welfare for All Ensuring Equitable AI in the Worlds Democracies — -Sachin Kakkar- India Site Development, Privacy, Safety and Security at Google
S5
Welfare for All Ensuring Equitable AI in the Worlds Democracies — – Amanda Craig Deckard- Amit Chadha – Sachin Kakkar- Amanda Craig Deckard- Amit Chadha – Sachin Kakkar- Julian Waits- …
S6
S7
Announcement of New Delhi Frontier AI Commitments — -Brad: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S8
Keynote-Brad Smith — -Brad Smith: Role/Title: Vice Chair and President of Microsoft; Areas of expertise: Technology policy, privacy, cybersec…
S9
Welfare for All Ensuring Equitable AI in the Worlds Democracies — – Lee Tiedrich- Amanda Craig Deckard – Lee Tiedrich- Sachin Kakkar
S10
Agents of Change AI for Government Services & Climate Resilience — – Lee Tiedrich- Srinivas Tallapragada Tiedrich advocates for developing comprehensive global standards through internat…
S11
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Great to have you with us. Amanda Craig -Dekard, Senior Director, Office of Responsible AI at Microsoft. Great to have y…
S12
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Great to have you with us. Amanda Craig -Dekard, Senior Director, Office of Responsible AI at Microsoft. Great to have y…
S13
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S14
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S15
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S16
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Well, I would like to stress that flexibility is key, because we don’t know what applications will be there…
S17
Responsible AI for Children Safe Playful and Empowering Learning — For a child living in urban Delhi, AI has found its way into their education either through the home or the school. But …
S18
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The panel reached consensus on the need for fundamental educational reform to prepare students for an AI-integrated futu…
S19
Education meets AI — They stressed that understanding where students currently stand in terms of education and adapting teaching methods acco…
S20
Opening of the session — El Salvador: Thank you, Chair. El Salvador, thank you for convening this session. For my country, it is essential to …
S21
HIGH LEVEL LEADERS SESSION IV — Cooperation among stakeholders, including the government, industry, academia, and civil society, is seen as crucial to a…
S22
WS #199 Ensuring the online coexistence of human rights&child safety — The conversation also touched on the global nature of the problem, the importance of considering victims’ perspectives, …
S23
What is it about AI that we need to regulate? — Multiple sessions highlighted the dangers of simply copying governance models without adaptation. InDay 0 Event #257, Lu…
S24
The Tokenization Economy — However, it was noted that the principle of ‘same activity, same risk, same regulation’ presents challenges when it come…
S25
Digital Public Goods and the Challenges with Discoverability | IGF 2023 — Nonetheless, the path to widespread adoption of open-source software necessitates capacity development across multiple d…
S26
Open Forum #66 the Ecosystem for Digital Cooperation in Development — Tale Jordbakke: Sure. I do think that we as a government agency can play a role. Firstly, by being clear on that the pol…
S27
Laying the foundations for AI governance — Dawn Song: Yeah, that’s a great question. I think in AI safety and security, we are facing huge challenges. The field is…
S28
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All right. Just speaking for myself, I can’t wait to use agents. I feel like it’s a lot of developer communities that ha…
S29
Keynote by Uday Shankar Vice Chairman_JioStar India — This comment is transformative because it reframes India’s role from service provider to global leader. The distinction …
S30
From India to the Global South_ Advancing Social Impact with AI — AI is the new electricity. The question is who has the switch? And today that’s what we will be discussing. You know, if…
S31
Opening — There is a need to strike the right balance between fostering innovation and implementing regulation in the field of AI …
S32
E-commerce and Sustainability: an overlooked nexus (Brazilian Center for International Relation – CEBRI) — They caution against excessive regulation, as it may stifle innovation and economic progress, particularly in developing…
S33
Microsoft details threat from new AI jailbreaking method — Microsoft haswarnedabout a new jailbreaking technique called Skeleton Key, which can prompt AI models to disclose harmfu…
S34
AI Meets Cybersecurity Trust Governance & Global Security — Udbhav highlights that large language models are inherently probabilistic, which makes them vulnerable to prompt‑injecti…
S35
How to make AI governance fit for purpose? — International Cooperation and Standards Role of international cooperation and standards Singapore advocates against fr…
S36
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — – **Balancing Global Cooperation with Regional Diversity**: Extensive discussion on how to achieve policy interoperabili…
S37
Smart Regulation Rightsizing Governance for the AI Revolution — The speakers demonstrated strong consensus around pragmatic, collaborative approaches to AI governance that balance glob…
S38
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — – Kristalina Georgieva- Brad Smith 38,000 GPUs available through public-private partnership as common compute facility….
S39
What policy levers can bridge the AI divide? — – Tatenda Annastacia Mavetera- Hubert Vargas Picado- Emmy Lou Versoza Delfin Development | Sociocultural Kone argues t…
S40
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — Microsoft Elevate represents the next chapter of corporate philanthropy, combining technology support, donations, and sa…
S41
WS #162 Overregulation: Balance Policy and Innovation in Technology — Amattey uses the COVID-19 pandemic as an example of how innovation can thrive with less regulation in times of crisis. H…
S42
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis argues for equalizing trust and safety investment. Market concentration is also opposed, with a call for a …
S43
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Moreover, Aryal urges for a thorough exploration of the potential risks that come with AI in the context of cybersecurit…
S44
Secure Finance Risk-Based AI Policy for the Banking Sector — “And it should be seen as a, it should be seen as an instrument.”[6]. “That can be addressed only through the governance…
S45
Ten cybersecurity predictions for 2026 from experts: How AI will reshape cyber risks — Evidence from threat intelligence reporting and incident analysis in 2025 suggests that AI will move from experimental u…
S46
Advancing Scientific AI with Safety Ethics and Responsibility — -Balancing Open Science with Security: Panelists explored the challenge of preserving open science benefits while preven…
S47
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S48
WS #279 AI: Guardian for Critical Infrastructure in Developing World — AI technologies can facilitate multilingual support in security applications. This capability allows for broader access …
S49
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S50
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S51
Ateliers : rapports restitution et séance de clôture — Joseph Nkalwo Ngoula Merci. C’est toujours difficile de restituer la parole d’experts de haut vol. sans courir le risque…
S52
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S53
Hello from the CyberVerse: Maximizing the Benefits of Future Technologies — The timing of introducing frameworks, standards, and regulations is also deemed critical. If introduced too soon, regula…
S54
WS #162 Overregulation: Balance Policy and Innovation in Technology — It prompted discussion of specific examples where regulation enabled or catalyzed innovation, adding nuance to the debat…
S55
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — This perspective reframes regulation as potentially enabling innovation by providing predictability, building trust, and…
S56
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S57
Australia weighs risks and rewards of rapid AI adoption — AI is reshaping Australia’s labour market at a pace that has reignited anxiety aboutjob security and skills. Experts say…
S58
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — A critical concern addressed is workforce displacement, with approximately 27% of employment in occupations at highest r…
S59
RegHorizon 2nd AI Policy Conference — The wide application of AI technologies has enormous benefits, but it also presents unprecedented challenges in terms of…
S60
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Sachin Kakkar from Google illustrated the localisation challenge through the company’s IndIC GenBench initiative, which …
S61
Discussion Report: Sovereign AI in Defence and National Security — Faisal responds to concerns about competing global AI policies by arguing that the sovereign AI framework is adaptable t…
S62
Empowering Workers in the Age of AI — Development | Economic Speed of technological change vs. training capacity The rapid pace of technological change, par…
S63
Shaping the Future AI Strategies for Jobs and Economic Development — “what they sometimes upskill with may not be enough in two years time so I think this upskilling is going to be really a…
S64
The open-source gambit: How America plans to outpace AI rivals by democratising tech — Labour:AI-related job displacement is considered a significant risk. The plan calls for guidance on using state Rapid Re…
S65
Bottom-up AI and the right to be humanly imperfect | IGF 2023 — A particularly thought-provoking point in the discourse was the expression of concern regarding the rapid displacement o…
S66
AI, automation, and human dignity: Reimagining work beyond the paycheck — Current reskilling initiatives, while well-intentioned, rarely address these structural inequalities. They tend to be de…
S67
The Declaration for the Future of the Internet: Principles to Action — The conversation also inputs a compelling argument on the intricate equilibrium between regulation and innovation. Addre…
S68
Tackling disinformation in electoral context — While some regulation is necessary, over-regulation should be avoided as it could stifle innovation and growth in the di…
S70
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Public-private partnerships play a key role in these collaborations.
S71
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Multi-stakeholder partnerships between policy researchers and private sector are essential for surfacing potential harms…
S72
Open Forum #33 Building an International AI Cooperation Ecosystem — Multi-stakeholder framework bringing technical expertise, public interest, and global perspective is essential Public-p…
S73
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Sachin Kakkar from Google illustrated the localisation challenge through the company’s IndIC GenBench initiative, which …
S74
How to make AI governance fit for purpose? — International Cooperation and Standards Role of international cooperation and standards Singapore advocates against fr…
S75
Parliamentary Roundtable Safeguarding Democracy in the Digital Age Legislative Priorities and Policy Pathways — International Cooperation and Global Standards Need for international cooperation and global standards rather than frag…
S76
Artificial intelligence (AI) – UN Security Council — The discussion on the unintended consequences of rushed AI regulations was a central theme across multiple sessions duri…
S77
Chinese leading AI expert argues for AI governance by the UN — The rapid development of AI technology has outpaced existing regulatory frameworks, creating challenges in areas such as…
S78
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — – Kristalina Georgieva- Brad Smith 38,000 GPUs available through public-private partnership as common compute facility….
S79
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — Development | Economic Microsoft Elevate represents the next chapter of corporate philanthropy, combining technology su…
S80
What policy levers can bridge the AI divide? — – Tatenda Annastacia Mavetera- Hubert Vargas Picado- Emmy Lou Versoza Delfin Development | Sociocultural Kone argues t…
S81
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — Furthermore, it highlights the significance of collaboration between the public and private sectors in future skills tra…
S82
WS #162 Overregulation: Balance Policy and Innovation in Technology — Amattey uses the COVID-19 pandemic as an example of how innovation can thrive with less regulation in times of crisis. H…
S83
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis argues for equalizing trust and safety investment. Market concentration is also opposed, with a call for a …
S84
Conversational AI in low income & resource settings | IGF 2023 — They also highlight the importance of regulations to provide guardrails and prevent potential misuse of AI. However, it …
S85
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Moreover, Aryal urges for a thorough exploration of the potential risks that come with AI in the context of cybersecurit…
S86
Ten cybersecurity predictions for 2026 from experts: How AI will reshape cyber risks — Evidence from threat intelligence reporting and incident analysis in 2025 suggests that AI will move from experimental u…
S87
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Patel outlines a three‑layer security approach: protect agents from malicious inputs, protect the world from rogue agent…
S88
AI Meets Cybersecurity Trust Governance & Global Security — “Let’s figure out what has to be done.”[88]”We need to be able to know a lot more about how we roll it out safely.”[89]”…
S89
Driving Social Good with AI_ Evaluation and Open Source at Scale — Benchmarking, Standardization, and Multilingual/Local Contexts
S90
Ateliers : rapports restitution et séance de clôture — Aurélien Macé Apparemment, j’ai droit à 6,6 minutes, deux fois plus que les autres, ce qu’on m’a dit. Le thème de vendre…
S91
#205 L&A Launch of the Global CyberPeace index — Sociocultural | Human rights | Development Wisniak highlights that AI systems perform poorly for languages and dialects…
S92
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as anAI racewith a single winner. Officials argue A…
S93
The Foundation of AI Democratizing Compute Data Infrastructure — So as we come to the end of our panel, with everything that’s been said, even with all the money on the table, free mone…
S94
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Global governance of AI is a precursor for a democratic development and evolution. And we need to continue to develop an…
S95
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S96
AI Safety at the Global Level Insights from Digital Ministers Of — And I’m… I’m really gratified that the report continues to be anchored in that broader aperture of risk. And eventual…
S97
Who Watches the Watchers Building Trust in AI Governance — These technical limitations highlight why current benchmarks, while useful, remain inadequate for comprehensive safety a…
S98
WS #31 Cybersecurity in AI: balancing innovation and risks — Dr. Alison: Okay. Thank you. So I speak from a personal perspective here. So I don’t know if, realistically, I don’t…
S99
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — I think that would be super useful. We’re leading some work on testing, well, benchmarking and rate teaming, primarily m…
S100
Internet Governance at the Point of No Return — Besides that, standards of different natures can constitute a contribution for companies in the efforts to open up new m…
S101
Multi-stakeholder Discussion on issues about Generative AI — Hiroshima process will be one of the instruments to foster this collaboration
S102
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — Sabhanaz Rashid Diya: Thank you, Alison. And good morning, everyone. I am Sabhanaz Rashid Diya, I’m with the Tech Global…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Lee Tiedrich
4 arguments, 190 words per minute, 1193 words, 374 seconds
Argument 1
Need for global standards with cultural customization (Lee Tiedrich)
EXPLANATION
Lee argues that international AI standards are essential but must be adaptable to different languages, cultures, and norms. She stresses that while standards like ISO 42001 provide a starting point, they need to be accelerated and customized for local contexts.
EVIDENCE
She notes that ISO has released a standard (ISO 42001) and calls for faster development, while also highlighting the tension between cross-border applicability and the need for cultural and linguistic customization [38-41]. She references her experience at NIST working on a zero draft for ISO and mentions initiatives such as the Hiroshima AI process that bring together diverse regional stakeholders [42-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Accelerated international standards that must accommodate cultural and linguistic differences are highlighted in [S1], and Lee’s call for global collaboration on evaluation standards is echoed in [S10].
MAJOR DISCUSSION POINT
Global standards customization
AGREED WITH
Sachin Kakkar, Brad Staples
Argument 2
Evaluation‑first approach before imposing regulation (Lee Tiedrich)
EXPLANATION
Lee contends that technical evaluation frameworks should precede any regulatory action on AI. By establishing robust assessment methods, regulators can make informed decisions without stifling innovation.
EVIDENCE
She describes a 30-year career across government, academia, and the private sector, emphasizing the need to develop evaluation techniques that set safety thresholds before debating regulation [89-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Lee’s emphasis on developing technical evaluation frameworks prior to regulation is supported by the discussion of evaluation-first strategies in [S10] and the explicit mention of his stance in [S1].
MAJOR DISCUSSION POINT
Evaluation before regulation
AGREED WITH
Amit Chadha, Brad Staples
DISAGREED WITH
Amit Chadha
Argument 3
Emphasis on AI literacy and problem‑solving skills in education (Lee Tiedrich)
EXPLANATION
Lee stresses that AI literacy, critical thinking, and problem-solving are core competencies needed for the workforce and everyday citizens. She advocates for public-private partnerships to embed these skills in curricula and lifelong learning.
EVIDENCE
She calls for AI literacy to keep pace with rapid technology change, suggests teaching students how to think and solve problems, and highlights the importance of analytics and communication skills, noting her personal perspective as a parent of two young adults [387-393].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for AI-enabled education and curriculum reform is discussed in [S17], [S18] and [S19], providing context for Lee’s focus on AI literacy and problem-solving competencies.
MAJOR DISCUSSION POINT
AI literacy in education
AGREED WITH
Sachin Kakkar, Amit Chadha, Amanda Craig Deckard, Julian Waits
Argument 4
Global cooperation across academia, industry, and civil society is essential for benefits and risk mitigation (Lee Tiedrich)
EXPLANATION
Lee concludes that achieving AI safety and realizing its benefits requires coordinated action among governments, academia, industry, and civil society worldwide. She links this cooperation to broader goals such as the UN Sustainable Development Goals.
EVIDENCE
She references her work with the OECD and global AI partnerships, noting progress in safety reports but urging continued collaboration and attention to benefits, not just risks, and mentions the need for data standardization and voluntary data-sharing foundations [283-285].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of multi-stakeholder collaboration for AI governance is underscored in [S21] and reinforced by the global collaboration theme in [S10]; Lee’s work with OECD is noted in [S1].
MAJOR DISCUSSION POINT
Global multi‑stakeholder cooperation
Sachin Kakkar
4 arguments, 149 words per minute, 1152 words, 462 seconds
Argument 1
Risk of copying regulations without local adaptation (Sachin Kakkar)
EXPLANATION
Sachin warns that transplanting regulations or standards from one market to another often fails because local needs, languages, and constraints differ. He advocates for localized solutions that respect regional specifics.
EVIDENCE
He cites the challenge of copying regulations, the need to localize them, and gives Google’s IndicGenBench as an example that supports 29 Indian languages, 12 scripts, and four language families, illustrating the importance of localization [50-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Warnings against transplanting regulations without adaptation appear in [S23] and the challenges of “same activity, same risk” adaptation are detailed in [S24]; Sachin’s localisation example is cited in [S1].
MAJOR DISCUSSION POINT
Localizing regulations
AGREED WITH
Lee Tiedrich, Brad Staples
Argument 2
Co‑creation model: open‑source frameworks, capacity building, workforce upskilling (Sachin Kakkar)
EXPLANATION
Sachin proposes moving from a traditional technology transfer model to a co‑creation approach where developers and governments collaborate on open‑source frameworks, capacity building, and upskilling. This model treats standards and regulations as enablers rather than barriers.
EVIDENCE
He outlines three dimensions: open-source frameworks (e.g., the Secure AI Framework (SAIF) and the COSI coalition), capacity building (sharing threat intelligence, tools like SynthID), and workforce upskilling (digital literacy, grants to institutes) [61-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift to open-source co-creation and capacity development is discussed in [S25] and government open-source policy in [S26]; the co-creation concept is also highlighted in [S1].
MAJOR DISCUSSION POINT
Co‑creation for AI development
AGREED WITH
Amit Chadha, Amanda Craig Deckard, Julian Waits, Lee Tiedrich
Argument 3
Emergence of AI agents vs AI defenders; self‑defending adaptive systems (Sachin Kakkar)
EXPLANATION
Sachin describes a future where AI agents can both attack and defend infrastructure, arguing that AI‑driven defenders can reverse the traditional defender’s dilemma. He envisions self‑defending adaptive systems that automate security tasks.
EVIDENCE
He explains that AI agents can scale attacks on critical infrastructure, but AI-powered defenders can automate 80 % of drudgery, giving defenders an aggregate advantage and enabling self-defending adaptive systems [263-273].
MAJOR DISCUSSION POINT
AI agents vs AI defenders
Argument 4
India shifting from back‑office to front‑office AI role, focusing on grassroots impact (Sachin Kakkar)
EXPLANATION
Sachin asserts that India has moved beyond being a low‑cost back‑office hub to becoming a front‑office AI innovator that addresses grassroots challenges such as agriculture, healthcare, and education. He emphasizes AI’s impact at the community level.
EVIDENCE
He notes that India now develops products for the world, cites the lack of data leaks during COVID-19, and describes India’s role as a front-office for AI rather than a cost base [311-321].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s transition to a front-office AI innovator is described in [S29] and [S30]; Sachin’s framing of this shift is reflected in [S1].
MAJOR DISCUSSION POINT
India’s evolving AI role
Amit Chadha
3 arguments, 164 words per minute, 1854 words, 675 seconds
Argument 1
Over‑regulation can stifle innovation (Amit Chadha)
EXPLANATION
Amit cautions that excessive regulation may hinder AI innovation and urges a balanced approach. He suggests careful calibration of regulatory scope to avoid choking technological progress.
EVIDENCE
He explicitly states that “too much of regulation can stifle innovation” and calls for careful consideration of how much regulation to apply [131-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between regulation and innovation is examined in [S31] and [S32], and Amit’s caution aligns with the broader discussion in [S1].
MAJOR DISCUSSION POINT
Regulation vs innovation
AGREED WITH
Lee Tiedrich, Brad Staples
DISAGREED WITH
Lee Tiedrich
Argument 2
Multilingual AI reduces prompt‑injection vulnerabilities; new jailbreak benchmark (Amit Chadha)
EXPLANATION
Amit explains that AI systems lacking robust multilingual capabilities are vulnerable to prompt‑injection attacks in low‑resource languages. He highlights a new multilingual jailbreak benchmark as a step toward mitigating this risk.
EVIDENCE
He describes how attackers can exploit poorly supported languages (e.g., Tamil) to bypass safety systems, and notes the development of a multilingual jailbreak benchmark by ML Commons that now includes Indic and Asian languages [250-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual jailbreak benchmarks and prompt-injection risks are detailed in [S33] and [S34]; the vulnerability of low-resource languages is noted in [S1].
MAJOR DISCUSSION POINT
Multilingual security
Argument 3
Internal upskilling via curriculum updates, billable‑time training, patent incentives (Amit Chadha)
EXPLANATION
Amit outlines his company’s strategy to address the AI skills gap by updating college curricula, integrating upskilling into billable work, and incentivizing innovation through patents and publications. This approach aims to keep the workforce future‑ready while maintaining productivity.
EVIDENCE
He details three actions: collaborating with colleges to refresh curricula [150-155], upskilling employees during billable time, and tracking personal technology effort, with time spent on technology development beyond billable hours rising from 19 % to 52 % and patent filings increasing from 50 to 200 per year [156-170].
MAJOR DISCUSSION POINT
Company‑driven upskilling
AGREED WITH
Sachin Kakkar, Amanda Craig Deckard, Julian Waits, Lee Tiedrich
Amanda Craig Deckard
2 arguments, 180 words per minute, 1537 words, 509 seconds
Argument 1
Microsoft Elevate program targeting 20 million Indians, educator training, multilingual AI (Amanda Craig Deckard)
EXPLANATION
Amanda describes Microsoft’s Elevate initiative, which aims to skill millions of Indians through educator programs, cloud access, and multilingual AI tools. The goal is to close the AI skills gap at scale.
EVIDENCE
She cites a commitment to upskill 10 million Indians by 2030, already reaching 5.6 million, and a doubled target of 20 million by 2030, supported by the new Elevate for Educators program partnering with schools, vocational institutes, and higher-education institutions [122-126].
MAJOR DISCUSSION POINT
Microsoft Elevate scaling
Argument 2
Investment in infrastructure, connectivity, energy, and multilingual AI to close gaps (Amanda Craig Deckard)
EXPLANATION
Amanda outlines Microsoft’s broader investment strategy to bridge digital divides, focusing on foundational infrastructure such as connectivity, energy, AI compute, and multilingual capabilities. She emphasizes measuring diffusion to guide interventions.
EVIDENCE
She references a blog detailing five investment areas: hard infrastructure (connectivity, AI compute), scaling, multilingual AI, local AI deployment with community use cases, and diffusion measurement to assess impact [404-408].
MAJOR DISCUSSION POINT
Infrastructure and multilingual investment
Brad Staples
1 argument, 151 words per minute, 1335 words, 529 seconds
Argument 1
AI value concentration is not inevitable; intentional design and international collaboration required (Brad Staples)
EXPLANATION
Brad argues that the projected concentration of AI economic value in Western countries and China is not a foregone conclusion. He calls for intentional design, international collaboration, and inclusive innovation to democratize AI’s impact.
EVIDENCE
He cites estimates that 70 % of AI value could reside in Western economies, warns against accepting this outcome, and lists needed actions such as international collaboration, workforce development, private-sector partnerships, and trust, safety, and security measures [2-8].
MAJOR DISCUSSION POINT
Democratizing AI value
Julian Waits
3 arguments, 172 words per minute, 437 words, 152 seconds
Argument 1
Reliance on foreign talent and the need for continuous AI literacy (Julian Waits)
EXPLANATION
Julian points out that the U.S. tech sector depends heavily on foreign workers to stay competitive, and stresses the necessity of ongoing AI literacy to avoid falling behind. He links talent mobility to national AI capability.
EVIDENCE
He states that without foreign workers the U.S. would fall behind, emphasizing the reliance on overseas talent for AI development and the need for continuous learning [188-193].
MAJOR DISCUSSION POINT
Foreign talent dependence
Argument 2
AI can automate 60 % of security tasks, but human judgment remains essential (Julian Waits)
EXPLANATION
Julian notes that AI can handle the majority of routine security tasks, yet a significant portion still requires human expertise for risk assessment. This hybrid approach balances efficiency with necessary human oversight.
EVIDENCE
He explains that AI could eliminate 60 % of current security work, while the remaining 40 % needs human determination of risk for governments, defense, or businesses [366-368].
MAJOR DISCUSSION POINT
Human‑AI security partnership
DISAGREED WITH
Audience member
Argument 3
Rapid industry change demands continuous learning; optimism about AI solutions (Julian Waits)
EXPLANATION
Julian reflects on the fast pace of AI development, warning that skills become obsolete quickly and emphasizing the need for continual learning. He remains optimistic that AI itself can help solve the challenges it creates.
EVIDENCE
He remarks that the industry is moving so quickly that today’s important skills may disappear in five years, and stresses careful use of AI while expressing confidence that solutions will emerge [300-304].
MAJOR DISCUSSION POINT
Continuous learning and optimism
Audience
2 arguments, 157 words per minute, 519 words, 198 seconds
Argument 1
Concern about rapid technology outpacing upskilling efforts (Audience)
EXPLANATION
The audience member expresses worry that AI’s exponential speed outstrips upskilling programs, creating information arbitrage and polarizing effects. They highlight the risk of rapid displacement if standards and literacy do not keep pace.
EVIDENCE
The participant mentions the speed of AI, information arbitrage between pioneers and broader society, potential polarization of democracies, and fears of exponential displacement beyond the 40 % figure cited earlier [360-376].
MAJOR DISCUSSION POINT
Speed vs upskilling gap
DISAGREED WITH
Julian Waits, Audience member
Argument 2
Need for last‑mile connectivity and community‑level empowerment (Audience)
EXPLANATION
The audience member asks how to bridge the digital divide, emphasizing the importance of last‑mile internet connectivity and grassroots empowerment in rural India. They reference a local nonprofit’s work as an example of needed action.
EVIDENCE
She describes visiting the Digital Empowerment Foundation, which focuses on last-mile connectivity and community empowerment, and asks for concrete steps to address the divide [398-400].
MAJOR DISCUSSION POINT
Last‑mile connectivity
Agreements
Agreement Points
International AI standards must be adaptable to local languages, cultures and regulatory contexts
Speakers: Lee Tiedrich, Sachin Kakkar, Brad Staples
Need for global standards with cultural customization (Lee Tiedrich) Risk of copying regulations without local adaptation (Sachin Kakkar) Democratizing AI requires intentional design and international collaboration (Brad Staples)
All three speakers stress that while global standards are essential, they must be accelerated and customized for different languages, cultures and local market constraints to avoid ineffective copy-pasting of regulations [38-41][50-52][6-8].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for culturally sensitive standards is highlighted in discussions on inclusive AI for Africa and localisation initiatives, and the sovereign AI framework that can be tuned to national contexts [S49][S60][S61].
Regulation should be carefully calibrated and preceded by robust technical evaluation to avoid stifling innovation
Speakers: Lee Tiedrich, Amit Chadha, Brad Staples
Evaluation‑first approach before imposing regulation (Lee Tiedrich) Over‑regulation can stifle innovation (Amit Chadha) Question on trade‑off between global standards/regulation and innovation (Brad Staples)
Lee argues that evaluation frameworks must be established before debating regulation, Amit warns that excessive regulation harms innovation, and Brad explicitly asks about the trade-off, indicating shared concern for a balanced regulatory approach [89-95][131-133][79-80].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the timing of standards warn that premature regulation can hinder innovation, while other analyses show that well-balanced regulation can enable innovation when based on solid technical assessment [S53][S54][S55][S68].
Building AI capacity through widespread upskilling, lifelong learning and AI literacy is critical
Speakers: Sachin Kakkar, Amit Chadha, Amanda Craig Deckard, Julian Waits, Lee Tiedrich
Co‑creation model: open‑source frameworks, capacity building, workforce upskilling (Sachin Kakkar) Internal upskilling via curriculum updates, billable‑time training, patent incentives (Amit Chadha) Microsoft Elevate program targeting millions of Indians, educator training, multilingual AI (Amanda Craig Deckard) Continuous learning and AI literacy needed to keep pace with rapid change (Julian Waits) Emphasis on AI literacy and problem‑solving skills in education (Lee Tiedrich)
All speakers highlight the need for systematic skill development, from open-source co-creation and university curricula to corporate incentives and national programmes, to ensure the workforce can adapt to fast-moving AI technologies [61-78][156-170][119-126][364-368][387-393].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports identify large training gaps and call for continuous learning and targeted upskilling programs to mitigate displacement risks [S58][S63][S64][S66].
Public‑private partnership and multi‑stakeholder collaboration are essential for responsible AI deployment
Speakers: Brad Staples, Lee Tiedrich, Amanda Craig Deckard, Sachin Kakkar, Amit Chadha
Democratizing AI requires international collaboration (Brad Staples) Global cooperation across academia, industry, and civil society is essential (Lee Tiedrich) Public‑private partnership as a core pillar of Microsoft’s approach (Amanda Craig Deckard) Co‑creation model with governments and industry (Sachin Kakkar) Collaboration with government for capacity building and budgeting (Amit Chadha)
The panel repeatedly stresses that coordinated action among governments, industry, academia and civil society is needed to create standards, build capacity and ensure trustworthy AI [6-8][283-284][118-119][61-78][71-74].
Multilingual and multicultural AI capabilities are vital for inclusion, security and reducing vulnerabilities
Speakers: Sachin Kakkar, Amit Chadha, Amanda Craig Deckard, Lee Tiedrich
Risk of copying regulations without local adaptation; example of IndicGenBench supporting 29 Indian languages (Sachin Kakkar) Multilingual AI reduces prompt-injection vulnerabilities; new multilingual jailbreak benchmark (Amit Chadha) Investment in multilingual AI as part of Microsoft’s holistic approach (Amanda Craig Deckard) Need to customize standards for different languages and cultures (Lee Tiedrich)
All four speakers underline that supporting many languages and cultural contexts not only promotes equitable access but also mitigates security risks such as prompt-injection attacks, calling for dedicated benchmarks and tools [50-52][250-257][111-113][40-41].
POLICY CONTEXT (KNOWLEDGE BASE)
AI applications for security benefit from multilingual support, and localisation projects such as India’s GenBench illustrate the importance of cultural and linguistic diversity in AI systems [S48][S49][S60].
Similar Viewpoints
Both caution that premature or heavy regulation can hinder AI progress and advocate for technical evaluation as a prerequisite to policy decisions [89-95][131-133].
Speakers: Lee Tiedrich, Amit Chadha
Evaluation‑first approach before regulation (Lee Tiedrich) Over‑regulation can stifle innovation (Amit Chadha)
Both emphasize the importance of multilingual AI and localized solutions to ensure relevance and effectiveness across diverse linguistic communities [50-52][111-113].
Speakers: Sachin Kakkar, Amanda Craig Deckard
Risk of copying regulations without local adaptation (Sachin Kakkar) Microsoft Elevate program targeting multilingual AI capability (Amanda Craig Deckard)
Both argue that unchecked regulation or market concentration threatens equitable AI outcomes and that deliberate design and balanced policy are required [131-133][2-8].
Speakers: Amit Chadha, Brad Staples
Over‑regulation can stifle innovation (Amit Chadha) AI value concentration is not inevitable; needs intentional design and collaboration (Brad Staples)
Both stress that AI literacy, critical thinking and lifelong learning are essential to keep the workforce and citizens adaptable to rapid AI change [364-368][387-393].
Speakers: Julian Waits, Lee Tiedrich
Emphasis on AI literacy and continuous learning (Julian Waits) Emphasis on AI literacy and problem‑solving skills in education (Lee Tiedrich)
Both present concrete corporate strategies that blend open‑source collaboration with internal skill development to address the AI talent gap [61-78][156-170].
Speakers: Sachin Kakkar, Amit Chadha
Co‑creation model: open‑source frameworks, capacity building, workforce upskilling (Sachin Kakkar) Internal upskilling via curriculum updates, billable‑time training, patent incentives (Amit Chadha)
Unexpected Consensus
Rapid AI advancement outpacing upskilling efforts and causing potential displacement
Speakers: Audience, Julian Waits, Amit Chadha
Concern about rapid technology outpacing upskilling, information arbitrage and polarization (Audience) AI can automate 60 % of security tasks but 40 % still needs human judgment; displacement will occur (Julian Waits) Internal upskilling and patent incentives as a response to fast‑changing skill needs (Amit Chadha)
While the audience warned that AI’s exponential speed could outstrip training programs, both Julian and Amit acknowledged inevitable displacement and described proactive upskilling measures, revealing an unexpected alignment on the urgency of continuous learning [360-376][364-368][156-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses from Australia and global studies highlight that the speed of AI adoption exceeds workforce training capacity, raising concerns about job displacement [S57][S58][S62][S65].
Overall Assessment

The panel displayed a high degree of consensus around four core themes: (1) the need for globally coordinated AI standards that are culturally and linguistically adaptable; (2) a cautious, evaluation‑first approach to regulation to preserve innovation; (3) extensive public‑private collaboration coupled with robust capacity‑building programmes; and (4) multilingual AI as both an inclusion and security imperative. These shared positions suggest a collective willingness to pursue coordinated, inclusive and technically grounded AI governance frameworks.

Strong consensus across speakers, indicating that future policy and industry initiatives are likely to prioritize collaborative standard‑setting, balanced regulation, large‑scale upskilling and multilingual inclusivity, which together could mitigate concentration of AI value and enhance equitable AI diffusion.

Differences
Different Viewpoints
Timing and role of regulation versus innovation
Speakers: Amit Chadha, Lee Tiedrich
Over‑regulation can stifle innovation (Amit Chadha) Evaluation‑first approach before imposing regulation (Lee Tiedrich)
Amit warns that excessive regulation will choke AI innovation and calls for careful calibration of regulatory scope [131-133]. Lee argues that robust technical evaluation frameworks should be established first, and that regulators often cannot keep pace with rapid technology, suggesting regulation should follow evaluation rather than precede it [92-95]. The two speakers differ on when and how regulation should be applied.
POLICY CONTEXT (KNOWLEDGE BASE)
Ongoing debate about when to introduce regulations shows that both premature and delayed rules can hinder or help innovation, underscoring the need for balanced timing and scope [S53][S54][S55][S56][S67][S68].
Perceived speed of AI displacement and security automation
Speakers: Julian Waits, Audience member
AI can automate 60 % of security tasks, but human judgment remains essential (Julian Waits) Concern about rapid technology outpacing upskilling efforts (Audience)
Julian states that AI can eliminate about 60 % of current security work, leaving a remaining 40 % that still requires human judgment [366-368]. An audience participant counters that the exponential speed of AI could lead to far higher displacement, potentially 100 %, and that upskilling programs cannot keep pace, raising fears of polarization and rapid job loss [360-376]. This reflects a disagreement on the magnitude and immediacy of AI-driven displacement.
POLICY CONTEXT (KNOWLEDGE BASE)
Observations on rapid AI deployment in security contexts and its impact on employment illustrate concerns about the pace of displacement, echoed in security-focused AI discussions [S48][S56][S57].
Unexpected Differences
Speed of AI development versus upskilling capacity
Speakers: Audience member, Lee Tiedrich, Julian Waits
Concern about rapid technology outpacing upskilling efforts (Audience) Emphasis on AI literacy and lifelong learning as a remedy (Lee Tiedrich) Optimistic view that AI itself will help solve the displacement problem (Julian Waits)
The audience raised alarm that AI’s exponential pace could outstrip education and upskilling programs, potentially leading to massive displacement [360-376]. Lee responded by stressing AI literacy, problem-solving skills and public-private partnerships to keep the workforce adaptable [387-393]. Julian, however, expressed confidence that AI will provide solutions despite the rapid change [300-304]. The stark contrast between the audience’s urgency, Lee’s educational remedy, and Julian’s optimism was not anticipated earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses point to a gap between fast AI progress and slower upskilling pipelines, calling for accelerated lifelong learning initiatives to keep pace [S62][S63][S64].
Overall Assessment

The panel displayed broad consensus on the importance of democratizing AI, building capacity and fostering public‑private collaboration. Disagreements centered on the timing and nature of regulation, the perceived immediacy of AI‑driven job displacement, and the preferred mechanisms for addressing the skills gap. While most participants agreed on the goals of inclusive AI development and security, they diverged on policy sequencing and implementation tactics.

Moderate – the disagreements are substantive but do not fracture the overall consensus. They highlight the need for coordinated policy design that balances innovation, regulation, and rapid upskilling, especially for developing regions.

Partial Agreements
All three panelists agree that the AI skills gap must be closed, but propose different pathways: Amanda emphasizes large‑scale public‑private skilling programs and educator partnerships; Sachin advocates a co‑creation approach with open‑source tools, capacity‑building grants and digital‑literacy initiatives; Amit focuses on aligning college curricula, integrating upskilling into billable work and incentivising innovation through patents and research time. The shared goal is workforce readiness, while the means diverge.
Speakers: Amanda Craig Deckard, Sachin Kakkar, Amit Chadha
Microsoft Elevate program targeting 20 million Indians, educator training, multilingual AI (Amanda Craig Deckard) Co‑creation model: open‑source frameworks, capacity building, workforce upskilling (Sachin Kakkar) Internal upskilling via curriculum updates, billable‑time training, patent incentives (Amit Chadha)
Both speakers concur that AI standards and regulations cannot be one-size-fits-all. Lee calls for accelerated international standards that are customizable to languages, cultures and norms [38-41], while Sachin warns against transplanting regulations wholesale and highlights the need for localized test-beds such as IndicGenBench [50-52]. They differ on the mechanism: Lee focuses on adapting global standards, Sachin on building local solutions.
Speakers: Lee Tiedrich, Sachin Kakkar
Need for global standards with cultural customization (Lee Tiedrich) Risk of copying regulations without local adaptation (Sachin Kakkar)
Takeaways
Key takeaways
AI’s economic value is currently concentrated in Western economies and China, but this outcome is not inevitable; intentional design and international collaboration are needed to democratize AI benefits.
Global AI standards are essential, but they must allow cultural, linguistic, and regulatory customization for different regions.
Co-creation between governments and developers, through open-source frameworks, capacity-building, and workforce upskilling, is more effective than a simple transfer of regulations.
Over-regulation can stifle innovation; an evaluation-first, evidence-based approach should precede regulatory mandates.
Public-private partnerships are critical for closing the AI skills gap; programs such as Microsoft Elevate, internal upskilling, curriculum updates, and patent incentives are being deployed.
AI-specific security risks (e.g., prompt-injection, jailbreaks) require multilingual robustness and the development of AI-defender agents; continuous scanning and adaptive defenses are necessary.
Bridging the digital divide requires investment in basic infrastructure (connectivity, energy), multilingual AI, local use-case development, and systematic measurement of AI diffusion.
India is transitioning from a back-office to a front-office role in AI, focusing on grassroots impact and local innovation rather than merely cost-center services.
Resolutions and action items
Expand and localize the ISO 42001 standard and related drafts (e.g., the NIST zero draft) to incorporate cultural and linguistic variations.
Google to extend the IndicGenBench benchmark to additional Indic and low-resource languages and to contribute to the multilingual jailbreak benchmark with ML Commons.
Microsoft to scale the Elevate program to 20 million Indians by 2030, including the new Elevate for Educators initiative and continued cloud/AI access for schools and vocational institutes.
L&T Technology Services to continue internal upskilling through billable-time training, curriculum alignment with industry needs, and incentive structures (patent and publication recognition).
Develop AI-defender agents and self-adapting security stacks (as described by Sachin Kakkar) to counter AI-driven attacks.
Implement continuous auditing mechanisms for AI systems rather than one-time certifications, as advocated by Sachin Kakkar.
Establish voluntary data-exchange frameworks and standardized data licensing (e.g., Creative-Commons-style licences for data) to reduce friction in cross-border collaborations.
Unresolved issues
How to ensure rapid upskilling keeps pace with the exponential speed of AI advances, especially in developing economies. The precise balance between global regulatory frameworks and local adaptation without creating compliance burdens for startups. Effective mechanisms for last‑mile connectivity and digital empowerment in rural areas beyond high‑level investment commitments. Quantitative metrics for measuring AI diffusion and the impact of public‑private interventions over time. Long‑term economic displacement effects of AI automation and the extent to which AI can replace versus augment human security analysts.
Suggested compromises
Adopt a “creative tension” approach: start with global standards and regulations, then adapt them to local constraints (e.g., bandwidth, linguistic diversity). Use a mixed “carrot‑and‑stick” strategy for workforce upskilling—combine incentives (patents, recognition, budgets) with clear productivity expectations. Balance regulation with innovation by prioritizing evaluation‑first technical frameworks before imposing mandatory rules.
Thought Provoking Comments
There is a tension. On the one hand we want standards to apply across borders so companies can have responsible technology flow, but on the other hand we need to customize them for different cultures, languages, and norms.
Highlights the fundamental dilemma of creating universal AI standards while respecting cultural diversity, pushing the conversation beyond technical specifications to sociopolitical considerations.
Shifted the discussion toward the need for flexible, locally‑adaptable standards and prompted Sachin and others to talk about localization of regulations and tools, deepening the debate on how global frameworks can be made inclusive.
Speaker: Lee Tiedrich
Copy‑pasting regulations from international markets to local markets may not work. We need continuous scanning and auditing to avoid temporal drift as AI evolves.
Introduces the concept that static, one‑time compliance checks are insufficient for rapidly evolving AI systems, adding a dynamic, lifecycle‑focused perspective to governance.
Led to a follow‑up on the trade‑off between global standards and local adaptation, and set up later remarks about AI‑driven attackers versus AI defenders, expanding the conversation to ongoing security monitoring.
Speaker: Sachin Kakkar
We track how much personal time employees spend on technology development beyond billable hours; that rose from 19 % to 52 %, and our patents per year jumped from 50 to 200. We reward patents, papers, and talks as personal achievements.
Provides a concrete, innovative model for aligning employee incentives with AI upskilling and innovation, showing how a company can turn upskilling into a productivity driver rather than a cost.
Introduced a new dimension to the skills‑gap discussion, prompting other panelists (e.g., Amanda) to compare corporate‑wide programs with individual incentive structures, and highlighted practical ways to embed AI learning into daily work.
Speaker: Amit Chadha
Microsoft Elevate aims to upskill 20 million Indians by 2030, with programs for teachers, vocational institutes, and higher‑education partners, and we measure diffusion to inform interventions.
Shows a large‑scale, data‑driven public‑private initiative that combines infrastructure, multilingual AI, and continuous measurement, illustrating a holistic strategy to bridge the digital divide.
Steered the conversation toward measurable impact and the importance of tracking adoption, influencing later audience questions about speed of deployment and prompting Lee to stress data‑standardization.
Speaker: Amanda Craig Deckard
If we didn’t have foreign workers in the U.S., we would fall behind the rest of the world. We are forced to use labor in other societies that appreciate STEM technology.
Points out the geopolitical dependency on talent from developing countries, framing the skills gap as not just a corporate issue but a national competitiveness concern.
Triggered a broader reflection on global talent flows, leading Brad to ask about carrot vs stick incentives and prompting audience concerns about rapid displacement and equity.
Speaker: Julian Waits
AI literacy is essential. We must teach students how to think, problem‑solve, and communicate so they can adapt as technology changes, not just hand them a fixed skill set.
Shifts the focus from technical upskilling to foundational education that equips people to navigate future AI disruptions, emphasizing long‑term resilience.
Answered the audience’s worry about exponential AI progress, reframed the skills‑gap debate toward education reform, and reinforced the call for public‑private partnerships in curriculum development.
Speaker: Lee Tiedrich
We need voluntary foundations and standardized data agreements (like Creative Commons for data) to enable easy, low‑friction data exchange across regions.
Identifies a practical bottleneck—data sharing—that underpins many of the earlier points about localization, standards, and AI benefits, proposing a concrete solution.
Closed the panel by linking earlier themes (standards, localization, trust) to a tangible action item, prompting Amanda to reference Microsoft’s measurement of diffusion and reinforcing the need for collaborative infrastructure.
Speaker: Lee Tiedrich
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from abstract concerns about AI concentration to concrete, actionable strategies. Lee Tiedrich’s articulation of the standards‑cultural tension set the stage for debates on localization and continuous governance, which Sachin expanded with the idea of ongoing audits. Amit Chadha’s insider view of incentive‑based upskilling and Amanda Craig’s large‑scale Elevate program offered contrasting but complementary models for closing the skills gap. Julian Waits highlighted the geopolitical reliance on talent, prompting a deeper look at equity and displacement, while Lee’s later emphasis on AI literacy reframed the problem as one of education rather than mere training. Together, these comments created turning points that broadened the scope, introduced new dimensions (data sharing, measurement, talent flows), and steered the panel toward a consensus that collaborative, adaptable, and measurable approaches are essential for democratizing AI benefits.

Follow-up Questions
What are the gaps between upskilling for AI and the real economic displacement that need to be addressed in the transition process?
Understanding these gaps is crucial to design policies and reskilling programs that mitigate job loss and ensure a smooth economic shift as AI diffuses.
Speaker: Audience member (unidentified)
How can the digital divide be bridged to make AI access more equitable, especially in rural and underserved regions?
Identifying concrete strategies for last‑mile connectivity, affordable infrastructure, and inclusive AI literacy is essential to prevent exclusion and associated societal risks.
Speaker: Rita Soni (Audience)
How can continuous scanning and auditing of AI systems be implemented to avoid temporal drift, rather than relying on one‑time audits?
AI models evolve rapidly; ongoing monitoring is needed to ensure compliance with standards and maintain safety over time.
Speaker: Sachin Kakkar
What mechanisms can be created for data standardization and voluntary data‑sharing agreements to reduce friction in cross‑border AI collaboration?
Standardized data formats and clear licensing (e.g., Creative Commons‑like for data) would facilitate international cooperation and enable localized AI solutions.
Speaker: Lee Tiedrich
What approaches are needed to improve AI literacy and embed adaptable problem‑solving skills across the workforce and education systems?
Broad AI literacy, supported by public‑private partnerships, is key to enable individuals to keep pace with fast‑moving AI technologies.
Speaker: Lee Tiedrich
How can self‑defending, AI‑powered security agents be developed to create an ‘AI‑versus‑AI’ defense against emerging cyber threats?
Research into autonomous defensive agents could shift the defender’s dilemma, providing scalable protection against AI‑driven attacks on critical infrastructure.
Speaker: Sachin Kakkar
What are the implications of rapid, exponential AI advancement on information arbitrage, power polarization, and democratic stability?
Investigating how speed and unequal access to AI knowledge may exacerbate societal divides is vital for policy and governance frameworks.
Speaker: Audience member (unidentified)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

The Future of Public Safety AI-Powered Citizen-Centric Policing in India

The Future of Public Safety AI-Powered Citizen-Centric Policing in India

Session at a glanceSummary, keypoints, and speakers overview

Summary

The discussion centered on how the Ministry of Panchayati Raj (MOPR) is leveraging language-AI, particularly the Bhashini platform, to make rural digital governance more inclusive, transparent and participatory [1-4][5][18-20]. MOPR’s eGram Swaraj portal, originally English-only, was enhanced by Bhashini in 2023, allowing panchayat officials and citizens to view financial and planning data in their native languages, thereby reducing reliance on intermediaries [5][21-24]. The AI-enabled “Sabha Sar” tool converts audio or video recordings of Gram Sabha meetings into draft minutes in the local language, addressing the bottleneck that 65 % of surveyed secretaries identified as their most time-consuming task [6-8][42-45][56-60]. AI analysis of drone-derived Swamitva data enabled roof-wise solar potential mapping for over 2.38 lakh gram panchayats, linked to the PM Surigar Yojana for community-driven renewable projects [11-17]. Rapid adoption was demonstrated when Uttar Pradesh onboarded 59 000 gram panchayats onto eGram Swaraj within 40 days, showing that a user-friendly product meeting both ministry and panchayat needs can overcome perceived implementation hurdles [115-121][118-120]. Capacity-building programmes have been launched to train officials, while ongoing surveys reveal that many languages are still unsupported, prompting the addition of eleven new languages such as Assamese, Bodo and Santali [33-35][63-68]. Connectivity issues were mitigated by designing Sabha Sar as a separate upload tool, so recordings can be made offline and uploaded once connectivity is available, which helped villages with limited internet [53-56]. The overall experience has been described as an “incredible journey” with positive feedback from villages, demonstrating cultural acceptance of AI-driven governance [61]. Amit Kumar highlighted that the solution requires no extra hardware, just a mobile phone, and that a “human-in-the-loop” model ensures accuracy while gradually automating agenda tracking and public disclosure of meeting outcomes [73-80][75-79]. Both speakers agreed that open, API-based architecture is essential for long-term sustainability, avoiding vendor lock-in and enabling modular expansion to future use cases like image-based issue reporting and spatial development planning [170-205]. Looking ahead, MOPR is extending vernacular AI support to other departments (the Department of Drinking Water and Sanitation has asked to use Bhashini for village water committee meetings), already delivers daily weather forecasts for every gram panchayat, and plans a “Meri Panchayat” interface that can automatically interpret citizen-submitted photos and route them to the appropriate agency [152-154][235-239]. The participants concluded that language-AI, when built on a public, sovereign stack and coupled with strong stakeholder engagement, can transform gram panchayats into effective platforms for participatory democracy at a population scale [255-264][246-252].


Keypoints


Major discussion points


Language-AI (Bhashini) as the key enabler of inclusive, participatory rural governance – The Ministry realised that English-only portals alienated villagers; Bhashini was introduced to translate finance-commission grant data, meeting minutes and other documents into local languages, allowing citizens to read and act on information in their own tongue [5][21-24][56-59].


Sabha Sar: AI-driven voice-to-text meeting summarisation that improves transparency and efficiency – A survey of 8,000 panchayat secretaries showed that 65 % of respondents named minute-taking as the activity weighing most heavily on their time; the AI tool now generates draft minutes from audio/video recordings, which are edited and uploaded, dramatically reducing workload and creating a public record [42-48][53-60][63-68].


AI integration with existing schemes and new service-delivery models – Drone-survey data from the Swamitva programme was repurposed to map rooftop solar potential and linked to the PM Surigar Yojana [11-16]; the Ministry is expanding AI-driven services (e.g., spatial development plans, “Meri Panchayat” issue-capture, Pancham chatbot) to cover health, roads, street-lights and other citizen needs [210-218][294-306][308-311].


Operational challenges and scaling successes – Deploying eGram Swaraj across 59 000 gram panchayats in Uttar Pradesh in 40 days demonstrated that a well-designed, user-friendly product can overcome registration, digital-signature and connectivity hurdles [114-122]; language coverage gaps are being addressed by adding 11 more languages to Bhashini [65-68]; simple mobile-phone-based tools are emphasized to ensure rapid adoption [140-141].


Need for open-architecture, data-sovereignty and sustainable AI ecosystems – Participants stressed that modular, API-based designs, interoperable standards and Indian data residency are essential to avoid vendor lock-in, ensure long-term scalability and protect against geopolitical risks [170-205][179-205].


Overall purpose of the discussion


The conversation was a fireside-chat aimed at showcasing how the Ministry of Panchayati Raj is leveraging language-AI (Bhashini) and related AI tools (Sabha Sar, Pancham, etc.) to make rural digital governance transparent, participatory and scalable, while sharing lessons learned, challenges faced, and a roadmap for broader integration across ministries and services.


Tone of the discussion


The tone remained largely optimistic and collaborative, celebrating concrete achievements (e.g., rapid UP rollout, 1.15  lakh meetings processed) and the transformative potential of AI. It was interspersed with candid acknowledgments of practical hurdles-language gaps, connectivity, capacity building-and a forward-looking, solution-oriented attitude toward overcoming them. The dialogue stayed constructive throughout, moving from problem identification to success stories and future vision.


Speakers

Shri Alok Prem Nagar


Area of Expertise: Rural governance, public administration, AI-enabled service delivery in Panchayati Raj


Role / Title: Senior official, Ministry of Panchayati Raj (MOPR), Government of India


Source: [S1]


Amit Kumar


Area of Expertise: Digital transformation, AI applications for governance, public sector innovation


Role / Title: Participant / Contributor (affiliation not specified in transcript)


Source: [S3]


Moderator


Area of Expertise: Session moderation, facilitation of policy discussions


Role / Title: Moderator of the fireside chat / conference session


Source: [S4]


Additional speakers:


Ms. Deepika


Area of Expertise:


Role / Title: Invited to felicitate Mr. Alok at the close of the session


Source:


Swalokhji (referenced in the dialogue)


Area of Expertise:


Role / Title: Mentioned in the moderator’s prompt; no speaking turn recorded in the transcript


Source:


Full session reportComprehensive analysis and detailed insights

The Ministry of Panchayati Raj (MOPR) was created in 2004 to empower Gram Panchayats and to nudge state governments toward legislation that makes local bodies truly self-governing [1-3]. In 2019, a People’s Plan Campaign demonstration of the then-English-only eGram Swaraj portal at a Gram Sabha in Karnataka revealed that villagers could not understand the displayed information [5]; this ultimately led the Ministry to adopt Bhashini, a language-AI layer that translates portal content into the vernacular with a single click [5-6][S34-36].


From 2023 onward, the Manthan event formalised the Bhashini vision, and between 2024-2025 the eGram Swaraj platform was upgraded to embed the AI-driven translation engine. By making finance-commission grant data, planning documents and expense tables available in local languages, Bhashini removed the need for a literate intermediary, thereby fostering inclusive digital governance and building participation and trust [21-24][56-59][5-6]. This vernacular access is now a core pillar of the Ministry’s “good servant, bad master” guardrails: AI outputs are reviewed by officials before publication, ensuring that the system never runs fully autonomously [56-60][274-280].


A major bottleneck identified in an 8 000-panchayat-secretary survey was the time spent producing minutes of Gram Sabha meetings-65 % of respondents flagged this as their most time-consuming task [42-45]. In response, MOPR launched the AI-driven Sabha Sar tool in 2025. The workflow requires only a mobile-phone recording of the meeting; the audio/video file is uploaded to the Sabha Sar platform (an offline-capable upload tool) [53-56]; Bhashini first transcribes the recording into English, an AI engine condenses the transcript into draft minutes, and Bhashini then renders the draft back into the local language, after which the secretary makes minimal edits and publishes the minutes [53-60][56-60]. This “human-in-the-loop” approach dramatically reduces workload while creating a public, searchable record [7-8][63-68]. By 4 February 2026, more than 1.15 lakh Gram Sabha meetings had been processed through the system [39].
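
To make the pipeline concrete, the following is a minimal sketch in Python. The function names (transcribe_to_english, summarise_minutes, translate) are hypothetical placeholders; the session does not describe the actual Bhashini or Sabha Sar APIs, so this is purely illustrative of the record-transcribe-summarise-translate-review flow.

```python
from dataclasses import dataclass

@dataclass
class DraftMinutes:
    """Draft minutes awaiting review by the panchayat secretary."""
    language: str
    text: str
    approved: bool = False

# NOTE: the three functions below are hypothetical placeholders standing in
# for the ASR, summarisation and translation services; they are not the real
# Bhashini or Sabha Sar APIs.
def transcribe_to_english(audio_path: str) -> str:
    """Speech recognition: turn the Gram Sabha recording into English text."""
    raise NotImplementedError

def summarise_minutes(transcript_en: str) -> str:
    """Condense the English transcript into draft minutes."""
    raise NotImplementedError

def translate(text: str, target_language: str) -> str:
    """Render the draft minutes back into the local language."""
    raise NotImplementedError

def sabha_sar_pipeline(audio_path: str, local_language: str) -> DraftMinutes:
    """Recording -> English transcript -> draft minutes -> local language.

    The recording is made offline on a mobile phone and uploaded whenever
    connectivity is available; the secretary then reviews and edits the
    returned draft before it is published (human in the loop).
    """
    transcript_en = transcribe_to_english(audio_path)
    draft_en = summarise_minutes(transcript_en)
    draft_local = translate(draft_en, local_language)
    return DraftMinutes(language=local_language, text=draft_local)
```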


Parallel pilots have leveraged existing data assets for new services. Drone surveys undertaken for the Swamitva land-recording scheme produced dense point-cloud data that AI teams repurposed to estimate rooftop-solar potential; the resulting roof-wise solar-panel recommendations are now visible for 2.38 lakh Gram Panchayats via the Gram Manchitra portal and are linked to the PM Surigar Yojana, enabling community-led renewable-energy campaigns [11-16][17]. A pilot in Guwahati used a camera-mounted bus to detect drains, potholes and other local issues and assign them to the responsible agencies; MOPR’s Meri Panchayat mobile interface already captures citizen-submitted photos, and the envisaged next step is for AI to interpret each image, route it to the mapped department and trigger escalation when resolutions are delayed. The Department of Drinking Water and Sanitation (DDWS) approached MOPR to use Bhashini for Village Water Committee (VWC) meetings, extending vernacular AI support to water governance. Spatial Development Plans, initially resisted, were prepared for zoning and road-network design in 34 highway-adjacent panchayats, with visualisations winning community buy-in; the success led Andhra Pradesh to mandate spatial planning for all its panchayats, an area the Ministry sees as amenable to AI tools.
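
For the envisaged image-routing step, a rough sketch follows. The issue labels, department names and seven-day escalation window are invented for illustration, and classify_issue stands in for whatever image model would actually be used; none of these details are specified in the session.

```python
from datetime import datetime, timedelta

# Illustrative mapping only; real department assignments are not specified
# in the session.
DEPARTMENT_FOR_ISSUE = {
    "overflowing_drain": "sanitation",
    "pothole": "road_maintenance",
    "streetlight_fault": "electricity",
}
ESCALATION_WINDOW = timedelta(days=7)  # assumed, not stated in the session

def classify_issue(image_bytes: bytes) -> str:
    """Hypothetical image classifier returning an issue label."""
    raise NotImplementedError

def route_issue(image_bytes: bytes, reported_at: datetime) -> dict:
    """Assign a citizen-reported photo to a department and record the
    deadline after which the unresolved complaint should escalate."""
    label = classify_issue(image_bytes)
    department = DEPARTMENT_FOR_ISSUE.get(label, "gram_panchayat_office")
    return {
        "issue": label,
        "assigned_to": department,
        "escalate_after": reported_at + ESCALATION_WINDOW,
    }

def needs_escalation(ticket: dict, resolved: bool, now: datetime) -> bool:
    """Escalate a ticket that is still unresolved after the window elapses."""
    return not resolved and now >= ticket["escalate_after"]
```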


Operational challenges have been addressed through a frugal, mobile-first design and strong stakeholder engagement. Uttar Pradesh’s onboarding of 59 000 Gram Panchayats onto eGram Swaraj in just 40 days demonstrated that perceived barriers-digital-signature registration, abandonment of checkbooks, and connectivity issues-can be overcome when the product meets both ministry and panchayat needs [115-122]. Capacity-building programmes launched the previous year are now scaling this knowledge across the country [33-35]. Language coverage, however, remains incomplete; 11 additional languages (including Assamese, Bodo, Maithili and Santali) are being added to Bhashini to close the gap [65-68].


Both speakers agree on the necessity of open, API-based architectures for long-term sustainability, but they differ on the breadth of cross-ministerial sharing. Amit Kumar stresses that modular, interoperable standards and data residency within India are essential to avoid vendor lock-in and to ensure sovereign AI infrastructure, and highlights the DPDP Act as the privacy safeguard underpinning public trust. Alok Prem Nagar cautions that advising other ministries-each with robust legacy systems-constitutes “dangerous territory” and prefers to focus on the Panchayati Raj context. This reflects a moderate disagreement on how broadly the design principles should be propagated.
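
One way to read this “ready to shift” principle in code is to depend on a narrow interface rather than a specific vendor’s SDK, so a translation or ASR provider can be swapped without touching application logic. The sketch below uses entirely hypothetical adapter classes and is not actual eGram Swaraj or Bhashini integration code.

```python
from typing import Protocol

class TranslationProvider(Protocol):
    """Narrow interface the application depends on; any provider that
    implements it can be swapped in without changing calling code."""
    def translate(self, text: str, source: str, target: str) -> str: ...

class BhashiniAdapter:
    """Hypothetical adapter around a Bhashini-style service (illustrative)."""
    def translate(self, text: str, source: str, target: str) -> str:
        raise NotImplementedError

class LocalOpenModelAdapter:
    """Hypothetical adapter around a locally hosted open model, keeping
    data residency within the deployment's own infrastructure."""
    def translate(self, text: str, source: str, target: str) -> str:
        raise NotImplementedError

def render_portal_page(content_en: str, target: str,
                       provider: TranslationProvider) -> str:
    # The portal only knows about the interface, not the vendor, so the
    # choice of provider stays a deployment decision rather than a code change.
    return provider.translate(content_en, source="en", target=target)
```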


Future integrations are already being piloted. A larger catalogue of “common minimum services” is being defined, and the Ministry is moving beyond the minimum to deliver health, road-maintenance, street-lighting and water-drainage services through AI-enhanced portals [210-218]. The Pancham WhatsApp-based chatbot enables two-way communication with sarpanches and secretaries and can generate AI-driven audio-video messages for rapid dissemination [308-311]. The public portals now let any villager drill into finance-commission grant usage, view geotagged assets and track plan execution, thereby fostering transparency and citizen empowerment [30-32].


Underlying the deployment is India’s “five-layer” architecture: open-source large language models built on a modular, API-first stack that keeps costs low and ensures technological sovereignty [246-252][S34-36]. The scale of these deployments-over 1.15 lakh meetings processed and rapid state-level rollouts-demonstrates that AI can operate at unprecedented magnitude, comparable to earlier successes such as Aadhaar and UPI [39][64-66][246-252].


In conclusion, the MOPR’s AI-driven, language-inclusive platform demonstrates how public-stack AI can scale participatory governance across India’s vast, multilingual rural landscape while maintaining transparency, accountability and technological sovereignty.


Session transcriptComplete transcript of the session
Shri Alok Prem Nagar

Just a little background, why Ministry of Panchayati Raj exists at the centre, because rural local governance is a state subject. We are rather new in this business, we came into being in the year 2004. Our objective was, or the purpose why we exist, is how we can empower panchayats, how we can nudge states into having acts that really transform our people into self -governing, responsible local bodies and so on. So, as a part of our job, we also have oversight over how the ministry, how the panchayats spend their finance commission grants. Finance commission grants are devolution grants, they go directly to the people in their… bank accounts and then subsequently… all panchayats, all two and a half lakh of them they are present on eGram Swaraj right from planning to the payment stage, everything is done on a portal which is called eGram Swaraj this portal works in the English language so I will tell you in 2019 when we were starting something called the People’s Plan Campaign, I happened to attend a Gram Sabha in the state of Karnataka I was there for something like 45 minutes and I was felicitated and sat on stage and I didn’t understand a thing and then it struck me I had this thing that how do you expect these people really to relate to what is happening because it is public money Everybody in the panchayat needs to know what kind of plans are uploaded How many works got done that were asked for the plans How much did it cost them to do it And subsequently they can raise issues in the meetings pertaining to the works close to their residences And along came Bhashini I think we had in the year 2023 an event called Manthan Where we invited a lot of people from the industry to tell us how we could conduct our business better And so Bhashini was a revelation And imagine that a person from a panchayat is looking at the expenses page for his gram panchayat or her gram panchayat And then by the end of the month, he has to pay the expenses And by a click of a button, they are able to see it in their own language It was magic That was the starting point.

Yeah, and subsequently, of course, we went from there and we found out through a survey that what really hurts a panchayat secretary is not to be able to produce the minutes of meeting in time, which are very important, which are the only record of a panchayat’s proceedings. And then again, using Bhashani and another tool, we were able to create Sabha Sar, in which if you input the video slash audio recording of your meeting, you are able to get a minuted draft, which you can then edit and upload. So that was miracle number two. And briefly, if I could also address Swamitva, the scheme that you mentioned. The Swamitva is a scheme where. Drone surveys are carried out over all the village habitations, so there are these pictures.

that are subsequently converted to orthorectified images and they lead to property rights for the people living inside those villages. But the way the images have been captured, there is dense point cloud information, all of which was getting wasted. Why? Because we were confining our attention only to the orthorectified images. So we had the AI guys look at that and then they converted all those rooftops that they could see into the solarization potential. As a result of which now, out of the 3 .3 lakh gram panchayats where drone surveys have been carried out, in 2 .38 lakh gram panchayats, you can go to gram Manchitra, and you can zoom into your village and then you can click the icon corresponding to the solar ability potential and it will tell you roof -wise how many panels can you fit there.

We’ve gone further and we’ve integrated that with the PM Surigar Yojana portal. As a result of which, the Gram Panchayat can drive it like a campaign and lead to greater rewards for everybody all around.

Moderator

Actually, it reaches the last mile citizen when you talk about those benefits. So India’s last mile operates in local languages and dialects, as you mentioned, solving that problem. So in your view, how critical is language AI in ensuring that digital governance platforms are inclusive and participatory and increases citizen trust and participation in Gram Sabhas?

Shri Alok Prem Nagar

Like I said, people are now able to follow what… what was something that was written in, they could still see it, of course. In the English language, then they’d have to go to the person who they knew to be very smart in the village and they’d have this person read it out to them. Now they can see it at their leisure. Not just people here, but people outside who are working in Mumbai can see what is happening in their panchayats close to Pune or something and immediately they can get active about it. And the militarization tool that I mentioned, that opens a whole new set of avenues now. You can have a record, then against that you can have action -taken reports and then you could have follow -up in the next meeting.

It makes it all amenable to a very systematic representation on portals. So that is what some of the states have already started doing. And it is truly remarkable that, anybody can go in there. And when I say anybody, I don’t mean just the panchayat secretaries. Anybody in a village can drill into their gram panchayat’s record and see that corresponding to the finance commission grants for any year, what was the plan against which how much has been executed, how many bills were prepared against each activity, and what is the status of the payment, whether it has been completed, where the asset exists, the geotags, and then you can zoom in and maybe see it on gram panchayat.

So there are great rewards for everybody all around. And we need to, of course, now intensify it through a capacity building training program. That is something we started doing from the previous year. But it has been an incredible journey. And it is being adopted all over.

Moderator

So, Alokji, let’s talk about… Let’s talk a little bit about Sabha Saar Impact. let’s let our audience know about it and with its launch on 14th August 2025 MOPR introduced an AI enabled voice to text meeting summarization tool powered by Bhashini ASR services. So as of 4th February 2026 over 1 ,15 ,100 15 gram sabha meetings have been processed. So this is a good number I need a thank you for the round of applause. So what structural changes have you observed in the panchayat functioning after sabha sar?

Shri Alok Prem Nagar

Sabha sar was one thing that we carried out for the convenience of the panchayats and the panchayat secretaries as opposed to E. Gram Swaraj which was which was our selfish motive we wanted panchayats to plan there and show all their vouchers there so that we could tell that this is how the money has been spent but sabha sar actually came through and As a part of a survey that was carried out using RapidPro by UNICEF, we asked something like 8 ,000 panchayat secretaries all over the country that how do you spend your time? How much of it is spent in inspections and attending programs and meetings and making records? So one thing that came through was the conduct and recording of meetings, meetings was the, in 65 % of the respondents, that was the activity that was sitting, you know, very heavy on their entire time availability.

And so having realized this and having the help of Bhashani, we converted it into a tool. So in Bhashani, it’s very simple. There is no big. Standard operating procedure as it were. So if you’re standing having a meeting, there has to be a recording device. It could well be your mobile phone. And then through audio or video recording, you can just place it each time somebody speaks. And later on, you input this into the Sabasa tool. The Sabasa tool is not something that is a part of the device on which you carried out your recording. So the issue related to connectivity in villages is something that we’ve been able to sidestep. And once you do that, it gives you a draft minute of meeting.

So Bhashani turns it into English. And the English thing is monetized using the AI engine. Again, Bhashani gives it back to them in their own language. And that efficiency. Yes, it’s voila. The person can just make a few changes and upload it. And we have. We’ve had some heartfelt gratitude coming to us from villages. as a result of this.

Moderator

Okay. So has the structured documentation improved transparency, participation tracking or monitoring of meeting frequency and agenda quality too?

Shri Alok Prem Nagar

Now that the minute is ready, if there are five items, ten items, so the states that have really gone ahead and adopted it, which is Odisha, which is Tamil Nadu, which is Tripura, all these people are into the second stages now where they are looking at the minutes of meeting and converting it into or refining it into tools that help them keep track of the activities after they have been created. We also realized through our meetings that why is the number just 1 ,15 ,000? So there are a whole lot of people whose languages do not exist on Bhashani. So from there, we ask those states to provide Bhashani with the necessary expertise so that they can train their bots.

And they’re already working on something like 11 more languages, which includes Assamese and Bodo and Maitali and Santal and whatnot. Yes. So those languages are also. So it’s been a very gratifying experience. And then the learning continues.

Moderator

Yeah, it’s commendable that things have reached to that level. So over to you, Amitji, from an accountability lens, does structured documentation change behavior with the governance systems?

Amit Kumar

Thank you. So I think, you know, so if you have understood the enormity of the situation, right, what we are talking about, 200. 150 ,000 plus gram panchayats and different kind of languages. so just to circle back if you look at the frugality of the situation right so so so for example if you look at in india we generally people talk about either we live in a bullock cart stage right or we are aspiring for bullet train right so so the point is if ai has to tell us in terms of you know how we learn in the future how will we transform so we cannot i mean leave out 900 plus million people who are living in villages absolutely so the idea is not to make it very very urbanized you know very very kind of elitist idea that you know that ai is only for urban ai is only for industries ai is only for commercial sector so obviously this is a journey right so you have to start somewhere so for example i mean the frugality what i was talking about that we did not ask gram panchayat to invest anything right all they need to have is a lot of money and they have a mobile phone which any which way they have right and the idea is just to kind of record and upload obviously there will be some you know challenges and kind of resistance also in the beginning But, you know, once they get used to it, so for example, today we are asking them to kind of, you know, upload your recording, right?

The rest is done by system. And system also has a provision of, you know, human in the loop so that we can go and correct it. Now tomorrow we see the next step what we will be doing, what we can do perhaps, right? When the next meeting happens, we can also populate the agenda from last meeting, right? So what was discussed last time, what was committed, whether you are doing or not doing, right? And then everything goes to kind of public domain. So generally the people who live in city, they know that, you know, when there is a RWA meeting, nobody goes and attend, right? But they all, you know, wages, warfare in the WhatsApp group, right?

So same in the village also, it’s not easy to bring people, right? But once they start getting the hang of it, right? That okay, there is a meeting, I am getting the mom and it’s available in the public domain. We are using AI, AI is for good. AI can do it. AI can also be leveraged for rural sector, right? Why it has to be very, very elitist only for passport, say, wallet, say, right? Right. So so that’s just a beginning. It’s just a journey. Right. And also, if you see from an idea point of view, I mean, this is a phenomenal idea for Ministry of Panchayati Raj. Let me congratulate sir and the entire team to think of something like that.

Right. Because the AI is all about idea and use case. Right. If you have the right idea, you can do wonders. But you have to have idea and kind of, you know, muscles to execute it. So that way, I believe that in this whole documentation will do wonders for them. Graham Panchayat will also realize something which was missing in the most part of the word that, you know, the record keeping accountability, transparency, so on, so forth. Because generally these decisions were taken by some people only and executed by some. And the large population was largely kept out of it, knowingly or unknowingly. Right. So I think that’s what I said, that, you know, it will change the way they were.

it will change the way they think because this is only for a you know kind of we are starting only with a let’s say meetings but now they will start thinking and there will be demand from states and otherwise right what more can be done with AI so broader case would be achieved yeah Sabasa is an example like Praman we are doing we have launched this Pancham you know bought also for all elected and selected representatives so I think it’s a great you know kind of experience efficiency would obviously help them adopt I mean let me tell you in our own corporate meetings we are still some of us making notes right despite being on teams despite using co -pilots despite having all tools at our disposal but we are still using it right we expect a junior guy to take notes and circle back so that’s a cultural change which you have to also see and these changes and these changes couldn’t have been possible if we wouldn’t have the infrastructure like Bhashni right because ministry on its own how ministry got benefited, we have infrastructure like Basni, right?

We have the you know, GPUs got available to us through the NDIA mission, right? Otherwise, you know, procurement itself could have been a big challenge, right? So we have a team to kind of build applications. So I think you know, it takes a village to move something, right? So that’s what has happened here.

Shri Alok Prem Nagar

Thank you for sharing your thoughts. In fact, just continuing with that, the Department of Drinking Water and Sanitation has actually approached us that the meetings of their village VWCs village water committees they want to use Bhashini for that and there has been some initial interaction between the two.

Moderator

That’s commendable, I would say. That’s awesome. So Alokji, let’s talk of some implementation challenges in rural India with AI. AI in rural governance is transformating but complex. So what are the biggest operational challenges, infrastructure, though a bit, I think Amit you were about to share that, but then infrastructure, training, dialect diversivity and connectivity. So what challenges are you facing? How receptive are panchayat functionalities and rural citizens to AI -enabled systems?

Shri Alok Prem Nagar

Challenges, of course, there are many and you would have anybody tell you. What we have found out, the adoption of eGram Swaraj by our villages is gram panchayats. A case in point, Uttar Pradesh has got something like 59 ,000 gram panchayats. And for Uttar Pradesh to onboard eGram Swaraj seemed like an impossible task because it involved registering your digital signing certificates and then everybody agreeing to completely dispense with checkbooks. All your payments were then going to be… Can you imagine Uttar Pradesh did it in 40 days flat, all 59 ,000 gram panchayats. So my point was that if you are ready with a product that addresses their needs and it is friendly and it meets, of course, my need was that I needed the money well accounted for and their need was a system that could make it very easy for them to do it.

So we met halfway and if UP can do it in 59 ,000, I am not prepared to hear an excuse from any other state in the country. It’s a trial by fire. Likewise for Sabasar. Sabasar, again, I said initially that there was a demand that was indicated from the state. So when we set out to meet that, we were clear what is it that we are looking for and people were so forthcoming. In fact, Bhashini also enabled me. to write letters to the states in their languages and people were gushing with affection and what not. I got a letter in Telugu for the first time and all that. So there are challenges but then the Ram Panchayats are predisposed to meet you halfway.

So you need to begin that journey and we have seen that with regard to a number of things. There have been campaigns every year they carry out a campaign from 2nd October to the 31st of December which extends to January typically where all two and a half lakh Ram Panchayats prepare their Ram Panchayat development plans and upload it on the portal. So 2 .5, 250 ,000 Ram Panchayats all of them planning for the next year and so before you enter the next financial year their plans are ready. I mean we don’t… We don’t do it in the departments, in the ministries. And all these Ram Panchayats have… not done it once, twice. They started in 2018. They’ve continued to do it ever since.

In the COVID year, there was a request that campaign. So there was a massive pushback from the states that no, we want to do it. The inertia was so great that they still did it. So there are challenges but if we make an application like you were saying that this is a simple recording device, this is a mobile phone, there aren’t things that you need to procure to set it up. So if you make a simple tool, people would grab it with both hands. So I think that is the embracing of challenges rather with the response we are getting with Bhashani.

Moderator

So for ministries delivering last mile services such as Ministry of Rural Development and the Ministry of Agriculture and Farmers Welfare, what lessons from MOPR’s AI journey would you share? How important is open architecture and in your sense?

Shri Alok Prem Nagar

That is dangerous territory. I am not in a position where I could start advising anybody because they’ve got pretty robust systems of their own. If you look at Manrega Soft and the PM Avas Yojana, because they are running schemes which are very pointed. Avas Yojana is just about houses. Manrega is a scheme where there is, of course, it’s as large as the things that you do in the Finance Commission grants, but it is fairly well organized. And in all of these, typically, the beneficiary is the individual. In Panchayati Raj, there are individuals at the end of it, but our emphasis is on the institution, the panchayat, and not just E. Gram Swaraj and the things that we do for their accounting and planning.

We also hooked up with the… Meteorological Department… and there are daily forecasts being generated for every gram panchayat. This people are able to see on their phones and all with the similar ability as they are able to see everything using Bhashini. So it’s a great enablement all around and it can only get better.

Moderator

Absolutely. So, Amitji, over to you. How critical is open architecture ensuring long -term sustainability and avoiding vendor lock -in?

Amit Kumar

If I can take a minute and talk about the previous question also.

Moderator

Sure, please go ahead.

Amit Kumar

Sir rightly mentioned that different ministries have got a different mandate. It’s not an apple to apple comparison. But see, you also have to see the panchayati raj, the main role of panchayati raj, what I understand is a mobilization. because they are not running major schemes on their own compared to others. And generally the best practices doesn’t have to be in form of technology or architecture only. The idea is that if you go down from top, there are two different ministries and if you go to the village, you will see the same infrastructure, same set of people are only working from both departments. So the idea is if one can do, others can also do. So there is a lot of learning in terms of method that how we could overcome, how could we mobilize, how we could implement some of these solutions.

And I am sure we know that RD and agriculture are also doing a lot of things, but their mandate is much bigger. But they can also take a lot of pride or learning from the success which we have. What was the second question? The second one was that how? Critical is open architecture in ensuring long -term sustainability and avoiding the window. So you must be hearing this word called sovereignty quite a lot, right, nowadays. So the whole idea of, you know, being sovereign in any part of the, you know, technology, be it defense, be it IT, be it any way, is the survivability, right? So the idea is despite, in spite any kind of, you know, geopolitical risk, we should survive.

Our system should run, right? So for that, generally people confuse sovereignty with also making India local, et cetera. So that’s not the case, right? We will always have some technology from outside. But we have to design in a way that it is kind of ready to shift, right? So either from a technology point of view, we have the interoperability, the standards which we have chosen, the models which we have chosen, the infrastructure which we can move around, and the teams which we can control, right? Right. So the data residency has to be within India and data is with us. So obviously if we have trained on one, we can train on another, something else also.

So the idea is also to look a little bit long term. See, what has happened that when we started, obviously, there were a lot of POCs. Nobody knew, right, how AI will behave. Still, we don’t know. Still, we don’t know, right? I mean, so obviously, that you have to start somewhere, right? And then you have to also ensure that in future, when we start with one use case, it becomes easy, right? When the department itself becomes fully AI enabled and we have 10 AI use cases running, then it becomes a problem, right? Problem of management. So that’s where I think we need to plan better for future so that, you know, we plan. I mean, it’s not that a use case is defined, then we found an easy method of procurement of infra or the model which I knew.

So going forward, I think there will be a platform approach, right? So where we have to think for future also that, okay, these AI cases are likely to come in future as well. Different kind of AI, right? DL. DL. DL. DL. DL. DL. DL. DL. DL. DL. DL. DL. DL. and accordingly we have to have open architecture like the way we did in a normal digital transformation. Even digital transformation, there used to be time where we created our own independent monolith applications. But now we are creating applications, you know, which are more API -based, can integrate with anybody, right? And futuristic, can scale our modular. So same concepts have to be used for AI

Moderator

Well said. So I think adoption comes with responsibility and that’s what you are scaling at, looking at. Swalokhji, Sabha sir demonstrates how language AI can power grassroots governance. After Sabha sir’s success, what deeper integrations do you envision with Bhashini and what does the next phase of collaboration looks like? Let’s talk about that.

Shri Alok Prem Nagar

About 16 have already started providing all those common minimum services. So minimum se nahi chalega. We wanted more. So now we had like a model list, union set, if you will, of all the desirable services that were being delivered. And the ministry carried out an exercise through an expert committee. And we have a much bigger list now. So we are not satisfied with the minimum. Now we are working towards that. But I think that AI has great potential in helping us. Thank you. So service delivery is something people don’t know to expect. and we would like through and people are going to be speaking in any number of languages. I think the next step, my government is something that has always been very invested in providing services to making ease of living easier as it were and providing all manner of things.

Everything is finally a service. You need to look at a doctor. You need your road fixed. You need a street light to be working. You want the log water to be drained or something. She needs more attention than us. Okay, over to you. So people should come to expect. they should demand these services from their Gram Panchayats. There are mechanisms of doing that because Gram Panchayats don’t have a lot of resources in terms of manpower, in terms of people who are at their beck and call to carry out the activities that are flowing from the Charter. So there are systems in a lot of these villages. You have common service centres in some states. They have their own system of common service centres like UP, like Bapuji Seva Kendra in Karnataka, like Me Seva.

So we need to take that further and we need people to be able to talk and find out if a certain service that is available to them, can they avail it in their village? If they are to do that, what is the mechanism? And if they’ve already made an application, that what should be able to tell them that where that thing currently stands? so that is a very wide area like I said that there are a number of services we also learnt of a pilot that was carried out in Guwahati where the bus used to have a camera it used to drive through, capture all number of images and basis that it would assign issue labels to them as it were if there is a drain overflowing so it takes note of that if there is a pothole then it takes note of that and then it assigns it to all these agencies whose job it becomes now to fix that so not that but maybe we have a mobile interface called Meri Panchayat which ports a lot of information from E Gram Swaraj Meri Panchayat also has the capability of capturing images of the issue that is being reported I think the next step is that image it makes sense of the image and it assigns it to the necessary department.

There are people who are mapped whose job it is to carry it out and within a certain amount of time it doesn’t happen, then there is escalation. We need to go deeper into that system. That, I think, is the next frontier. And, of course, because it involves vocalization of your demands, so bhajani is absolutely critical in this. So when we say there is a long way to go, I think that phrase is no more relevant. It’s a short way, but not even a big journey, an intelligent journey to move

Moderator

So India is building public digital infrastructure for AI at scale. So how do we balance scale with accountability and public trust? We have talked much about how we are building things. But let’s talk about the other side. And can India lead the world in population scale?

Amit Kumar

Of course it can. I am sure about that. But then multilingual AI for governance, when it comes, you would like to have a short -ended first. So one thing you all have to realize that whatever we do is a population scale and unparallel because of our size. So even our POCs exceed the kind of performance of European countries Our UP sir talked about UP 60 ,000 panchayats If you look at UP maybe it will be in top 10 country in terms of population and size. I think the world is vouching for us when it comes to the use cases So see if you look at that we have got that scale now. We have the experience behind us We did Aadhaar, we did UPI, we did Fastag, we did GST and we did Income Tax.

So now we have that confidence behind us that we can do anything of scale and with the same Prugal approach we will do 10 times cheaper than Western world and certainly not worse, better only. and also from last decade we have evolved right so for example the concept of privacy like dpdp act consent based usage like you know adhar brought so a lot of things have improved from a policy side of it now now once you have policies in place systems are easy because system themselves act as a rule you know once you have policies in place then you don’t need so much of human intervention or discretion so since we have done it since we have kind of you know done so much so now if you look at the very simple case bhashni i remember four five years back and i and amitabh used to i mean kind of debate also whether we need a bhashni okay right because we we had some of the google translate services so on for forth right but the idea is that i mean in the hindsight that was the right call right in future we have to have something called sovereignty word right we have we don’t have to depend on I mean we need to be frugal and we don’t want to use you know the applications which are very expensive from a taxpayer money point of view so similar things we have done a lot right so I think the next step for example if you look at roam around in AI summit you will see how many LLMs and SLMs we are building on our own right honorable ministers talked about five layers application I think we have ample talent to build applications LLMs we use open LLMs but we are developing our own and Basni also like one of the you know common infrastructure energy will take care right infra and chips anyway will have dependency but that’s the rest of the world also has a dependency right not that everybody has a rare earth and everybody is building chips so that way I believe that you know that and because we have that technical know how also I mean that’s our kind of bread and bread and butter nowadays right so we’ll be able to take the learnings from all these systems and we’ll move forward as of now we were a bit slow in last year or two because AI itself was new for everyone so we took some time but now I think from this year onwards we’ll really scale it up because we have tested the blood, we have seen the success and we will

Moderator

sure, thank you for sharing that so as we come towards the closure of this conversation I would like to leave with the one final thought which is like if Panchayati Raj institutions are the foundation of democracy can AI when built on a public stack and powered by language inclusion become the strongest enabler of participatory governance in 21st century just closing thoughts from you both Alokji, would you

Shri Alok Prem Nagar

absolutely he was just telling you that that we’ve been able to do things at scale this thing about UP that I told you I wear it like a badge that to have done it in some place so and it’s not an easy ask because there are so many stakeholders they’ve got various kinds of issues of their own you’ve got to engage with them address those things and if my problem is well defined and if I know what kind of a thing is going to help me redress that like Bhashini did for us I think that what you said is going to come true because that is so being able to understand my problem and knowing what parts of the problem can be fixed in what manner using the various tools that are available that is the key and I it’s not an over simplification but good servant bad master so that is something that stays and it is not it’s not going to land you in the right places if you just let it go around like an animal.

But then if you know where to put it, what modules to be inserted, where, what has been used in the background. And so that would make you more confident. I’m not really an AI person, so I’m just speaking on the strength of what I have learned. And the experience thus far has been outstanding, partly because we’ve had a very good partner. But other than that, I am not, you know, I’m not throwing it all open out to AI. I don’t wear T -shirts saying I love AI or something. But I have a problem, and it needs fixing. And I need to be able to know what aspects of AI can help me fix that in the best possible manner.

And that’s my thing on

Amit Kumar

So like Sir said, you know, Sir is not an AI person, neither am I. So if you look at, you know, that… But he was transparent enough to share that. No, no. So look at that way that none of us were, right? Yes, exactly. Because if you’re talking about AI, I have been doing this, you know, digital transformation for public sector for over 20 years. Yeah. Obviously, there was no AI, even there was no DPI, DPG also, you know, what we kind of retrofitted with the names, right? Right. So if you look at the idea of Panchayati Raj itself is a participative governance, right? That people have to assemble in the Gram Sabha and decide on the money which they’re getting, how to spend and prioritize.

Absolutely. And if AI tools like Praman and Sabha Sar and, you know, Pancham can help that strengthen, what best, you know, you can expect from a participative government, from a democratization point of view. Yes. So I think this sometimes, you know, that technology becomes secondary. Yes. In my view, most of the time, right? The ideas have to be clear in terms of what you want to achieve. and what problem you want to solve, what scale you want to solve, what are the guardrails you have to kind of, you know, also put in place. So, for example, when we do AI, that it cannot be 100 % autonomous, right? Of course. And it cannot be 100 % human in the loop also.

Because if we have each and every transaction being, you know, approved by human in the loop, then it defeats the purpose of AI. And there is no AI, right? Then we are still living in the rule -based algorithms. Algorithms. So the idea of, you know, that AI will be that we also train, monitor, have the mechanism to take complaints, have the mechanism to perfectly, you know, kind of train it better so that we improve our accuracy. So that is how AI journey. So AI journey is slightly different from the previous digital transformation journey, which were more like a transactional systems, right? So that way, I think, if you look at currently also Sabasar, I think whatever I am hearing from people market teams also, So it is giving great accuracy, right, in terms of translation and summarization.

And I’m sure whatever there are little bit areas to improve, it will improve on its own. So we cannot stop it, right? So once we have boarded a flight, then we can only get down at where we have to, right? So I think future is bright. And also from a MOPR experience point of view, it will also, I’m sure, energize and motivate a lot many others. I can say with my experience that if MOPR can do in rural, we can use AI tools. There is no stopping for us as a nation. This is truly an achievement when it comes to MOPR with the government.

Moderator

So you want to say anything regarding this, Alokji?

Shri Alok Prem Nagar

I thought of another application that works. That is something we’ve been working on, which was spatial development plans. Okay. we again engaged with a lot of panchayats that were close to the highways okay so typically if a panchayat is on a national highway close to a big city and have a population of 10 000 plus then you were eligible to participate in this program okay so there were 34 gp that we involved and we got the planning and architecture colleges to prepare spatial plans for them spatial plan would be futuristic it would zone and it would you know assign it would look into the future and see how this place was going to grow it would devise road networks or something and tell people what they would become over a period of time we had a conference with the with gram panchayats around bhopal bill and the people were so annoyed We don’t need a spatial plan.

Over a period of time, of course, we told them what it was going to be, but we had this epiphany that people need to be able to see what this spatial plan will help them become. And then we went into the next national conference with a visualization for each of these 34 spatial development plans. We showed people that if you want to become this, you have to do this, and then there was much greater enthusiasm. So the people on whom this plan falls, who are going to be subjected to this plan, if I could use those words: if they are not on board, there is no way you can carry it out. And that, I think, is an area that is wide open.

And we have seen movement after that: the entire state of Andhra Pradesh has gone ahead and said that all their planning is going to be spatial plans. So that is something that is amenable to AI tools. And a final thing I remembered is that a lot of the time we need to convey messages through audio and video. He mentioned Pancham. Pancham is a WhatsApp-based chatbot platform which allows us to have two-way conversations with all the sarpanches and panchayat secretaries in the country. So, for all these people, if there is messaging that needs to be conveyed, and if there are videos that need to be quickly created using AI tools, that would be hugely effective in getting the message across in the quickest possible way.

Thank you.

Moderator

Thank you so much for such in-depth insights on the Gram Panchayats and how things work behind the scenes. I am sure much of the audience was unaware of what is happening on the ground, and this conversation has given a new tangent to how we look at rural development. Thank you so much, Shri Alok, and thank you so much, Shri Amit, for sharing these thoughts on Gram Panchayat development. Thank you so much for this fireside chat. Thank you. I would like to call Ms. Deepika to please felicitate Mr. Alok.

Related Resources: Knowledge base sources related to the discussion topics (4)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“The Ministry of Panchayati Raj (MOPR) was created in 2004 to empower Gram Panchayats and to nudge state governments toward legislation that makes local bodies truly self‑governing.”

The knowledge base states that the Ministry of Panchayati Raj was established in 2004 to empower gram panchayats and provide central coordination for rural local governance [S1].

Confirmed (high)

“In 2019, a People’s Plan Campaign demonstration of the then‑English‑only eGram Swaraj portal at a Gram Sabha in Karnataka revealed that villagers could not understand the displayed information.”

Witnesses note that eGram Swaraj operated only in English and that during a 2019 rollout the language barrier was highlighted, prompting concerns about villagers’ comprehension [S7] and [S9].

Confirmed (high)

“Bhashini is a language‑AI layer that translates portal content into the vernacular with a single click.”

Bhashini (spelled Bhasini in the source) is described as a software that translates messages into multiple Indian languages, enabling instant vernacular rendering of portal content [S8] and [S41].

Confirmed (high)

“Bhashini’s journey began in 2023.”

The knowledge base records that Bhashini’s remarkable journey started in 2023, marking the launch of the initiative that year [S39].

Additional Context (medium)

“Bhashini rapidly scaled to support millions of daily inferences on a large GPU cluster.”

Additional context: by 2023-2024 Bhashini was handling about 15 million daily inferences on a 200-GPU system, illustrating the scale of the deployment beyond the report’s description [S39].

Additional Context (medium)

“The Bhashini plugin automatically applies translation across all pages of a website, enabling one‑click vernacular access.”

Demonstrations show that once integrated, the Bhashini plugin translates content across an entire site without further user action, supporting the report’s “single click” claim [S41].

External Sources (46)
S1
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — -Shri Alok Prem Nagar: Senior official from the Ministry of Panchayati Raj (MOPR), Government of India. He discusses the…
S2
WSIS+20 Open Consultation session with Co-Facilitators — – **Amrit Kumar** – Dynamic Team Coalition co-chair – **Jennifer Chung** – (Role/affiliation not clearly specified) Am…
S4
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S5
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S6
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S7
Nepal Engagement Session — The transformative moment came with the integration of Bhashini, India’s language AI platform, into the eGram Swaraj sys…
S8
Leaders TalkX: Local Voices, Global Echoes: Preserving Human Legacy, Linguistic Identity and Local Content in a Digital World — NK Goyal, President of the CMAI Association of India, presented a series of strategies for digital empowerment, includin…
S9
https://dig.watch/event/india-ai-impact-summit-2026/nepal-engagement-session — All panchayats, all two and a half lakh of them, they are present on eGram Swaraj. For right from planning to the paymen…
S10
Agenda item 6 — Sri Lanka:Mr. Chair, my delegation underscores the critical importance of enhancing cyber security capacities in develop…
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — So it’s actually a fairly complex, fairly advanced model, which again works in all… all Indian languages. From a bench…
S13
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Himanshu Rai: Thank you very much. It’s always useful to be the last speaker because I can claim that I had the last wor…
S14
WS #283 AI Agents: Ensuring Responsible Deployment — Will Carter: Quite a lot of thought. This has been core to our mission at Google from the beginning, from our earliest d…
S15
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S16
AI Meets Agriculture Building Food Security and Climate Resilien — Low to moderate disagreement level with significant implications for AI governance in agriculture. The differences in ap…
S17
Ethics and AI | Part 5 — Concerned that certain activities within the lifecycle of artificial intelligence systems may undermine human dignity an…
S18
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Multi-stakeholder partnerships between policy researchers and private sector are essential for surfacing potential harms…
S19
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Galia Daor:Yeah, thanks very much. I admit it’s a bit challenging to speak after Allison on that front, but I will try, …
S20
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Qian Xiao:OK, well, I’m doing a lot of research on the international governance of AI. And from our perspective, we thin…
S21
Open Forum #33 Building an International AI Cooperation Ecosystem — High level of consensus on fundamental principles with complementary rather than conflicting perspectives. The agreement…
S22
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — “Critical is open architecture in ensuring long‑term sustainability and avoiding the window”[92]. “and accordingly we ha…
S23
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — “Open source is helpful for sharing”[9]. “Now I am also happy to introduce that the Chinese industrial community has als…
S24
Nepal Engagement Session — Language AI turns an English‑only portal into a service that can be understood by any gram panchayat official in their o…
S25
Multistakeholder digital governance beyond 2025 — Integration of artificial intelligence tools for language accessibility and participation enhancement
S26
Criss-cross of digital margins for effective inclusion | IGF 2023 Town Hall #150 — Another notable observation made in the analysis is the emphasis on digital needs and stakeholder participation in advan…
S27
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S28
Agentic AI in Focus Opportunities Risks and Governance — -Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety m…
S29
Keynote-António Guterres — We need guardrails that preserve human agency, human oversight and human accountability
S30
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — The concept of “human in the loop” emerged as a critical principle for maintaining customer trust, ensuring that automat…
S31
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — Example of rural Indian farmer using early GPT models to reason over farm subsidies in local language and complete forms…
S32
How AI Drives Innovation and Economic Growth — However, Zutt acknowledged significant challenges facing developing countries in harnessing AI’s potential. Basic infras…
S33
Main Session | Policy Network on Artificial Intelligence — Muta Asguni: Thank you so much, Serena. Really happy to be here with you guys on this session at IGF. I think there i…
S34
Nepal Engagement Session — “This portal works in the English language.”[1]. “And then by a click of a button, they’re able to see it in their own l…
S35
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — “And imagine that a person from a panchayat is looking at the expenses page for his gram panchayat or her gram panchayat…
S36
How Multilingual AI Bridges the Gap to Inclusive Access — Nag describes Bhashini’s work on 22 constitutionally recognized Indian languages, covering speech recognition, text‑to‑t…
S37
Digital Government Strategy 2020 — 1. Ensure that all Central Administration services can be started online by 2016 and can be fully completed online by 20…
S38
https://dig.watch/event/india-ai-impact-summit-2026/the-future-of-public-safety-ai-powered-citizen-centric-policing-in-india — Everything is finally a service. You need to look at a doctor. You need your road fixed. You need a street light to be w…
S39
Inclusive AI_ Why Linguistic Diversity Matters — Bhashini’s remarkable journey, beginning in 2023, demonstrated impressive rapid development to support 15 million daily …
S40
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Abhishek Agarwal: Thank you, Minister. Abhishek? Yeah, I kind of echo the views of Her Excellency, like the three key in…
S41
ElevenLabs Voice AI Session & NCRB/NPMFireside Chat — During the presentation, the speakers provided a live demonstration using the Bhashini website itself, showing how the p…
S42
AI Safety at the Global Level Insights from Digital Ministers Of — And what it really means is this. It means a kind of humility and honesty, even when you may be biased in one way or ano…
S43
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Aloisia Wörgette: Thank you. Yes, that works. Thank you, Professor Kleinwächter. Dear colleagues, ladies and gentlemen, …
S44
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Pellerin Matis: Yeah, I mean, AI is a top priority for governments, as you said. But we need to be realistic, because…
S45
When AI use turns dangerous for diplomats — Diplomats are increasingly turning to tools like ChatGPT and DeepSeek to speed up drafting, translating, and summarising…
S46
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Thanks so much Priyanka. I would just make one correction as a cloud scientist. I am a cloud scientist and I am a cloud …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Shri Alok Prem Nagar
7 arguments, 139 words per minute, 3601 words, 1546 seconds
Argument 1
Bhashini enables panchayat members to view documents and minutes in their own language, increasing accessibility and participation
EXPLANATION
Bhashini translates portal content, such as expense pages and meeting minutes, into the local language of each gram panchayat, allowing villagers to read and understand information without relying on a literate intermediary. This improves accessibility and encourages greater citizen participation in local governance.
EVIDENCE
Alok recounts that a panchayat official can, with a click, view the expenses page in their own language, describing it as “magic” and a turning point for inclusivity [5]. He later notes that people can now see information in their language at their leisure, eliminating the need to ask a smart person in the village to read it out [21-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Integration of Bhashini into eGram Swaraj to provide multilingual support and overcome language barriers is highlighted in the Nepal Engagement Session and the Transforming Rural Governance report [S7][S1].
MAJOR DISCUSSION POINT
Bhashini enables panchayat members to view documents and minutes in their own language, increasing accessibility and participation
AGREED WITH
Amit Kumar
Argument 2
Sabha Sar automatically generates draft minutes from audio/video recordings, saving time and improving efficiency for secretaries
EXPLANATION
The Sabha Sar tool uses Bhashini’s ASR services to transcribe audio or video recordings of gram sabha meetings and produces a draft minute that can be edited and uploaded, dramatically reducing the time secretaries spend on minute‑taking.
EVIDENCE
Alok explains that by inputting a video or audio recording into the tool, a minuted draft is generated which can then be edited and uploaded [7]. He further details the workflow: recordings are captured on a mobile phone, uploaded to the Sabasa tool, which then produces a draft minute in English, translates it back into the local language, and allows quick finalisation [42-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Sabha Sar tool generates draft minutes from audio/video recordings and translates them back into local languages, as described in the Nepal Engagement Session and the Transforming Rural Governance report [S7][S1].
MAJOR DISCUSSION POINT
Sabha Sar automatically generates draft minutes from audio/video recordings, saving time and improving efficiency for secretaries
AGREED WITH
Amit Kumar
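To make the workflow described in the evidence above concrete, the following is a minimal sketch of a Sabha Sar-style pipeline: transcribe a meeting recording, draft minutes from the transcript, translate the draft back into the meeting's language, and hold it for a human edit before upload. The function names and structure are illustrative assumptions, not the actual Bhashini or Sabha Sar APIs.

```python
# Minimal sketch of a Sabha Sar-style minutes pipeline (hypothetical function
# names; not the actual Bhashini/Sabha Sar interfaces).

from dataclasses import dataclass

@dataclass
class DraftMinutes:
    english_draft: str       # draft minutes produced from the transcript
    local_draft: str         # same draft translated back into the meeting's language
    needs_human_review: bool = True  # human-in-the-loop edit before publishing

def transcribe(audio_path: str, language: str) -> str:
    """Placeholder for an ASR call (e.g. a Bhashini-style speech service)."""
    raise NotImplementedError

def summarise_minutes(transcript: str) -> str:
    """Placeholder for a summarisation step that drafts structured minutes."""
    raise NotImplementedError

def translate(text: str, target_language: str) -> str:
    """Placeholder for a text-translation call into the gram sabha's language."""
    raise NotImplementedError

def build_draft(audio_path: str, meeting_language: str) -> DraftMinutes:
    """Recording -> transcript -> English draft -> local-language draft."""
    transcript = transcribe(audio_path, language=meeting_language)
    english_draft = summarise_minutes(transcript)
    local_draft = translate(english_draft, target_language=meeting_language)
    return DraftMinutes(english_draft, local_draft)

# Conceptual usage: the secretary records the gram sabha on a phone, uploads it,
# receives DraftMinutes, edits the local_draft, and only then publishes the
# finalised minutes to the portal.
```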
Argument 3
Simple, mobile‑phone‑based tools and strong stakeholder engagement allowed Uttar Pradesh to onboard 59,000 gram panchayats in 40 days, showing that perceived barriers can be overcome
EXPLANATION
By offering a user‑friendly product that met both the ministry’s need for financial accountability and the panchayats’ need for ease of use, Uttar Pradesh was able to register all its gram panchayats on the eGram Swaraj portal within a remarkably short period, demonstrating that logistical challenges are surmountable.
EVIDENCE
Alok cites Uttar Pradesh’s onboarding of 59,000 gram panchayats in just 40 days, despite the need to register digital signing certificates and move away from checkbooks, illustrating rapid adoption when the solution aligns with stakeholder needs [115-120].
MAJOR DISCUSSION POINT
Simple, mobile‑phone‑based tools and strong stakeholder engagement allowed Uttar Pradesh to onboard 59,000 gram panchayats in 40 days, showing that perceived barriers can be overcome
AGREED WITH
Amit Kumar
Argument 4
While other ministries have robust systems, integrating language AI (Bhashini) with institutional services (e.g., meteorological forecasts) can enhance delivery at the panchayat level
EXPLANATION
Linking Bhashini with existing government data sources, such as daily weather forecasts from the Meteorological Department, enables gram panchayats to receive critical information in their native language, thereby improving service relevance and usability.
EVIDENCE
Alok mentions that the ministry has hooked up with the Meteorological Department to provide daily forecasts for every gram panchayat, which villagers can view on their phones through Bhashini’s language capabilities [151-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Daily weather forecasts from the Meteorological Department are delivered to gram panchayats in local languages through Bhashini integration, as noted in the Nepal Engagement Session [S7].
MAJOR DISCUSSION POINT
While other ministries have robust systems, integrating language AI (Bhashini) with institutional services (e.g., meteorological forecasts) can enhance delivery at the panchayat level
Argument 5
Extending AI to Swamitva solar‑potential mapping, spatial development plans, the Pancham WhatsApp chatbot, and image‑based issue routing will deepen service delivery
EXPLANATION
AI is being leveraged to extract solar‑panel potential from Swamitva drone imagery, to generate visual spatial development plans for villages, to power the Pancham WhatsApp‑based two‑way chatbot, and to automatically analyse images of local issues and route them to the appropriate agency, thereby broadening the range and efficiency of services offered to citizens.
EVIDENCE
Alok describes how drone-captured images from Swamitva were processed to reveal roof-wise solar potential and linked to the PM Surya Ghar Yojana portal [11-16]; he details a pilot of spatial development plans visualised for 34 gram panchayats, which later influenced Andhra Pradesh’s statewide planning approach [294-306]; he notes the Pancham WhatsApp chatbot that enables two-way conversations with sarpanchas and secretaries [308-311]; and he explains an image-based issue-routing system that tags problems (e.g., overflowing drains, potholes) and assigns them to responsible agencies [234-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Pancham WhatsApp chatbot and image-based issue routing are examples of AI extensions to service delivery, mentioned in the Nepal Engagement Session [S7].
MAJOR DISCUSSION POINT
Extending AI to Swamitva solar‑potential mapping, spatial development plans, the Pancham WhatsApp chatbot, and image‑based issue routing will deepen service delivery
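As a rough illustration of the image-based issue-routing idea mentioned in the evidence above, the sketch below pairs a placeholder image classifier with a lookup table that assigns each issue type to a responsible agency. The labels, agency names, and functions are assumptions made for illustration, not the ministry's actual system.

```python
# Illustrative sketch of image-based civic-issue routing (assumed labels and
# agency names; not the actual MOPR implementation).

ROUTING_TABLE = {
    "overflowing_drain": "Water & Sanitation Department",
    "pothole": "Rural Roads Department",
    "streetlight_fault": "Electricity Department",
}

def classify_issue(image_bytes: bytes) -> str:
    """Placeholder for a vision model that returns one of the labels above."""
    raise NotImplementedError

def route_issue(image_bytes: bytes, panchayat_id: str) -> dict:
    """Tag the image and open a ticket with the responsible agency."""
    label = classify_issue(image_bytes)
    agency = ROUTING_TABLE.get(label, "Gram Panchayat office")  # fallback owner
    return {"panchayat": panchayat_id, "issue": label, "assigned_to": agency}
```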
Argument 6
Public portals now let any citizen drill into finance‑commission grant usage, geotagged assets, and plan execution, fostering trust and enabling capacity‑building programs
EXPLANATION
The eGram Swaraj portal provides granular, searchable data on each gram panchayat’s financial plans, expenditures, bill status, asset geotags and execution status, which any villager can access, thereby increasing transparency, building trust and supporting capacity‑building initiatives.
EVIDENCE
Alok explains that any citizen can explore a gram panchayat’s finance-commission grant details, view plans, bills, payment status and geotagged assets, and even zoom into specific locations on the portal [30-32]; he adds that this openness underpins capacity-building training programmes that began the previous year [33-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The eGram Swaraj portal provides searchable data on finance-commission grants, geotagged assets and plan execution, enhancing transparency, as outlined in the Transforming Rural Governance report and the portal overview [S1][S9].
MAJOR DISCUSSION POINT
Public portals now let any citizen drill into finance‑commission grant usage, geotagged assets, and plan execution, fostering trust and enabling capacity‑building programs
Argument 7
Successful large-scale deployments (e.g., 1.15 lakh gram sabha meetings processed) demonstrate India’s ability to operate AI at unprecedented scale
EXPLANATION
Processing over a hundred thousand gram sabha meetings through the Sabha Sar AI tool showcases the scalability of India’s language‑AI infrastructure and its capacity to handle population‑scale digital governance tasks.
EVIDENCE
Alok references the figure of 1,15,000 meetings when discussing why the number is not higher, indicating that this volume has already been processed using the AI-enabled tool [64-66]; the moderator also cites the same figure of 1,15,100 meetings processed as of February 2026 [39].
MAJOR DISCUSSION POINT
Successful large-scale deployments (e.g., 1.15 lakh gram sabha meetings processed) demonstrate India’s ability to operate AI at unprecedented scale
Amit Kumar
7 arguments, 184 words per minute, 2674 words, 870 seconds
Argument 1
AI must serve the 900 million rural citizens, not just urban or elite users; language inclusion is essential for nationwide impact
EXPLANATION
Amit stresses that AI solutions for governance must be frugal, leverage existing mobile phones, and support the myriad rural languages so that the 900 million villagers are not excluded from digital transformation.
EVIDENCE
He notes that India’s 150,000-plus gram panchayats speak many languages and that AI should not be an elitist, urban-only tool; he emphasizes a frugal approach that requires only a mobile phone and no additional investment, ensuring inclusion of the 900 million rural population [73-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI solutions must be inclusive of the 900 million rural citizens, with language support and a frugal mobile-phone-only approach emphasized in the Transforming Rural Governance report and the Nepal Engagement Session [S1][S7].
MAJOR DISCUSSION POINT
AI must serve the 900 million rural citizens, not just urban or elite users; language inclusion is essential for nationwide impact
Argument 2
Structured meeting documentation enhances transparency, accountability and drives a cultural shift toward better record‑keeping
EXPLANATION
Systematic, AI‑generated documentation of gram sabha meetings improves visibility of decisions, creates accountability for officials, and gradually changes the cultural practice of informal or absent record‑keeping.
EVIDENCE
Amit remarks that structured documentation brings transparency and accountability, noting that it changes the culture of note-taking and improves record-keeping practices across villages [96-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Structured documentation of gram sabha meetings improves transparency, accountability and changes record-keeping culture, as highlighted in the Transforming Rural Governance report and the Nepal Engagement Session [S1][S7].
MAJOR DISCUSSION POINT
Structured meeting documentation enhances transparency, accountability and drives a cultural shift toward better record‑keeping
AGREED WITH
Shri Alok Prem Nagar
Argument 3
A frugal approach—requiring only existing phones and minimal investment—combined with a human‑in‑the‑loop model helps overcome initial resistance and ensures adoption
EXPLANATION
By asking panchayats to use devices they already own and keeping the cost low, while retaining a human‑in‑the‑loop for verification, the solution mitigates resistance and facilitates smooth uptake among rural officials.
EVIDENCE
Amit explains that the system only needs a mobile phone, no extra procurement, and incorporates a human-in-the-loop for correction, which helps overcome early resistance and promotes adoption [73-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A frugal approach using existing phones and a human-in-the-loop verification model helps overcome resistance and ensures adoption, described in the Nepal Engagement Session and the Transforming Rural Governance report [S7][S1].
MAJOR DISCUSSION POINT
A frugal approach—requiring only existing phones and minimal investment—combined with a human‑in‑the‑loop model helps overcome initial resistance and ensures adoption
AGREED WITH
Shri Alok Prem Nagar
Argument 4
Open, API‑based architecture, data residency within India, and modular design are critical to avoid vendor lock‑in and ensure long‑term sustainability
EXPLANATION
Amit argues that AI systems should be built on open, interoperable APIs, keep data within Indian jurisdiction, and adopt modular components so that future changes or vendor shifts do not jeopardise functionality.
EVIDENCE
He discusses the need for open architecture, data residency, API-based integration, and modular design to prevent vendor lock-in and ensure sustainability, outlining these principles in detail [170-205].
MAJOR DISCUSSION POINT
Open, API‑based architecture, data residency within India, and modular design are critical to avoid vendor lock‑in and ensure long‑term sustainability
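One way to picture the open, API-based, modular design argued for here is a thin provider-agnostic interface that government portals code against, so that the vendor or model behind it can be swapped, and kept on infrastructure within India, without rewriting the applications. The sketch below is a hedged illustration under those assumptions; the class and method names are not an official standard.

```python
# Hedged sketch of a modular, provider-agnostic translation interface
# (illustrative only; names are assumptions, not an official framework).

from abc import ABC, abstractmethod

class TranslationProvider(ABC):
    """Stable contract that portals depend on, regardless of vendor."""

    @abstractmethod
    def translate(self, text: str, source: str, target: str) -> str: ...

class InHouseProvider(TranslationProvider):
    """Backed by an indigenous model hosted on domestic infrastructure."""
    def translate(self, text: str, source: str, target: str) -> str:
        raise NotImplementedError

class VendorProvider(TranslationProvider):
    """Backed by a commercial API; swappable without changing callers."""
    def translate(self, text: str, source: str, target: str) -> str:
        raise NotImplementedError

def render_page(content: str, user_language: str, provider: TranslationProvider) -> str:
    # The portal only knows the interface, so changing providers is a
    # configuration change rather than a rewrite, which is the anti-lock-in
    # property argued for above.
    return provider.translate(content, source="en", target=user_language)
```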
Argument 5
Leveraging existing public digital infrastructure (Aadhaar, UPI, GST) and building indigenous LLMs allow population‑scale AI that is cost‑effective and sovereign
EXPLANATION
By reusing platforms such as Aadhaar, UPI, and GST and developing home‑grown large language models, India can deploy AI at the scale of its population while keeping costs low and maintaining technological sovereignty.
EVIDENCE
Amit cites India’s prior successes with Aadhaar, UPI, FASTag, and GST as foundations for scaling AI, and notes ongoing work on indigenous LLMs and the importance of sovereignty to keep costs down and reduce dependence on foreign providers [246-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Leveraging Aadhaar, UPI, GST and building indigenous LLMs enables cost-effective, sovereign, population-scale AI, discussed in the Transforming Rural Governance report and the Sovereign AI for India paper [S1][S11].
MAJOR DISCUSSION POINT
Leveraging existing public digital infrastructure (Aadhaar, UPI, GST) and building indigenous LLMs allow population‑scale AI that is cost‑effective and sovereign
Argument 6
Guardrails such as balanced AI‑human oversight, clear grievance mechanisms, and open data ensure accountability while maintaining citizen confidence
EXPLANATION
A balanced approach that combines AI automation with human verification, provides transparent grievance redressal, and makes data openly available safeguards against misuse and builds public trust in AI‑driven governance.
EVIDENCE
Amit outlines the need for AI-human balance, grievance mechanisms, monitoring, and open data to ensure accountability and maintain citizen confidence in AI systems [274-280].
MAJOR DISCUSSION POINT
Guardrails such as balanced AI‑human oversight, clear grievance mechanisms, and open data ensure accountability while maintaining citizen confidence
AGREED WITH
Shri Alok Prem Nagar
Argument 7
Prior digital successes (Aadhaar, UPI, FASTag) and a focus on sovereignty position India to become a global leader in multilingual, population‑scale AI
EXPLANATION
Building on its experience with large‑scale digital public goods and emphasizing technological self‑reliance, India is well‑placed to lead worldwide in deploying AI solutions that serve a multilingual, massive population.
EVIDENCE
Amit reiterates that India’s track record with Aadhaar, UPI, FASTag, and its commitment to building indigenous LLMs and ensuring sovereignty give it a competitive edge to become a global leader in multilingual, population-scale AI [246-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s prior digital successes and focus on sovereignty position it to lead globally in multilingual, population-scale AI, as noted in the Transforming Rural Governance report and the Sovereign AI for India analysis [S1][S11].
MAJOR DISCUSSION POINT
Prior digital successes (Aadhaar, UPI, FASTag) and a focus on sovereignty position India to become a global leader in multilingual, population‑scale AI
AGREED WITH
Shri Alok Prem Nagar
Agreements
Agreement Points
Language AI (Bhashini) enables panchayat members to view documents and minutes in their own language, increasing accessibility and participation
Speakers: Shri Alok Prem Nagar, Amit Kumar
Bhashini enables panchayat members to view documents and minutes in their own language, increasing accessibility and participation
AI must serve the 900 million rural citizens, not just urban or elite users; language inclusion is essential for nationwide impact
Both speakers stress that providing information in local languages through Bhashini removes the need for a literate intermediary and lets villagers read expense pages, plans and meeting minutes at their leisure, thereby widening participation and inclusion [5][21-24][73-78].
POLICY CONTEXT (KNOWLEDGE BASE)
The use of language AI to localise government portals aligns with documented pilots in Nepal that transformed English-only portals into multilingual services for gram panchayat officials [S24] and reflects broader multistakeholder efforts to improve language accessibility in digital governance [S25].
AI‑enabled Sabha Sar automatically generates draft minutes from audio/video recordings, saving time and improving transparency for panchayat secretaries
Speakers: Shri Alok Prem Nagar, Amit Kumar
Sabha Sar automatically generates draft minutes from audio/video recordings, saving time and improving efficiency for secretaries
Structured meeting documentation enhances transparency, accountability and drives a cultural shift toward better record-keeping
Alok describes the workflow where a recording is uploaded, Bhashini transcribes it into an English draft, translates it back, and the official makes a few edits before publishing; Amit notes that such systematic documentation raises transparency and changes the culture of record-keeping [7][42-60][56-60][96-100].
Simple, mobile‑phone‑based tools and strong stakeholder engagement enable rapid, large‑scale onboarding of gram panchayats, showing that perceived barriers can be overcome
Speakers: Shri Alok Prem Nagar, Amit Kumar
Simple, mobile-phone-based tools and strong stakeholder engagement allowed Uttar Pradesh to onboard 59,000 gram panchayats in 40 days, showing that perceived barriers can be overcome
A frugal approach, requiring only existing phones and minimal investment, combined with a human-in-the-loop model helps overcome initial resistance and ensures adoption
Alok cites Uttar Pradesh’s registration of 59,000 gram panchayats in 40 days using a phone-friendly portal, while Amit emphasizes that asking panchayats to use the mobile phones they already own, with a light-touch verification step, removes cost barriers and drives uptake [115-120][73-78].
POLICY CONTEXT (KNOWLEDGE BASE)
Mobile-first AI interfaces have been highlighted as a pragmatic solution for low-infrastructure settings, with studies noting voice-based and phone-centric tools as key enablers of digital inclusion in developing regions [S32].
India’s ability to operate AI at population scale is demonstrated by large‑scale deployments such as processing over 1.15 lakh gram‑sabha meetings and by prior digital public‑goods successes
Speakers: Shri Alok Prem Nagar, Amit Kumar
Successful large-scale deployments (e.g., 1.15 lakh gram sabha meetings processed) demonstrate India’s ability to operate AI at unprecedented scale
Prior digital successes (Aadhaar, UPI, FASTag) and a focus on sovereignty position India to become a global leader in multilingual, population-scale AI
Alok points to the 1,15,000 meetings already processed through Sabha Sar, while Amit highlights India’s track record with Aadhaar, UPI, GST and the ongoing development of indigenous LLMs as proof of capacity for population-scale AI [39][64-66][246-252].
POLICY CONTEXT (KNOWLEDGE BASE)
Large-scale AI deployments in rural India have been cited as examples of foundational AI infrastructure, including early GPT-based models used by farmers for local-language tasks, underscoring the country’s capacity for population-wide AI services [S31].
Balanced AI‑human oversight (human‑in‑the‑loop) and clear guardrails are essential to maintain accountability and public trust
Speakers: Shri Alok Prem Nagar, Amit Kumar
Sabha Sar automatically generates draft minutes from audio/video recordings, saving time and improving efficiency for secretaries
Guardrails such as balanced AI-human oversight, clear grievance mechanisms, and open data ensure accountability while maintaining citizen confidence
Alok notes that after AI produces a draft, a person makes a few edits before uploading, indicating a human-in-the-loop step; Amit stresses that AI should not be fully autonomous and must be paired with human verification, grievance redressal and open data to safeguard trust [56-60][274-280].
POLICY CONTEXT (KNOWLEDGE BASE)
International AI governance discussions repeatedly stress human-in-the-loop safeguards and guardrails to preserve accountability, as articulated in the AI Automation in Telecom summit and UN AI security council deliberations [S30][S29][S28][S15].
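A minimal sketch of the balance both speakers describe, neither fully autonomous nor reviewing every transaction, is a confidence-based gate: high-confidence outputs are published with an audit trail, low-confidence ones are queued for human review, and any published record can be contested through a complaint channel. The threshold and names below are illustrative assumptions, not a prescribed mechanism.

```python
# Minimal sketch of a human-in-the-loop gate with a grievance channel
# (threshold and function names are illustrative assumptions).

REVIEW_THRESHOLD = 0.85  # below this, a person checks the AI output

def handle_output(ai_output: str, confidence: float, audit_log: list) -> str:
    """Auto-accept confident outputs, escalate uncertain ones to a reviewer."""
    audit_log.append({"output": ai_output, "confidence": confidence})
    if confidence >= REVIEW_THRESHOLD:
        return "published"            # autonomous path, still auditable
    return "queued_for_human_review"  # human-in-the-loop path

def file_complaint(record_id: str, reason: str, complaints: list) -> None:
    """Grievance mechanism: any published output can be flagged for correction."""
    complaints.append({"record": record_id, "reason": reason})
```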
Similar Viewpoints
Both see AI, especially language AI, as a catalyst that empowers rural citizens, makes governance information understandable, and deepens participatory democracy, moving beyond elite‑only applications [3][30-32][73-78][86-88][270-272].
Speakers: Shri Alok Prem Nagar, Amit Kumar
Bhashini enables panchayat members to view documents and minutes in their own language, increasing accessibility and participation
AI must serve the 900 million rural citizens, not just urban or elite users; language inclusion is essential for nationwide impact
AI for good can strengthen participative governance and democratize decision-making at the grassroots
Both stress the importance of building capacity among panchayat officials and citizens through easy‑to‑use digital tools and training, ensuring that technology adoption translates into effective governance practice [33-34][73-78].
Speakers: Shri Alok Prem Nagar, Amit Kumar
Public portals now let any citizen drill into finance-commission grant usage, geotagged assets, and plan execution, fostering trust and enabling capacity-building programs
A frugal approach, requiring only existing phones and minimal investment, combined with a human-in-the-loop model helps overcome initial resistance and ensures adoption
Unexpected Consensus
Both speakers agree that sophisticated AI outcomes can be delivered through extremely simple, phone‑based interfaces, contrary to expectations that high‑end infrastructure is required
Speakers: Shri Alok Prem Nagar, Amit Kumar
Simple, mobile-phone-based tools and strong stakeholder engagement allowed Uttar Pradesh to onboard 59,000 gram panchayats in 40 days, showing that perceived barriers can be overcome
A frugal approach, requiring only existing phones and minimal investment, combined with a human-in-the-loop model helps overcome initial resistance and ensures adoption
While large-scale AI projects are often assumed to need complex hardware and dedicated devices, both speakers highlight that a basic mobile phone and a lightweight app are sufficient to capture recordings, upload data and generate AI-driven outputs, enabling rapid, low-cost rollout across millions of villages [115-120][73-78].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence from emerging-economy deployments shows that sophisticated AI functionalities can be accessed via basic mobile phones, supporting the claim that high-end infrastructure is not a prerequisite [S32][S31].
Overall Assessment

The discussion shows strong convergence among the participants on four core themes: (1) language‑AI is essential for inclusive rural governance; (2) AI‑generated structured documentation (Sabha Sar) improves transparency and drives cultural change; (3) frugal, mobile‑phone‑first solutions coupled with stakeholder engagement enable rapid, large‑scale adoption; (4) balanced human‑in‑the‑loop oversight and clear guardrails are needed to sustain trust. These shared positions underline a high level of consensus on how AI should be designed, deployed and governed in India’s Panchayati Raj system.

High consensus – the speakers repeatedly reinforce each other’s points, indicating a unified policy direction that can accelerate scaling of AI‑enabled rural governance while maintaining inclusivity, accountability and sovereignty.

Differences
Different Viewpoints
Open architecture and sharing of AI solutions with other ministries
Speakers: Shri Alok Prem Nagar, Amit Kumar
Alok: “I am not in a position where I could start advising anybody because they’ve got pretty robust systems of their own” (dangerous territory) [144-147]
Amit: “Open, API-based architecture, data residency within India, and modular design are critical to avoid vendor lock-in and ensure long-term sustainability” [170-205]
Alok cautions against advising other ministries, suggesting that each has its own robust systems and that cross-ministerial guidance is a “dangerous territory” [144-147]. Amit argues that an open, API-based architecture with data residency is essential for sustainability and to avoid vendor lock-in, implying that sharing design principles across ministries is desirable [170-205].
POLICY CONTEXT (KNOWLEDGE BASE)
Open architecture is identified as a critical factor for sustainable digital transformation in Indian rural governance and is recommended for inter-ministerial reuse of AI assets [S22] and broader open-source sharing practices highlighted at the AI Impact Summit 2026 [S23].
Degree of AI autonomy versus human oversight
Speakers: Shri Alok Prem Nagar, Amit Kumar
Alok: Emphasises AI as a “good servant, bad master” and warns against over-reliance, stressing the need to know where to apply AI modules [255-259]
Amit: States that AI cannot be 100 % autonomous nor 100 % human-in-the-loop, advocating a balanced approach with clear guardrails [274-280]
Alok warns that AI should be used as a tool and not become a master, implying a cautious, limited deployment [255-259]. Amit acknowledges the need for balance but explicitly outlines that AI should not be fully autonomous nor fully human-controlled, calling for structured oversight mechanisms [274-280].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI autonomy versus human oversight are central to AI agent governance, with experts warning about risks of high autonomy and advocating robust human-in-the-loop mechanisms [S14][S28][S15].
Unexpected Differences
Openness to cross‑ministerial AI collaboration
Speakers: Shri Alok Prem Nagar, Amit Kumar
Alok: Declares it “dangerous territory” to advise other ministries because they have robust systems [144-147]
Amit: Emphasises open, API-based architecture and sharing of lessons across ministries as essential for sustainability [170-205]
Given the overall collaborative tone of the discussion, Alok’s reluctance to share AI design principles with other ministries was unexpected, contrasting with Amit’s strong advocacy for open architecture and cross‑sector learning.
POLICY CONTEXT (KNOWLEDGE BASE)
Cross-sectoral AI collaboration is promoted in multistakeholder partnership frameworks, emphasizing the need for coordinated ministry-level cooperation to surface harms and test solutions [S18][S21].
Overall Assessment

The speakers largely concur on the promise of language‑AI for inclusive rural governance, the utility of AI‑generated minutes, and the feasibility of low‑cost, phone‑based solutions. The principal disagreements centre on the extent of open, cross‑ministerial architecture and the balance between AI autonomy and human oversight.

Moderate – while consensus exists on goals and many benefits, divergent views on governance models (open architecture vs ministry‑specific robustness) and on AI control mechanisms could affect how quickly and uniformly AI solutions are rolled out across sectors.

Partial Agreements
Both agree that language‑AI is essential for inclusive rural governance, but Alok focuses on the specific tool (Bhashini) while Amit stresses the broader principle of serving the massive rural population with frugal, mobile‑phone‑only solutions [5][21-24][73-78]
Speakers: Shri Alok Prem Nagar, Amit Kumar
Bhashini enables panchayat members to view documents and minutes in their own language, increasing accessibility and participation
AI must serve the 900 million rural citizens, not just urban or elite users; language inclusion is essential for nationwide impact
Both acknowledge that AI‑generated meeting minutes improve transparency and efficiency, though Alok describes the tool’s workflow while Amit highlights its cultural impact on accountability [7][42-60][96-100]
Speakers: Shri Alok Prem Nagar, Amit Kumar
Sabha Sar automatically generates draft minutes from audio/video recordings, saving time and improving efficiency for secretaries
Structured meeting documentation enhances transparency, accountability and drives a cultural shift toward better record-keeping
Both stress that low‑cost, phone‑based solutions and stakeholder buy‑in drive rapid adoption; Alok cites the Uttar Pradesh rollout as evidence, while Amit frames it as a general frugal strategy with human verification [115-120][73-78]
Speakers: Shri Alok Prem Nagar, Amit Kumar
Simple, mobile-phone-based tools and strong stakeholder engagement allowed Uttar Pradesh to onboard 59,000 gram panchayats in 40 days, showing that perceived barriers can be overcome
A frugal approach, requiring only existing phones and minimal investment, combined with a human-in-the-loop model helps overcome initial resistance and ensures adoption
Both view India’s scale‑up of AI as a strength; Alok points to the volume of meetings processed, while Amit links this capacity to prior digital public goods and sovereign LLM development [39][64-66][246-252]
Speakers: Shri Alok Prem Nagar, Amit Kumar
Successful large-scale deployments (e.g., 1.15 lakh gram sabha meetings processed) demonstrate India’s ability to operate AI at unprecedented scale
Leveraging existing public digital infrastructure (Aadhaar, UPI, GST) and building indigenous LLMs allow population-scale AI that is cost-effective and sovereign
Takeaways
Key takeaways
- Language AI (Bhashini) is essential for inclusive, participatory rural governance, enabling Panchayat members to access documents, minutes, and services in their own languages.
- The Sabha Sar tool automates meeting transcription and draft minute generation, dramatically reducing secretarial workload and improving transparency and accountability.
- Simple, mobile-phone-based solutions combined with strong stakeholder engagement can overcome perceived infrastructure and adoption barriers, as demonstrated by Uttar Pradesh’s rapid onboarding of 59,000 gram panchayats.
- A frugal, human-in-the-loop approach ensures adoption while maintaining data quality and trust.
- Open, API-based architecture, modular design, and data residency within India are critical to avoid vendor lock-in and to sustain long-term AI deployments across ministries.
- Future integrations (Swamitva solar-potential mapping, spatial development plans, Pancham WhatsApp chatbot, image-based issue routing) will deepen service delivery at the grassroots level.
- Public portals now allow any citizen to drill into finance-commission grant usage, geotagged assets, and plan execution, fostering accountability and citizen trust.
- India’s prior large-scale digital successes (Aadhaar, UPI, FASTag, GST) and ongoing development of indigenous LLMs position the country to lead multilingual, population-scale AI deployments.
Resolutions and action items
- Expand Bhashini language coverage: add at least 11 more regional languages (e.g., Assamese, Bodo, Maithili, Santali).
- States to provide language expertise and training data to Bhashini for the new language models.
- Scale capacity-building programmes for Panchayat officials on using eGram Swaraj, Sabha Sar, and related AI tools.
- Integrate Bhashini with other ministries’ service portals (e.g., Drinking Water & Sanitation, Rural Development, Agriculture) for meeting minutes and citizen-service interactions.
- Develop and pilot image-based issue detection and routing (e.g., potholes, overflowing drains) linked to relevant departmental workflows.
- Roll out the Pancham WhatsApp chatbot for two-way communication with sarpanches and secretaries, including AI-generated audio-video messages.
- Formalise an open, API-centric architecture framework for AI services across ministries, ensuring data residency and modular interoperability.
- Implement a balanced AI-human oversight model with clear grievance and correction mechanisms for AI outputs.
Unresolved issues
- Full coverage of India’s linguistic diversity: many dialects still lack Bhashini support, limiting adoption in some gram sabhas.
- Sustainable connectivity in remote villages remains a challenge for real-time AI services.
- Standardised governance and accountability frameworks for AI (e.g., audit trails, bias monitoring) have not been fully defined.
- Integration pathways with other ministries’ legacy systems need detailed technical road-maps.
- Long-term funding and resource allocation for continuous AI model training and maintenance are not yet settled.
- Mechanisms for scaling human-in-the-loop verification without creating bottlenecks require further design.
Suggested compromises
- Adopt a ‘minimum-service’ baseline first, then expand functionalities as language models mature (meeting halfway between state needs and ministry capabilities).
- Use existing mobile phones and low-cost tools rather than procuring new hardware, reducing financial barriers for Panchayats.
- Combine AI automation with human-in-the-loop review to balance efficiency with data quality and trust.
- Prioritise open, API-based modules that can interoperate with both new AI services and legacy departmental applications.
- Leverage India’s existing public digital infrastructure (Aadhaar, UPI, GST) to host AI services, avoiding duplicate investments.
Thought Provoking Comments
I was at a Gram Sabha in Karnataka for 45 minutes, was felicitated and didn’t understand a thing – that’s when I realized how impossible it is for people to relate to public money information when the portal is only in English. That realization sparked the idea of Bhashini, a language‑AI layer that lets panchayat officials see expenses and minutes in their own language.
It pinpoints the core problem – language barrier – and directly links it to the creation of a concrete solution (Bhashini). It reframes the discussion from generic digital governance to the necessity of vernacular AI for inclusion.
This comment set the thematic foundation for the whole conversation, prompting the moderator’s follow‑up on language AI’s importance and leading other speakers (e.g., Amit) to discuss scalability, frugality, and broader applications of vernacular AI.
Speaker: Shri Alok Prem Nagar
We took the dense point-cloud data from the Swamitva drone surveys, which were originally only used for orthorectified images, and asked our AI team to extract rooftop solar potential. Now 2.38 lakh gram panchayats can see roof-wise solar panel recommendations, integrated with the PM Surya Ghar Yojana portal.
Shows innovative reuse of existing data assets, turning a land‑recording exercise into a renewable‑energy service, illustrating how AI can create unexpected value‑added services for rural communities.
Introduced a new topic – leveraging AI for climate‑friendly development – and demonstrated the multiplicative benefits of AI, prompting further discussion on cross‑sectoral integration and the potential of AI beyond finance.
Speaker: Shri Alok Prem Nagar
We didn’t ask gram panchayats to invest anything – all they need is a mobile phone they already have. The system records meetings, creates drafts, and a human‑in‑the‑loop can correct it. The frugality of the solution is what makes it adoptable at scale.
Highlights a pragmatic, low‑cost adoption model that respects the resource constraints of rural bodies, emphasizing that technology must fit existing realities rather than impose new burdens.
Shifted the conversation from showcasing technology to addressing implementation barriers, reinforcing the earlier point about language inclusion with a concrete, affordable rollout strategy.
Speaker: Amit Kumar
Uttar Pradesh onboarded 59,000 gram panchayats onto eGram Swaraj in just 40 days – a task that seemed impossible. If we can do it there, no other state can claim it’s too hard.
Provides a powerful empirical example of rapid, large‑scale adoption, countering any narrative that AI solutions are inherently slow or bureaucratically cumbersome.
Served as a turning point that bolstered confidence among participants, leading the moderator to probe deeper into operational challenges and prompting Amit to discuss open architecture and sustainability.
Speaker: Shri Alok Prem Nagar
Open architecture is essential for long‑term sustainability and avoiding vendor lock‑in. We need interoperable standards, modular APIs, and data residency within India so that we can shift technology stacks without losing functionality.
Moves the discussion from specific use‑cases to systemic design principles, stressing the strategic importance of openness for national AI sovereignty and future scalability.
Redirected the dialogue toward governance of the technology itself, influencing Alok’s later remarks about expanding services and encouraging a broader view of AI as public infrastructure.
Speaker: Amit Kumar
We created spatial development plans for 34 gram panchayats near highways, visualised them, and that visualisation convinced the communities to adopt the plans. Andhra Pradesh has now decided to make spatial planning mandatory for all its panchayats.
Demonstrates how AI‑driven visual tools can change stakeholder perception and drive policy adoption, illustrating the power of AI to not just automate but also persuade and shape planning processes.
Introduced a new dimension—AI as a catalyst for participatory planning—prompting the moderator to ask about future integrations and reinforcing the narrative that AI can deepen democratic engagement.
Speaker: Shri Alok Prem Nagar
AI cannot be 100 % autonomous nor 100 % human‑in‑the‑loop. Too much human approval defeats the purpose, while full autonomy risks errors. We need a balanced model with monitoring, complaints mechanisms, and continuous training.
Provides a nuanced, realistic view of AI governance, highlighting the need for calibrated human oversight, which adds depth to the earlier optimism about AI deployment.
Tempered the discussion, leading participants to acknowledge the importance of safeguards and quality control, and setting the stage for concluding remarks about responsible AI use.
Speaker: Amit Kumar
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a descriptive overview of AI tools to a deeper exploration of why and how those tools succeed in rural India. Alok’s personal anecdote about language barriers sparked the central theme of vernacular AI, while his examples of repurposing drone data and spatial planning illustrated AI’s capacity to generate new public services. Amit’s emphasis on frugal, low‑cost implementation and open, modular architecture addressed practical adoption challenges and long‑term sustainability. The Uttar Pradesh onboarding story acted as a confidence‑boosting turning point, proving that large‑scale rollout is feasible. Together, these comments redirected the conversation toward scalability, inclusivity, governance, and the strategic design of AI infrastructure, culminating in a balanced view that acknowledges both transformative potential and the need for responsible oversight.

Follow-up Questions
How can Bhashini be expanded to support additional regional languages such as Assamese, Bodo, Maithili, Santali and others?
Alok highlighted that many users cannot access the tool because their language is not yet supported, and mentioned ongoing work on 11 more languages, indicating a need for further development and research.
Speaker: Shri Alok Prem Nagar
What are the technical and operational requirements for integrating Bhashini with existing service delivery platforms like Common Service Centres, Meri Panchayat, and automated image‑based issue detection systems?
Alok described a vision where images of local problems are automatically interpreted and routed to the correct department, but noted that deeper integration is needed, pointing to a research and implementation gap.
Speaker: Shri Alok Prem Nagar
How can AI‑driven spatial development planning tools be designed to improve community understanding, acceptance, and participation in the planning process?
Alok shared experiences with spatial plans that initially faced resistance and later gained enthusiasm after visualization, suggesting further study on effective visualization and engagement methods.
Speaker: Shri Alok Prem Nagar
What steps are required to extend Bhashini’s capabilities to the Village Water Committees (VWCs) of the Department of Drinking Water and Sanitation?
Alok mentioned an initial interaction with the department to use Bhashini for VWC meetings, indicating a need for cross‑ministry integration research.
Speaker: Shri Alok Prem Nagar
What is the most effective model for capacity‑building and training programmes that enable panchayat officials to adopt AI tools like Bhashini and Sabha Sar at scale?
Alok referenced ongoing capacity‑building initiatives and the rapid onboarding of thousands of panchayats, implying a need to evaluate and refine training approaches.
Speaker: Shri Alok Prem Nagar
What open‑architecture frameworks and standards should be adopted to ensure long‑term sustainability of AI solutions in government and avoid vendor lock‑in?
Amit emphasized the importance of open architecture, modular APIs, and interoperability for future AI use cases, calling for research into suitable frameworks.
Speaker: Amit Kumar
How can India develop sovereign AI infrastructure that guarantees data residency, model portability, and reduced dependence on foreign technology providers?
Amit discussed the concept of technological sovereignty and the need for designs that allow shifting components while keeping data within India, highlighting a strategic research area.
Speaker: Amit Kumar
What is the optimal balance between human‑in‑the‑loop oversight and autonomous AI decision‑making in rural governance applications?
Amit noted that AI cannot be 100 % autonomous nor fully human‑controlled, suggesting the need for research on governance models for human‑AI collaboration.
Speaker: Amit Kumar
What standardized APIs and modular design principles are needed to enable seamless AI integration across ministries such as Rural Development, Agriculture, and Panchayati Raj?
Amit called for a platform‑approach with API‑based applications to manage multiple AI use cases, indicating a research gap in cross‑ministerial standards.
Speaker: Amit Kumar
What measurable impacts has structured documentation (e.g., Sabha Sar) had on transparency, participation tracking, meeting frequency, and agenda quality in gram panchayats, and what further data is needed?
The moderator asked about structural changes after Sabha Sar, and Alok provided anecdotal evidence, suggesting a need for systematic impact assessment.
Speaker: Moderator, Shri Alok Prem Nagar
How does the introduction of AI tools like Sabha Sar influence behavior change, accountability, and decision‑making among panchayat officials and citizens?
Amit raised the question of behavioral change resulting from structured documentation, indicating a research opportunity to study cultural shifts.
Speaker: Amit Kumar
How can AI systems be scaled to serve India’s massive population while maintaining performance, cost‑effectiveness, and compliance with privacy regulations?
Amit highlighted India’s ability to scale digital services and the need to benchmark AI performance and cost against global standards, pointing to further scalability research.
Speaker: Amit Kumar
What strategies can increase citizen trust and participation in Gram Sabhas through language‑inclusive AI platforms?
The moderator emphasized the role of language AI in building trust and participation, suggesting further investigation into trust‑building mechanisms.
Speaker: Moderator
How effective are AI‑enabled chatbot platforms like Pancham for two‑way communication with sarpanches and panchayat secretaries, and what improvements are needed?
Alok mentioned Pancham as a WhatsApp‑based chatbot for rapid messaging, indicating a need to evaluate its impact and refine its capabilities.
Speaker: Shri Alok Prem Nagar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Regional Leaders Discuss AI-Ready Digital Infrastructure

Regional Leaders Discuss AI-Ready Digital Infrastructure

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Dr. Saurabh Garg outlining four pillars required to make data AI-ready (discoverability through clear metadata, trustworthiness via quality assessment, interoperability using unique identifiers, and usability grounded in common standards), while stressing that datasets are central to AI infrastructure and must be disseminated responsibly [3-7][8][9]. He also raised concern about the heavy compute and power demands of current models, noting that AI infrastructure is now discussed in gigawatts of power whereas a human runs on roughly 2,000 kilocalories a day, about 100 watts, and suggested a need to rethink infrastructure efficiency [11-15].
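
As a back-of-the-envelope check of that comparison (using the 2,000-kilocalorie daily intake cited in the session and the standard conversion of 4,184 J per kilocalorie), a human's average power draw works out to

\[ \frac{2000\ \text{kcal} \times 4184\ \text{J/kcal}}{86\,400\ \text{s}} \approx 97\ \text{W}, \]

roughly one ten-millionth of a gigawatt, which is the scale gap the panel was drawing attention to.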


Arndt Husar framed the panel around the “three S” of solutions, standards and skills and asked each panelist to identify the most critical gap or opportunity for the Global South [22-26][30-33]. Johanna Hill of the WTO pointed to a projected 40 % increase in world trade by 2040 if AI is leveraged, but stressed that this depends on digital infrastructure, skilled labour and policy readiness [40-42][43-44]. She added that SMEs in developing economies are already using AI for market intelligence, highlighting AI’s potential as a game-changer for small firms [45-48].


The Uzbek representative identified unequal access to compute and to advanced AI skills as critical gaps and noted a national AI strategy for 2030 that earmarks $300 million for AI projects, funds data-centre construction and aims to attract $1 billion of investment in digital infrastructure, with partnerships such as Huawei and venture-fund incentives [56-58][124-131][221-226]. He also described a $50 million allocation for AI startups and a data lake that will be openly available to innovators [225].


Indonesia’s Hamam Riza described a “triple deficit” of data, compute infrastructure and talent, including a shortage of localized data centres, and a national roadmap that targets 12 million AI-skilled workers by 2030 through a “penta-helix” ecosystem and a digital academy [95-98][200-207]. He noted that global hyperscalers have established cloud regions in Indonesia but must be integrated into a sovereign AI strategy, including special economic zones for edge computing [129-136].


ADB’s Mio Oka emphasized that reliable power, devices and broadband are foundational, but the bank is prioritising sectoral AI services, such as agriculture and water management, and mobilising private capital through joint planning, capacity building and knowledge sharing [105-114][177-186]. The WTO highlighted that AI can lower trade costs and create new AI-enabled products and services, but that fragmented regulations and high compliance costs hinder competitiveness, prompting the Secretariat to develop its AI Trade Policy Openness Index to guide countries [151-166].


Regional cooperation was presented as a way to achieve economies of scale, with ADB supporting both large economies like India and smaller states through shared infrastructure and skill programmes, while cautioning that AI solutions must align with local employment concerns [255-272][252-254].


The discussion concluded that AI is not a universal remedy; its impact will depend on coordinated investments in skills, infrastructure and appropriate regulation, a point underscored in the closing remarks [274-277].


Keypoints


AI-ready data must be discoverable, trustworthy, interoperable and usable – Dr. Garg outlined four pillars: metadata-driven discoverability, a quality-assessment framework for trust, unique identifiers to enable interoperability, and common standards/classifications for usability across systems [3-7].


Current AI models are extremely compute- and power-intensive, prompting a search for more efficient infrastructure – He questioned whether every query really needs to push “billions of bytes” through massive compute infrastructure whose power demand is measured in gigawatts, contrasted this with a human’s roughly 100 W metabolic rate, and asked whether the sector is “missing something” [10-16].


The Global South faces a “three-S” gap (solutions, standards, skills) that limits AI-driven trade and SME adoption – Arndt introduced the three S framework [24-26]; Johanna Hill highlighted AI’s potential to boost trade by up to 40 % by 2040 but warned that digital infrastructure, skills and policy readiness are essential, and she cited regulatory fragmentation and high compliance costs as barriers [39-45][151-166].


Uzbekistan is pursuing a multi-pronged, state-led AI strategy that balances infrastructure, talent and private capital – The government has earmarked $200 million for a sovereign data centre and $300 million for AI projects, and is partnering with Huawei on 5.5G/6G networks and data-lake initiatives; it is also creating venture fund-of-funds structures and tax incentives to attract domestic and foreign investors [78-82][124-131][221-226].


Indonesia is tackling a “triple deficit” of data, compute and talent through a national AI roadmap and ecosystem platforms – The roadmap calls for massive infrastructure upgrades, a “penta-helix” of government, industry, academia, civil society and media, and a digital academy (Korika) targeting 12 million AI-skilled workers by 2030; it also links AI to climate-health use cases such as disease prediction [95-100][129-138][200-209].


Overall purpose/goal


The panel discussion aimed to diagnose the foundational challenges (data quality, compute infrastructure, skills, standards) that hinder AI adoption in the Global South, share country-level strategies, and explore collaborative pathways, through trade policy, regional cooperation, and financing, to unlock AI’s development and economic impact.


Overall tone


The conversation began with a technical, problem-identifying tone (focus on data and compute constraints). It then shifted to a more optimistic, solution-oriented tone as participants described national roadmaps, public-private partnerships, and regional initiatives. Throughout, the tone remained collegial and forward-looking, ending on a reflective note that acknowledges AI’s limits while emphasizing the need for coordinated skill-building, infrastructure investment, and regulation.


Speakers

Dr. Saurabh Garg


– Expertise: AI-ready data, data discoverability, trustworthiness, interoperability, usability, AI infrastructure efficiency


– Role/Title: Secretary, Ministry of Statistics and Programme Implementation, Government of India


– Affiliation: Government of India [S10]


Arndt Husar


– Expertise: Digital infrastructure, solutions-standards-skills framework, panel moderation


– Role/Title: Moderator / Panel Chair (fireside chat)


– Affiliation: Asian Development Bank (ADB) (inferred from his remarks in the transcript)


Johanna Hill


– Expertise: Trade policy, AI’s impact on trade, digital trade, AI trade policy openness


– Role/Title: Representative, World Trade Organization (WTO)


– Affiliation: World Trade Organization [S3]


Zuhriddin Shadmanov


– Expertise: AI ecosystem development, infrastructure investment, skills up-skilling, public-private partnership in Uzbekistan


– Role/Title: Representative, Ministry of Digital Technology (Uzbekistan)


– Affiliation: Government of Uzbekistan


Mio Oka


– Expertise: AI applications in agriculture, water, irrigation; regional development financing; private-capital mobilization


– Role/Title: Country Director for India, Asian Development Bank (ADB)


– Affiliation: Asian Development Bank [S5]


Hamam Riza


– Expertise: National AI roadmap, AI talent development, AI-driven public services, climate-health nexus, AI ecosystem coordination


– Role/Title: Professor; Co-Chair, National AI Roadmap Indonesia 2030; President, Collaborative Research and Industrial Innovation in Artificial Intelligence


– Affiliation: Indonesia (government/academic sector) [S6][S7]


Additional speakers:


– None identified beyond the listed speakers.


Full session report: Comprehensive analysis and detailed insights

Opening – AI-ready data (Dr Saurabh Garg)


Dr Saurabh Garg opened the session by stating that making data AI-ready rests on four inter-linked pillars: discoverability through a well-defined metadata structure [3-4]; trustworthiness via a quality-assessment framework that validates credibility [5-6]; interoperability enabled by unique identifiers that allow reliable linking of disparate datasets [6-7]; and usability ensured by common standards and classifications [7-8]. He emphasized that these elements must be deployed in ways that preserve individual privacy while retaining business value [8-9]. Garg then turned to the compute side, noting that current AI models are “infrastructure-heavy”, with every query pushing billions of bytes through compute whose power demand is measured in gigawatts [10-12]; by contrast, the human body runs on about 2,000 kilocalories a day, roughly 100 W, prompting him to ask whether the sector is overlooking more efficient approaches [13-15].
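
As a purely illustrative aid, the sketch below shows what a single catalogue entry reflecting the four pillars might look like; the field names, types and quality-score scale are assumptions made for this example and do not correspond to any official government metadata schema.

```python
# Illustrative sketch only: field names, types and the 0-1 quality scale are
# assumptions mirroring the four pillars (discoverability, trustworthiness,
# interoperability, usability); they are not an official metadata schema.
from dataclasses import dataclass


@dataclass
class DatasetRecord:
    # Discoverability: well-defined metadata so the dataset can be found
    title: str
    description: str
    keywords: list[str]
    publisher: str
    # Trustworthiness: output of a quality-assessment framework
    quality_score: float      # e.g. 0.0-1.0 from a quality framework
    last_assessed: str        # ISO 8601 date of the last assessment
    # Interoperability: unique identifiers so datasets can be linked
    dataset_id: str           # globally unique dataset identifier
    entity_id_scheme: str     # identifier scheme used for the records
    # Usability: common standards and classifications across systems
    classification: str       # shared taxonomy / classification code
    schema_standard: str      # data standard the columns conform to
    license: str


example = DatasetRecord(
    title="District-level crop yield, 2023-24",
    description="Annual yield estimates by district and crop.",
    keywords=["agriculture", "yield", "district"],
    publisher="Hypothetical State Agriculture Department",
    quality_score=0.87,
    last_assessed="2025-01-15",
    dataset_id="in-agri-yield-2024-v1",
    entity_id_scheme="hypothetical-district-code-registry",
    classification="agriculture/production",
    schema_standard="hypothetical-agri-schema-v2",
    license="open-government-data",
)
```

A record along these lines lets a catalogue answer the four questions behind the pillars: can the dataset be found, can it be trusted, can it be linked to other datasets, and do two systems agree on what its fields mean.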


Framing the panel (Arndt Husar)


Arndt Husar introduced the “three S” framework – solutions, standards and skills – a taxonomy proposed by the ITU head to broaden the focus beyond data-centres and raw compute [23-26]. He asked each panelist to state their name and institution and to identify the most critical gap or the most exciting opportunity for the Global South to generate positive impact through AI [30-33].


WTO perspective (Johanna Hill)


Johanna Hill (World Trade Organization) projected a “40 by 40” effect: AI-enabled trade could lift global trade volumes by almost 40 % by 2040 [40-42]. She said this growth hinges on three pre-conditions – robust digital infrastructure, a skilled workforce and policy readiness [43-44]. Citing a WTO-ICC survey, Hill noted that many SMEs in developing economies already use AI for market intelligence, but they face regulatory fragmentation and high compliance costs [45-48]. She also mentioned the WTO’s AI Trade Policy Openness Index, cautioning that openness scores achieved simply through an absence of regulation can undermine competitiveness, since customers increasingly look for responsible, trustworthy AI [158-160].


Uzbekistan – gaps & strategy (Zuhriddin Shadmanov)


Zuhriddin Shadmanov identified unequal access to compute and a shortage of advanced AI skills as the gaps that risk leaving many nations as consumers rather than creators of AI value [56-58]. Uzbekistan’s 2030 human-centred AI strategy prioritises human-skill development, the construction of a sovereign data centre (around US$200 million) and a roughly US$5 billion renewable-energy data-centre partnership with the Saudi Arabian firm DataVault [78-82][90-92]. The government has earmarked US$300 million for AI projects across health, education, transport and cybersecurity, allocated US$50 million to AI-focused startups, and introduced tax incentives and a venture fund-of-funds model aimed at attracting US$1 billion of private capital [124-132][221-226].


Balancing priorities (Arndt → Uzbekistan)


Arndt asked Shadmanov how Uzbekistan plans to balance these priorities and finance the strategy. Shadmanov explained the public-private mix, the role of tax incentives, and the partnership with Huawei on 5.5G/6G networks and AI infrastructure [124-132][221-226].


Indonesia – infrastructure & talent (Prof Hamam Riza)


Prof Hamam Riza described the Global South’s “triple deficit” of data, compute infrastructure and talent, placing particular emphasis on talent and sovereign AI development [95-98]. Indonesia’s national AI roadmap targets 12 million AI-skilled workers by 2030, addressing a current shortfall of 3-5 million [202-203]. The roadmap is built around a “penta-helix” platform that unites government, industry, academia, civil society and media, and has launched the Korika digital academy together with a Kodika chatbot to up-skill civil servants and the broader workforce [200-208]. Indonesia is preparing a presidential regulation to promote sovereign AI models that reflect local culture, and is designating special economic zones for hyperscalers and edge-computing facilities [132-136]. Climate-health initiatives, such as AI-driven malaria and dengue prediction in partnership with NASA and universities, illustrate the country’s ambition to link AI with societal challenges [208-212].


ADB perspective (Mio Oka)


Mio Oka (Asian Development Bank) stressed that the most fundamental prerequisites for AI deployment are a stable power supply, affordable devices and reliable broadband connectivity [105-107]. Arndt Husar added that ADB’s newly created digital-sector office is already receiving strong demand for data-centre projects [190-192]. While acknowledging the need for foundational infrastructure, ADB prioritises sector-specific AI services, particularly in agriculture, water supply and irrigation, as the immediate entry points for impact [111-114]. Citing a presentation by the state of Telangana, Husar also illustrated the importance of “right-sizing” compute to the problem rather than defaulting to massive data-centre deployments [165-170]. ADB is mobilising private capital through joint master-planning, capacity-building programmes and knowledge-sharing, exemplified by collaborations on water-road projects and AI pilots in agriculture [177-186]. The bank also supports the working group on the “democratisation of AI compute”, which seeks cross-border sharing of compute resources to avoid duplication and reduce the energy footprint of AI [165-170]. Oka recounted an anecdote about an AI-based fish-feeding system that was rejected by a small-country government, highlighting the socio-economic trade-offs of automation [268-272].


Regional cooperation


The panel repeatedly underscored the need for holistic digital-infrastructure ecosystems that integrate solutions, standards and skills (the three S). Many speakers highlighted that AI can be a catalyst for trade growth when data are discoverable, trustworthy and interoperable [3-8][40-44]. They also converged on blended financing models that combine public funding, tax incentives and private-sector venture capital [124-132][105-107]. Regional cooperation was cited as a way to achieve economies of scale, harmonise standards and share infrastructure: the WTO’s AI Trade Policy Openness Index and ADB’s support for joint projects were presented as concrete tools [158-160][177-186].


Points of divergence


Hardware strategy: Uzbekistan leans heavily on foreign partners such as Huawei and NVIDIA, whereas Indonesia stresses sovereign AI models and aims to limit dependence on external hyperscalers [221-226][132-136].


Financing approach: Uzbekistan foregrounds substantial state allocations and tax incentives; ADB positions itself as a catalyst that mobilises private capital after basic services are in place [124-132][105-107].


SME adoption perception: Arndt noted a large adoption lag for small firms [49-51], while the WTO survey indicated many SMEs are already leveraging AI for market intelligence [45-48].


Feasibility of climate-health projects: Arndt’s joking interjection (“I’m not buying any of them”), a play on Hamam’s description of Indonesia as a “supermarket for disasters”, contrasted in tone with Hamam’s confident account of AI-driven malaria and dengue prediction, though the feasibility of these ambitious use-cases was not examined in depth [208-212][213-214].


Key take-aways (four pillars)


1. A holistic digital-infrastructure ecosystem that integrates solutions, standards and skills.


2. Large-scale talent development programmes and multi-stakeholder governance (penta-helix, digital academies).


3. Blended financing mechanisms that marry public investment, tax incentives and private-sector capital.


4. Regional cooperation to harmonise standards, share compute resources and reduce regulatory fragmentation.


Open questions / next steps


– How will countries prioritise and balance these pillars within limited budgets?


– What mechanisms can enable cross-border data sharing that respects sovereignty while fostering collaboration?


– Which strategies will effectively curb the energy intensity of large AI models?


– How can the SME adoption gap be closed, for example through right-sized compute and targeted capacity-building?


Conclusion – As Arndt Husar noted in his closing remarks, AI is not a universal panacea; its benefits will be maximised only through coordinated investments in infrastructure, talent, regulation and regional collaboration [274-277][S69][S70].


Session transcript: Complete transcript of the session
Dr. Saurabh Garg

models or talent, how we can ensure that it works in a federated manner. I think I’ll just, I was discussing and maybe I’ll just focus on one piece, which is on AI -ready data, if I can focus on that and leave it for the esteemed panelists on the large number of issues. Some of the elements that we are focusing on include, one is on how to make it more discoverable. That would be a very basic point to ensure that it’s discoverable. Second is how to ensure that the data sets are trustworthy, and that would be the second element. The third would be on the interoperability, and the fourth would be on the usability across systems.

So on discoverability. On discoverability, the metadata of that structure is extremely important, so that would help to, that’s a first. element of having a metadata structure which is understandable and well defined and can be used across the second on the trustworthy part would be the quality assessment so we’ve developed as kind of a quality assessment framework which focuses on the quality of the data so that to ensure credibility on the data interoperability a lock would depend on whether data can talk to each other what is the unique identifiers that we have which will ensure that the different data sets whether are they talking about the same thing or different we are able to identify that and the fourth would be on the usability across systems would be based on the standards and classifications that we have whether it’s a common definitions and common standards so that two sets of data don’t refer to the same thing and I suppose this really forms the bedrock of making a data AI ready and that That’s something that we’re working with ministries and governments and state governments across the country.

And given the importance of data sets in the AI infrastructure, it has an important part to play. The other aspect on data is also on its dissemination and access, on how we are able to ensure that data sets in themselves have value beyond AI and what kind of dissemination and mechanisms can be there which will make it usable for people to leverage them for business while preserving the privacy aspects of individual data. One other thing, since we are talking about and the panel will be having discussions on AI infrastructure, I just wanted to focus on one thing that I think discussed, over the past couple of days. has also come up that the existing models seem to be extremely data infrastructure, infrastructure heavy, whether compute infrastructure, data infrastructure.

And every time a new query is put out to the model, is it necessary for the billions of bytes to be again run through again and the gigawatts of power that we need? And are there alternative mechanisms available? And I just want to highlight yesterday one comment which stays with me, is what Vishal Sikka had made, that when we talk in terms of AI infrastructure, we talk in terms of gigawatts of power. Compared to that, a human being requires 2 ,000 calories, which is only 100 watts. So are we missing something out there in the infrastructure? And perhaps a greater focus on the models going forward is there. So I’ll stop here. Thank you for inviting me. Thank you.

Arndt Husar

Thank you so much, Secretary. And I’m now going to join the fireside chat here. The discussion that we have planned will cover various different aspects of digital infrastructure. So when you hear digital infrastructure, you might be first thinking of the data centers and the compute. But we actually want to have a conversation that also encompasses the solution side, the skill side, so that we really look at the whole spectrum of infrastructure, even standards. So these three S were introduced yesterday by ITU’s head, the three S of solutions, standards, and skills. Kind of a nice way to open up to the panel. We have different perspectives here today. And we’re going to try to stick to time.

But let me introduce you to this panel by asking the first question. And I would request that each of the panelists then quickly states their name and their institution to shorten the time. What we would like to hear from each of you is that from your vantage point, what do you see as the most critical gap, the most exciting opportunity for the global south in generating positive impact through AI? So we’ve asked each of them to think about a concern or opportunity and then also to maybe link it to strategy or vision. So maybe I’ll first go to the lady on my right from the WTO. May I request you for your perspective on the big challenge or opportunity?

Johanna Hill

Thank you so much to the Asian Development Bank for the invitation and the organization to this interesting conversation. My name is Johanna Hill. And I… I am with the World Trade Organization. And let me start out with the opportunity side of the equation. We really are seeing that AI and trade, when they work together, can offer important opportunities for developing countries and low -income economies. Our projections at the Secretariat have led us to believe that by the year 2040, trade could grow by almost 40%. So that would be the 40 by 40 effect. But then here come the caveats, right? For that to happen, for those opportunities to really be realized, one element that is really important is the digital infrastructure, the skills that you mentioned, and policy readiness.

You know, we’ve heard throughout this conference and before the important opportunities and applications in different sectors, in agriculture, health care, new services being developed as we speak, new services and goods that are becoming more AI -related, more tradable, and we are also seeing that that can have important opportunities. for the smaller firms in developing economies and in the big economies also. We did a survey with the ICC that we published last year on the opportunities for businesses and for small and medium enterprises. And of those respondents, many of them were saying that they’re already using AI, of course, from bigger companies, more developed economies. But even the smaller firms are also seeing opportunities in areas like market intelligence.

So we do see that it can be a game changer.

Arndt Husar

Fantastic. And one of the things that I’ve been hearing a lot at this summit is that specifically the SMEs, the technology has moved so fast that there’s a huge adoption gap and understanding of how they can actually integrate the AI into their business models, into their little shop that takes a picture of a product and uploads it quickly. AI can be super helpful in this but hasn’t yet reached that audience. Maybe I’ll turn it over to you. Maybe I’ll turn to the other side and request our colleague from Uzbekistan to share his perspective.

Zuhriddin Shadmanov

Thanks for the question. Thanks for having me here. Let me talk about the gaps which exist in our country. I think the first one is unequal access to compute capacities and I think advanced AI digital skills. So in that sense, these foundations play a crucial role because if you don’t bridge those gaps, many countries, nations will be just the consumers of AI rather than creators of AI value. So in that sense, Uzbekistan is advancing strategic ideas. First one is developing human skills across all strata of our nation, starting from students, professionals, and public servants. So we are not concentrating on the tech sector, but also we try to cover all the spheres of our nation.

And secondly, we are developing our infrastructure. For that reason, our government is allocating around 200 million USD. So to create our own government data center with supercomputers, GPUs, acquiring from NVIDIA. And also we are working with DataVault, a Saudi Arabian company, to create an energy -efficient data center, which is based on renewable energy. It is a very big project. It’s around 5 billion USD. And the data center will be… put into operation within two, three years. hopefully and also we are trying to develop our government strategy we adopted a strategy 2030 last year and this by by the year to 2030 we’re trying to get there early reach the export of AI related products by five 1 .5 billion USD so

Arndt Husar

fantastic so either by coincidence or planning you touched on the 3s the solutions the skills and the standards the policies fantastic thank you so a very comprehensive view with multi -pronged strategy that you didn’t introduce yourself so I just say you with the Ministry of Digital Technology and an institution quite focus the center of the development of AI and the digital economy Fantastic. Okay, let me turn back to this side. So from Indonesia, we have someone who’s actually in this skills domain. Would you like to share with us what you, from your vantage point, perceive as the key opportunity or challenge?

Hamam Riza

All right, thank you. Hi, everyone. I am Professor Hamam Riza. I am the co -chair of the National AI Roadmap Indonesia 2030. And also I am the president of the Collaborative Research and Industrial Innovation in Artificial Intelligence, the organization that was founded in 2020 when we launched our first national AI strategy towards Indonesia 2045. That is the vision. And I think AI will take us there, really. So from my vantage point, I think… We are… we have no we need to move beyond numbers even though we understood that AI economy will create millions of jobs and also potential economy of up to 1 trillion so from my vantage point from the Indonesian perspective global south they are basically triple deficit in terms of what we are going to it is the most challenging one the first one is that certainly about the data and infrastructures, the compute infrastructures we are still lacking the connectivity the networks but as I have marked down here in order for us to smooth up all the AI use cases for public services for health services, for agriculture, and many other things, you need to basically solve this triple deficit.

And that is also regarding how you need to develop the AI talents. There is a significant lack and scarcity of high -quality localized data centers tailored to Indonesia, as well as shortage of AI skilled talents that limits the capacity of long -term innovations capabilities. So our government is addressing these gaps through the national roadmap that I co -chaired. And our primary concern is how the digital divide and how the AI divide, which is the digital divide. Which is created by this. generatively I didn’t think I towards many of the public sectors in Indonesia in an in general in the global self that we can tackle so that while there is nine to two percent of our skill knowledge worker but we are still using you know a very basic AI tools and needs to be aware of all the and all the risks you know applied to the output of this AI tools so those those things are that I think will be my point of view towards closing the gaps for the global south and especially for Indonesia

Arndt Husar

thank you so much and it’s of course one of the most populous countries in Southeast Asia It’s a very young workforce. 270 million. Yes, and startup buzz in Indonesia is also palpable, so lots of potential in Indonesia. Last but not least, I want to go to our Asian Development Bank Country Director for India. Mio, can I request you to share your views?

Mio Oka

Thank you. From ADB’s perspective, of course, foundation is important. We need to have a power supply, stable power supply, and the devices that people have access to, and reliable broadband, even in our office. So that’s a foundation. But we are in India. Do we expect India to put so much money on foundation to have a ground -level impact? But India has a scale. So what we need to focus is, as others already said, is a service. So we work on agriculture sector, water supply, and even irrigation sector where AI is widely applied. Because of the scale of the people that we have in the global south, while we work on the foundational infrastructure, at the same time we really have to work on how AI can be applied at the service level.

And this is where ADB would like to support. Thanks.

Arndt Husar

Thank you, Mio. So we have, as you can see, different perspectives at the same space of how do we get at grappling with this massive development opportunity that AI represents. For this first round of questions after the opening, let’s go into the foundations a little bit. I want to go to Uzbekistan again. And you already mentioned you have… ambitions on infrastructure, policy and skills. Now, how do you actually balance this in terms of priority setting? Can you go for all of them all at once? How do you finance it? Does this keep you up at night, how you balance these three different strategic objectives?

Zuhriddin Shadmanov

It’s a tough question because we are a developing nation and money is always a scarce element for us. So anyway, our government is trying to allocate enough resources so we can cover all the aspects of AI development to create the AI ecosystem. First one, as I already mentioned, it’s a strategy. 2030, which sets our priorities, which is human -centered AI. And secondly, with the government trying to allocate enough resources overall now the government announced about 300 million USD for development of AI and the money goes to first of all implementing projects in the government sector in the social sphere healthcare, education, transportation cyber security and etc and also government is trying to provide necessary infrastructure building data centers acquiring GPUs and also we are now creating a data lake which will be collecting the data of the government sector so SMEs, startups and other who wants the data they can use those data for free or for some money usually free and anyway we’re trying to work with other countries as I already said that we are we have a good project five million AI leaders so United Arab Emirates they helping us to helped us to build this program and it was launched now over 1 million people already registered and go to training certifications there so also we are trying to attract foreign foreign investments and now government announced very good in tax incentives and other incentives to for example if you are want to invest in a you know in Uzbekistan and try want to build a data center which costs over 100 million USD you will get very cheap and take the intensive and customs exams and etc so going trying to balance with cautiously but still providing necessary conditions for the to build a ecosystem

Arndt Husar

very impressive and since I had the opportunity to chat with somebody else from Uzbekistan this week I also know that in your KPI as a public servant AI roll it out has entered that KPI space so that’s always going to make a difference let’s go back to Indonesia now I’m gonna you know of course skills is your comfort zone but can I ask you about the infrastructure side I know that the hyperscalers the big cloud companies international companies have invested significantly in Indonesia now how do you see that now moving into the AI age and is that a big step forward for you? is there a lot of activity on additional infrastructure build out what do you see happening in that

Hamam Riza

yes so I see these questions and I’m really eager to answer this because suddenly our infrastructure is undergoing a transformation really to meet the demands of the AI demand and certainly with the ability of many of these new infrastructures coming out of the government and also from the business I think benchmarking with many other countries including you know in the regional ASEAN take for example the presence of the global hyperscaler in the country have established actually multiple cloud regions in Indonesia. But certainly this needs to be amplified because as you know 10 years in many other technologies, one year in AI, right? That’s what they are saying. So how do you can fulfill this demand of AI compute massive data for training because you need to build up our own for example large language model that can align with our cultures.

So those hyperscalers needs to move beyond just being a single a host for this you know many of the AI models from outside of the country right so and the infrastructure readiness is also being federated by our chief toward the sovereign AI we are now preparing the presidential regulation actually to push forward the innovations the investment and we need to collaborate with many of the hyperscalers and we are ensuring that the physical infrastructures like the GPU data center and localized edge computing yeah is going to be present in the country. And one thing that the Vice Minister of Communication and Digital Affairs mentioned to me yesterday that we are struggling building up the ecosystem. That means there will be special economic zone for these hyperscalers and new data centers being brought forward in order to align and be part of our national AI roadmap, AI journey in Indonesia.

And we are going to prepare ourselves in this AI transformation so that our data… digital consumer… is going to be part of our transformation. The technology is accessible for all. Even, you know, what we are right now, you know, participating in the India AI Summit says about democratizing AI for all. So I think that is a very significant theme that is also part of our national AI roadmap. Thank you.

Arndt Husar

Fascinating. And, again, I think you as a large economy, you have that opportunity similar to how India is also portraying it this week of really wanting to develop your own, you know, language models and really playing in that league. However, there are many countries. also countries we work with who don’t have that kind of scale and who need to look at it quite differently. So the different nuanced strategy that you mentioned of investing into the big AI, the small AI, the edge AI, all these different pieces, very interesting. With this dynamic, can I turn to WTO? How do you see trade competitiveness evolve? That’s really your space where you are at. What are those interesting approaches that are emerging which could help support maybe the cross -border collaboration while you also, of course, respect data sovereignty?

Countries will need to collaborate, right? There’s not enough money to go around for everybody to play in that top league. So trade competitiveness, what do you see there?

Johanna Hill

So I was talking about the opportunities of trade growing by the use of AI. Okay. And if you think about… That growth comes from the lowering of trade costs. It comes from powering AI -enabled goods and services crossing borders. And it’s… Also, new products and services that are going to be invented are being invented by AI. And when you talk to business, when we asked through the survey, some of the constraints that they are having and doubts in the use of AI have to do with competing regulations and having a high cost in trying to comply. And fragmentation is actually an area of data, for example, that can become a problem. And so we developed and published last year in the World Trade Report what the Secretariat calls the AI Trade Policy Openness Index to help regions measure how they’re doing in that space.

And in there, you can see, for example, that some of the lower income economies can seem quite open in that space. But it might be because of the lack of regulation. And when you talk about AI, I think what a lot of countries and customers are saying is that AI is not a good thing. What customers are looking for is, you know, it’s AI. that is responsible, you know, trading AI with trust. So just not having regulation can also be a disadvantage to your competitiveness. So starting to look at those things that way. And then in the part of the solution side, definitely the regional approaches are important, those collaborations, and sharing infrastructure, for example.

When you don’t have those economies of scale, those huge investments come in your way. And then not every single company or every single country is looking to be on the edge of things necessarily, but we do want to adopt AI to boost our economy and our competitiveness.

Arndt Husar

Well, thank you so much. And I don’t know whether people heard about this new initiative that the working group on the democratization of AI, of compute, has come up with. ADB is actually supporting that. Really, this is… This has not yet evolved, right? This collaboration on the infrastructure. How do you share that properly across borders? It’s still new territory and very interesting to see. Can I turn to my colleague, Mio, and request her to talk a little bit about the engagement that we’ve had with member countries. What does demand to ADB actually look like in this space?

Mio Oka

try to invest in the township planning and the implementation. Also, we can have a water supply road project that can be connected to the industrial parks so that private sector can invest in the digital -related facilities. So mobilization of private capital is one. And the second is it’s an application across sectors. We just don’t look at the single sector project. As I said, we can work on road and water at the same time. And while we work on the Agri -AI project, we work with the building capacity of that institution as well so that they can handle the AI. And the third is the knowledge. As I said, we support quite a bit of this master planning or the strategy development at the municipality or the state or even at the regional level.

We see India. And you’ve been coding on science. India. And of course we always bring in the international experts so that India can learn and also this is a good opportunity for India to expand their capacity to outside countries. Thank you.

Arndt Husar

Thanks, Miyu. I actually had a follow -up question for you that would have touched on this de -risking and catalyzing investment topic. Maybe I’ll let you ask you to repeat that now, but let me just add our digital sector office being fairly new. They are getting a lot of demand for guess what? Data centers. And, you know, we welcome that. We have conversations with government but I’m truly impressed with the conversations at the summit here. Earlier this morning I attended one where the state of Telangana was sharing what they’re thinking about and they’re really quite cognizant of the kids to school not many kids fit into a Ferrari milk doesn’t make sense so we need to look at what type of compute is needed for what and I think we in ADB are also learning more and more how to engage in these conversations properly we’re learning alongside everybody else in this room probably and that’s an important distinction to make because it will influence the financing bit how much do we need, what do we actually need and when and how do we make that investment sustainable just wanted to add that it’s an insight from this morning that I couldn’t not retail.

Let me go back to Indonesia and ask you about cutting -edge skills because you’re in that space. I found it very interesting that you’re actually, you said co -president or co -chairing this platform where you bring together private sector, education sector, government. And as you are looking at that, how is your organization doing it in practice? How do you bring these people together and get them into action mode? How do you do that?

Hamam Riza

Okay, thank you. Very important question here, I think. So I would like to say in three pillars that what we are doing, especially that we chair the AI ecosystem in Indonesia where the government, the industry are involved in the AI ecosystem. Within an academia as well as the… civil society as well as media we call this pentahelix platform we discuss about I think three pillars first one is the talent certainly second one is infrastructures and the third one is how basically we can articulate use cases towards all the services public services and businesses as well so Indonesia for talent we have set our target quite ambitious that we want to have at least 12 million talent by 2030 and for us this is something that are uh fairly challenging, considering that we are still lacking around 3 to 5 million talent as of now, right?

So what we are trying to achieve together with the whole ecosystem is to establish an academy, the Korika Academy, where we promote to not only upskilling and reskilling some of the civil servants and other workers, but we are also looking at how we can train the trainers. We work with several of our friends. I will note here that Elevate Indonesia, for example, part of the… Microsoft and many others big tech that are there works together with our ministry to establish this program for Thailand

Arndt Husar

and it’s a digital academy or is it a physical?

Hamam Riza

it’s a digital academy with the LMS learning management system and many other things we also established the Kodika chat actually it’s a chatbot for this training and upskilling program that we do with the government beyond Thailand basically we are aggressively looking at how we can nurture this talent to work in data centers, in many others startups and incubators as well as to establish some of the most diverse demanding use cases So the third one is we try to work on climate health nexus in establishing how we counter and predict the climate sensitive infectious disease such as malaria and dengue. And we have established for the past three years the Climate Smart Indonesia which have attracted many of the universities as well as NASA pollution and air quality programs to look into these use cases.

So we can basically reach out to many of the areas where the… …the health, the disaster prone area because Indonesia is a supermarket for disaster. You can have the hydromelectorological disaster, you can have ecological, you can have many things. So you need to…

Arndt Husar

I’m not buying any of them.

Hamam Riza

Of course, we don’t want to be shopping.

Arndt Husar

So really amazing this focus also on the use cases, right? And prioritizing those that match with your country needs. Yes, thank you. Give the highest impact, right? Super. I’ll turn back to Uzbekistan and just wanted to ask you to elaborate a little bit in terms of private sector capital mobilization. You have all these ambitions you shared across the board really in terms of infra, in terms of skills and so on and so forth. Uzbekistan as an economy has still… a good chunk of traditional economy but also has a very active startup sector that I’m learning more and more about how dynamic people are around the region, Central West Asia going and finding scalable solutions but these are the still growing companies for mobilizing capital for your infra you’re going to need the big ones or you’re not going to need the international partners or what are you thinking about this private capital mobilization, what’s your strategy there?

Zuhriddin Shadmanov

First of all I should mention that according to the documents adopted by year 2030 we are planning to attract around 1 billion USD for investments for creating AI related digital infrastructure and part of this goes to creating data centers and we’re going to need to And also we’re working with our Chinese partners also. It’s the biggest IT company, Huawei. So they’re also involved in creating AI ecosystem in Uzbekistan. Mainly, first one is upskilling public servants to help them to adopt AI adoption and also creating the necessary training programs for the specialists and also creating the AI infrastructure like data centers, data lakes. And also we need to get, we are transferring to 5 .5G and also working on 6G also with Huawei.

So, yes. And also, as you mentioned already about startups, we are developing our own. startup ecosystem and we established many venture funds funds of funds and also there are many emerging private funds so they are now trying to invest in startups attracting private funds, private investments so currently we have allocated around 50 million USD for AI startups so they are already providing services both for public and for businesses so trying to balance and attract all the stakeholders of the ecosystem

Arndt Husar

Fantastic, so you’re mixing also your public funds that you invest for example in the fund of funds and then bringing in more investment domestically from your investors but also from abroad That’s amazing. And then having large industry partners that are interested in the market, bringing them in like Indonesia did with some of the hyperscalers. You are bringing in Huawei and Chinese partners. So basically it’s a mix of different strategies you mentioned. Also, that’s fascinating. Again, Uzbekistan being one of the larger countries in Central West Asia and Indonesia, both fairly large in their region. And then, of course, again, I want to come back to this point about diversity of country context. That’s both a challenge but also an opportunity.

I mean, for us at ADB, it adds, of course, complexity because we need to respond to these different needs. But from the perspective of WTO, is there like a specific area such as maybe interoperability standards or AI talent mobility? Or the shared data set? joint research, where do you see regional cooperation making the biggest difference? I

Johanna Hill

think that it’s a bit of a matter of context, right? At the regional level and at the national level. We’ve talked about the divide in the digital divide and how do we overcome that and the role of infrastructure and skills and the rest. And at the WTO Secretariat, we’ve been very concerned on this issue. And so we partnered with the World Bank and we did a study called Digital Trade in Africa, a general one, and then we did some country pilot studies to look at the situation. And we did see that some of the regional work, like the ACFDA and the digital protocol, really made a difference in how it helped bring them along and to set a certain standard in many of the countries that we studied.

Then we did a similar study with the World Bank in Latin America and the U .S. The Caribbean and the Inter -American Development Bank partnered with us. And we saw there that the situation was a bit different, more diversity in terms of regulation and trade policy, infrastructure needs. So there’s basically not one size fits all. But we have seen regional banks playing a very important role in helping countries that want to go in the regional way. I know ASEAN has done important work in AI policy, for example, and other regions are also working in that sense. And I do think that that brings economies of scale to a certain extent. It helps you resolve questions on electricity sometimes.

And so I think there’s a lot of opportunity and further work to be done at the regional level.

Arndt Husar

Thank you. And I think with the regional cooperation integration agenda being also top of mind for ADB, I just ask my colleague also to… tell me a little bit about her perspective. Of course, she represents ADB in a very large economy in South Asia, but we do have regional cooperation happening around the region. Mio, what do you see as opportunities with regards to regional cooperation integration on this digital infrastructure space?

Mio Oka

Right, thank you. So again, ADB, we support India. I’m in India. Our office covers India. So we are here to support Vixie Bharat so that India can grow at the pace to become a developed country by 2047, and the AI is a necessary means to do that. But again, as everybody knows, we are the regional bank. Nobody around us should be left alone or left behind. So ADB, through this kind of forum, has to be a catalyst. A catalyst for the global south. So we are here. Of course, there are many countries who cannot invest in scale. What are the solutions? So we are here to support the solutions, and also we support big tigers like India to support those countries too.

That’s number one. And number two is the balance approach. When we talk about regional cooperation or the work in a small country, I was quite shocked about five years ago. I went to the small neighbor country here, and I was working in the agriculture sector, and I was proudly introducing, I want to introduce aquaculture using the AI -based fish feeding system. And my negotiation ended in three seconds because the government said, no, we are interested in employment. What are you talking about? What AI -based feeders will just reduce the people who are going to work there? So that is a big lesson learned for me. We need an ecosystem, but even we talk about AI, the solution may be elsewhere.

so as you introduced the skill is super important and since that understanding again going back to India we’ve invested more than like 5 billion in the skill including the PM set and working over 10 states and now AI based skill is the big part of it so we are always mindful that the regional cooperation and we should not forget should not leave any country to be left behind but solution again may not be as direct as we expect thank you

Arndt Husar

thank you Mio and we have one minute left on the clock that throws a spanner into my closing with the thought that AI may not be the solution for everything but I think it’s a fair ending looking for a name We need to understand the problems and see how AI, if it can be deployed, if it can make a difference, how it should be supported through skills development, infrastructure investments, regulation. So I want to thank my panel for a very interesting tour de force of this topic. Also thought I’d take the opportunity to thank the audience and India for hosting this amazing summit. As ADB, we’ve been proud to be a partner of it, and it’s been truly fascinating, and we’re quite proud to have been part of this journey.

Thank you all for attending, and thanks to the panel. Let’s give them a round of applause for sharing their views. Thank you. Thank you. Thank you very much. Thank you. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (15)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high confidence)

“Making data AI‑ready rests on four inter‑linked pillars: discoverability through a well‑defined metadata structure; trustworthiness via a quality‑assessment framework; interoperability enabled by unique identifiers; and usability ensured by common standards and classifications.”

The knowledge base highlights the importance of a well-defined metadata structure for discoverability and a quality-assessment framework for trustworthiness, confirming these two pillars [S2]. It also discusses systematic approaches to data readiness that align with the described pillars [S25].

Additional Context (medium confidence)

“These elements must be deployed in ways that preserve individual privacy while retaining business value.”

Discussion in the knowledge base emphasizes balancing privacy with security and business needs, noting that privacy should not be treated as opposed to other objectives [S80] and that responsible innovation must protect individual rights [S79].

Additional Context (high confidence)

“Current AI models are “infrastructure‑heavy”, requiring billions of bytes per query and consuming gigawatts of power; the human body operates at roughly 100 W.”

Sources project AI-related power consumption reaching tens of gigawatts and stress the challenge of scaling infrastructure, providing context for the claim about high energy use [S69] and the need for more efficient compute approaches [S29] and infrastructure scaling limits [S50].

Confirmed (high confidence)

“Arndt Husar introduced the “three S” framework – solutions, standards and skills – a taxonomy proposed by the ITU head.”

The three-S framework (solutions, standards, skills) introduced by the ITU head is explicitly mentioned in the knowledge base [S4].

Additional Context (medium confidence)

“Growth of AI‑enabled trade hinges on three pre‑conditions – robust digital infrastructure, a skilled workforce and policy readiness.”

The knowledge base stresses that closing the digital divide requires targeted investment in infrastructure, locally relevant applications, and skills development, aligning with the three pre-conditions cited [S36].

Confirmed (high confidence)

“Many SMEs in developing economies already use AI for market intelligence, but they face regulatory fragmentation and high compliance costs.”

Regulatory fragmentation leading to higher compliance costs for enterprises, especially SMEs in developing regions, is documented in the knowledge base [S52] and further illustrated by the challenges faced by Latin American firms [S91].

Additional Context (low confidence)

“Overly lax regulation can undermine competitiveness (WTO’s AI Trade Policy Openness Index warning).”

While the specific WTO index is not cited, the knowledge base notes concerns that insufficient regulation can affect competitiveness and that balanced policy is needed for responsible AI deployment [S52].

External Sources (92)
S1
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-fireside-chat-moderator-mariano-florentino-cuellar — Now we move to a conversation about how artificial intelligence needs to be positioned in the global context. And we hav…
S2
https://dig.watch/event/india-ai-impact-summit-2026/regional-leaders-discuss-ai-ready-digital-infrastructure — And in there, you can see, for example, that some of the lower income economies can seem quite open in that space. But i…
S3
United Nations High-Level Leaders’ Dialogue — – **Johanna Hill** – World Trade Organization (WTO) Johanna Hill: harness? Thank you for the invitation. We are facing …
S4
Regional Leaders Discuss AI-Ready Digital Infrastructure — – Zuhriddin Shadmanov- Hamam Riza Shadmanov emphasizes a broad societal approach covering all sectors beyond technology…
S5
Regional Leaders Discuss AI-Ready Digital Infrastructure — -Mio Oka- Asian Development Bank (ADB) Country Director for India
S6
https://dig.watch/event/india-ai-impact-summit-2026/regional-leaders-discuss-ai-ready-digital-infrastructure — All right, thank you. Hi, everyone. I am Professor Hamam Riza. I am the co -chair of the National AI Roadmap Indonesia 2…
S7
Regional Leaders Discuss AI-Ready Digital Infrastructure — All right, thank you. Hi, everyone. I am Professor Hamam Riza. I am the co -chair of the National AI Roadmap Indonesia 2…
S8
Regional Leaders Discuss AI-Ready Digital Infrastructure — – Dr. Saurabh Garg- Zuhriddin Shadmanov- Hamam Riza- Arndt Husar – Johanna Hill- Mio Oka- Arndt Husar – Zuhriddin Shad…
S9
Legal Notice: — Chief of International Law Studies. He has previously served as Dean of the George C. Marshall Center in Germany and Gen…
S10
The Foundation of AI Democratizing Compute Data Infrastructure — -Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India
S11
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S12
Democratizing AI Building Trustworthy Systems for Everyone — – Dr. Saurabh Garg- Natasha Crampton – Dr. Saurabh Garg- Natasha Crampton- Justin Carsten
S13
Digital infrastructure and standards in Africa: Continental and regional policies and their international elements — Across regional economic communities (RECs), there are multiple policy initiatives and projects that cover matters relat…
S14
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Fireside Chat Moderator- Mariano-Florentino Cuellar — Now we move to a conversation about how artificial intelligence needs to be positioned in the global context. And we hav…
S15
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — Open standards allow other systems to be plugged into them The importance of open standards and interoperability is emp…
S16
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — AI technologies offer immense potential for enhancing cybersecurity; however, they can also introduce risks that may con…
S17
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Yu Ping Chan: Thank you so much to the organizers for having me here today. So I represent the United Nations Developmen…
S18
AI and the future of digital global supply chains (UNCTAD) — There is a skills gap in these countries
S19
Strategy for the development of artificial intelligence in the Republic of Tajikistan for the period up to 2040 — 125. The implementation of the priorities and activities of the Strategy will be ensured by all types of sources of fina…
S20
REPUBLIC OF BULGARIA MINISTRY OF TRANSPORT, INFORMATION TECHNOLOGY AND COMMUNICATIONS — In connection with the goal of creating knowledge and skills for the development and use of AI, enshrined in the concept…
S21
Uzbekistan’s strategy for the development of artificial intelligence technologies until 2030 — Additionally, the Strategy outlines medium- and long-term tasks, including those in scientific and technological develop…
S22
Signature Panel: Building Cyber Resilience for Sustainable Development by Bridging the Global Capacity Gap — Indonesia:Thank you. Moderator, Mr. Robin, good afternoon to all delegations here, allow me this morning to convey three…
S23
© 2019, United Nations — Latin America and Asia present more dynamic entrepreneurship and innovation ecosystems than those found in …
S24
Solomon Islands Rapid eTrade Readiness Assessment — The skills development focus is centered on the government’s efforts to develop ICT skills proficiency starting at the p…
S25
Collaborative AI Network – Strengthening Skills Research and Innovation — This comment provides a systematic framework for thinking about data preparation for AI, moving beyond generic discussio…
S26
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The technical requirements for trustworthy AI emerged through multiple perspectives. Valerian Ghez from photonic quantum…
S27
Safe and Responsible AI at Scale Practical Pathways — Data must be interoperable, contextual, and verifiable/governable to solve key problems
S28
Is AI the key to nuclear renaissance? — There is a direct correlation between the exponential increase in model parameters and the increase in the computational…
S29
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Saurabh Garg (referencing Vishal Sikka) This comment introduced a completely different perspective on the compute s…
S30
Projecting Digital economy rules on Global South’s AI regulations: what is needed to safeguard human rights? ( Data Privacy Brasil Research Association) — In addition to supporting member states, UNCTAD also aims to bridge gaps in international cooperation. Many challenges f…
S31
WS #82 A Global South perspective on AI governance — Lack of infrastructure and skills in developing countries
S32
WS #100 Integrating the Global South in Global AI Governance — – Lack of computing power and infrastructure in developing countries A fundamental issue underlying many challenges is …
S33
The Global Power Shift India’s Rise in AI & Semiconductors — So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources…
S34
Beijing seeks to curb excess AI investment while sustaining growth — China has pledged to rein in excessive competition in AI, signalling Beijing’s desire to avoid wasteful investment while …
S35
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Detailed implementation strategies for building the proposed triple helix collaboration between government, academia, an…
S36
Business Engagement Session — Addressing Global Challenges and Inclusivity Addressing inclusivity and accessibility in technological solutions Garza…
S37
Regional Leaders Discuss AI-Ready Digital Infrastructure — Skills development must be comprehensive, covering all sectors and skill levels rather than focusing only on technical s…
S38
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — Development | Legal and regulatory | Economic Implementation and Practical Approaches N’diaye emphasizes that public p…
S39
Facilitating an integrated approach to digital issues — Speed: In a world where communications have become instant, implementation of solutions must be made in phases, so that …
S40
Overcoming the fragmentation of the digital governance: what role for the Global Digital Compact and e-trade rules? (South Centre) — Fragmentation at a local level is seen as beneficial as it allows countries to have their own policy space and introduce…
S41
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — Collaboration emerged as a persistent theme with a unanimous agreement on the necessity of cross-sector and regional par…
S42
AI for Democracy_ Reimagining Governance in the Age of Intelligence — “Because what we essentially need is four types of governance.”[22]. “We need a technological governance because whose v…
S43
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Let’s move this beyond the developed economies. Helping understand how AI is it a broadened the world and help the world…
S44
AI Development Beyond Scaling: Panel Discussion Report — Choi advocates for AI democratization where AI reflects human knowledge and values, serves all humans rather than just t…
S45
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Saurabh Garg outlined India’s approach through the proposed “Maitri” platform, a collaborative framework designed to…
S46
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Florian Ostmann:Thank you, Matilda. So with that set out in terms of what kinds of standards we are focused on and why w…
S47
The impact of AI on jobs and workforce — The ILO’s webinar was triggered by the recent impact of ChatGPT on our society and jobs. OpenAI’s ChatGPT, in particular…
S48
TechUK urges UK government to prioritise digital adoption among SMEs to boost economy — TechUK calls on the government to address the critical issue of digital adoption among SMEs, which has been identified as …
S49
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups to recalibrate workforce policies. Survey data indicates that 33% of founders antici…
S50
From KW to GW Scaling the Infrastructure of the Global AI Economy — NVIDIA’s contribution to India’s AI ecosystem includes sharing reference designs for AI factories, open-sourcing control…
S51
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenou…
S52
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The level of consensus among the speakers was relatively high, particularly on the benefits and potential applications o…
S53
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — 2.Infrastructure capacity- having sovereign compute for advanced models If AI is to become electable in our democracies…
S54
Huawei’s dominance in AI sparks national security debate in Indonesia — Indonesia is urgently working to secure strategic autonomy in AI as Huawei rapidly expands its presence in the country’s c…
S55
UN warns AI poses risks without proper climate oversight — AI can help tackle the climate crisis, but governments must regulate it to ensure positive outcomes, says UN climate chief…
S56
New AI strategy aims to attract global capital to Indonesia — Indonesia is moving to cement its position in the global AI and semiconductor landscape by releasing its first comprehensi…
S57
Climate change and Technology implementation | IGF 2023 WS #570 — Artificial intelligence and improved sensors can provide real-time environmental data, shaping climate research and poli…
S58
AI climate benefits overstated says new civil society report — Environmental groups, including Beyond Fossil Fuels and Stand.earth, have published a report challenging claims that AI wi…
S59
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Introduction and Context Setting ## Sectoral Applications: Healthcare Insights Alex Moltzau: I think I just also wa…
S60
Democratizing AI Building Trustworthy Systems for Everyone — Private sector investment is necessary due to the scale of infrastructure needs that cannot be met by governments alone
S61
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Economic | Infrastructure | Development Need for blended financing approaches combining government, private sector, and…
S62
Driving Indias AI Future Growth Innovation and Impact — But you must be aware that, you know, this game is actually, I mean, if you see my context, I mean, I have four diamonds…
S63
Regional Leaders Discuss AI-Ready Digital Infrastructure — Dr. Saurabh Garg opened the discussion by outlining four essential elements for AI-ready data infrastructure. First, dis…
S64
Collaborative AI Network – Strengthening Skills Research and Innovation — This comment provides a systematic framework for thinking about data preparation for AI, moving beyond generic discussio…
S65
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The technical requirements for trustworthy AI emerged through multiple perspectives. Valerian Ghez from photonic quantum…
S66
Safe and Responsible AI at Scale Practical Pathways — Data must be interoperable, contextual, and verifiable/governable to solve key problems
S67
Is AI the key to nuclear renaissance? — Training large AI models, particularly deep learning, requires vast amounts of computational power. These powerful model…
S68
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Saurabh Garg (referencing Vishal Sikka) This comment introduced a completely different perspective on the compute s…
S69
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Power consumption will reach 63 gigawatts in coming years, presenting major infrastructure challenges
S70
WS #82 A Global South perspective on AI governance — Lack of infrastructure and skills in developing countries
S71
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Global South Challenges: multilingualism, infrastructure, and capacity
S72
Open Forum #17 AI Regulation Insights From Parliaments — Countries in the Global South face multiple challenges including lack of computational power, data access gaps, and insu…
S73
Uzbekistan positions itself as Central Asia’s new AI and technology hub — Using its largest-ever ICT Week, Uzbekistan is showcasing ambitions to become a regional centre for AI and digital transfo…
S74
The Global Power Shift India’s Rise in AI & Semiconductors — So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources…
S75
New digital strategy positions Uzbekistan as emerging AI hub — Uzbekistan has outlined an extensive plan to accelerate digital development by introducing new measures at major AI forums…
S76
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Transformation requires a triple helix of government, academia, and industry working together with specific roles for ea…
S77
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S78
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Shri Sushil Pal:Thank you, Professor Jalasi, and thank you, UNESCO, for inviting me here. I must commend UNESCO on the r…
S79
Keynotes — O’Flaherty emphasizes that technology development carries real risks and threats to democratic institutions and individu…
S80
WS #125 Balancing Acts: Encryption, Privacy, and Public Safety — Boris Radanovic argues for rejecting the framework of conversation that pits privacy against security. He emphasizes the…
S81
Enhancing Digital Resilience: Cybersecurity, Data Protection, and Online Safety — The discussion delved into the crucial role of the private sector in data protection. Ayodeji Rex Abitogun, an IT consul…
S82
Expert workshop on the right to privacy in the digital age — Mr Alessandro Mantelero, associate professor of Law,Polytechnic University of Turin, Italy,told the audience not to forg…
S83
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Second, resource -efficient AI is not a trade -off. It is a path to inclusion and access. Thirdly, delivering impact at …
S84
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-morning-session-part-2 — Thank you. Thank you so much. Excellency, ladies and gentlemen, I guess I should say good evening. We all recognize arti…
S85
Internet standards and human rights | IGF 2023 WS #460 — On a positive note, advocating for a multi-stakeholder approach has the potential to improve dialogue between standard-s…
S86
What is it about AI that we need to regulate? — Ensuring Better Representation of Developing and Least-Developed Countries in Global Digital GovernanceThe question of h…
S87
High-level ministerial roundtable on digital trade: Do regional trade agreements indicate the way forward for the multilateral trading system? — It is crucial for nations to find common ground and establish partnerships that promote collaborative efforts and mitiga…
S88
Bridging the Digital Divide: Advancing Inclusion in Africa with Affordable Devices (Carnegie Endowment for International Peace) — The analysis concludes on an optimistic note, highlighting the panel’s discussions and their shared belief in the future…
S89
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — Professor Sandra Maximiano from Portugal discussed the role of telecommunications in promoting e-employment and remote w…
S90
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — However, Mbang stressed a crucial prerequisite: “The most crucial prerequisite is the capacity to truly master the techn…
S91
Better understanding e-commerce marketplaces: the Africa, Asia and Latin America Marketplace Explorers (ITC) — Furthermore, Latin American firms selling on marketplaces face complexities and high compliance costs. Compared to compa…
S92
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — It champions working with indigenous communities, who represent different worldviews, to ensure that every individual ha…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Saurabh Garg
1 argument, 143 words per minute, 569 words, 238 seconds
Argument 1
AI‑Ready Data Foundations
EXPLANATION
Dr. Garg outlines four pillars needed to make data AI‑ready: discoverability, trustworthiness, interoperability and usability. He stresses that each pillar relies on concrete technical measures such as metadata standards, quality assessment frameworks, unique identifiers and common classifications.
EVIDENCE
He explains that discoverability requires a well-defined metadata structure; trustworthiness is ensured through a quality-assessment framework he helped develop; interoperability depends on unique identifiers that allow datasets to recognise each other; and usability across systems is achieved by adopting common definitions and standards, all of which are being piloted with ministries and state governments across the country [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S10 and S12 document Dr. Garg’s work on democratizing compute and building trustworthy AI systems, which underpins the four pillars of discoverability, trustworthiness, interoperability and usability. Open‑standard discussions in S15 also reinforce the interoperability pillar.
MAJOR DISCUSSION POINT
Data discoverability, quality, standards and interoperability
AGREED WITH
Johanna Hill
Arndt Husar
1 argument, 130 words per minute, 1910 words, 875 seconds
Argument 1
Digital Infrastructure Spectrum (Solutions, Standards, Skills)
EXPLANATION
Arndt frames digital infrastructure as a broader concept that goes beyond data centres and compute hardware. He includes the three “S” – solutions, standards and skills – as essential components of a holistic infrastructure ecosystem.
EVIDENCE
He states that when people think of digital infrastructure they first imagine data centres and compute, but the discussion should also cover the solution side, the skill side and standards, referring to the three S introduced by the ITU head [23-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both S4 and S2 explicitly cite the ITU’s three S framework (solutions, standards, skills) and stress that digital infrastructure extends beyond data centres and compute hardware.
MAJOR DISCUSSION POINT
Broad definition of digital infrastructure
AGREED WITH
Johanna Hill, Mio Oka
Johanna Hill
2 arguments, 161 words per minute, 861 words, 319 seconds
Argument 1
AI as a Trade Growth Driver and Need for Responsible Policy
EXPLANATION
Johanna argues that AI can boost global trade by up to 40 % by 2040, but realising this potential hinges on solid digital infrastructure, skilled workforces and trustworthy policy frameworks. She also highlights that responsible, trustworthy AI is essential for competitiveness.
EVIDENCE
She cites WTO projections that AI-enabled trade could grow by almost 40 % by 2040 (the “40-by-40” effect) and notes that this growth depends on digital infrastructure, skills and policy readiness; she references a survey with the ICC showing many SMEs already using AI and seeing opportunities in market intelligence, while also pointing out concerns about fragmented regulations and high compliance costs [39-44][45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S3 records Johanna Hill’s WTO remarks on AI‑enabled trade growth of up to 40 % by 2040 and the necessity of trustworthy policy frameworks to realise that potential.
MAJOR DISCUSSION POINT
AI‑driven trade expansion and regulatory trust
AGREED WITH
Dr. Saurabh Garg
Argument 2
Regional Cooperation, Standards, and Trade Policy Openness
EXPLANATION
Johanna stresses the importance of regional bodies and shared standards to reduce fragmentation and enable cross‑border AI trade. She points to the WTO’s AI Trade Policy Openness Index as a tool for measuring openness and notes successful regional initiatives in Africa, Latin America and ASEAN.
EVIDENCE
She describes the AI Trade Policy Openness Index published in the World Trade Report, which helps regions gauge their policy openness, and cites examples of regional work such as the AfCFTA digital protocol that has helped set standards in African countries, as well as ASEAN’s AI policy work, illustrating how regional cooperation can create economies of scale and address regulatory diversity [158-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S13 and S15 discuss regional digital‑infrastructure policies and the role of open standards in enabling cross‑border AI trade, echoing Hill’s call for regional standards and the AI Trade Policy Openness Index.
MAJOR DISCUSSION POINT
Regional standards and policy measurement for AI trade
AGREED WITH
Mio Oka, Arndt Husar
Zuhriddin Shadmanov
3 arguments, 118 words per minute, 902 words, 457 seconds
Argument 1
Compute and Skills Gaps in Developing Nations (Uzbekistan)
EXPLANATION
Zuhriddin highlights that Uzbekistan faces unequal access to compute resources and a shortage of advanced AI skills, which risks turning the country into a mere consumer of AI rather than a creator. He calls for a human‑centred AI strategy to address these gaps.
EVIDENCE
He states that the first major gap is unequal access to compute capacities and advanced AI digital skills, and argues that without bridging these gaps Uzbekistan would remain a consumer of AI value rather than a creator [56-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S21 outlines Uzbekistan’s AI strategy, highlighting unequal access to compute resources and a shortage of advanced AI skills, directly matching Shadmanov’s concerns. S18 also notes a broader skills gap in developing countries.
MAJOR DISCUSSION POINT
Unequal compute access and talent shortage
AGREED WITH
Arndt Husar, Johanna Hill, Hamam Riza, Mio Oka
DISAGREED WITH
Hamam Riza
Argument 2
Financing Priorities and Balancing Public Resources (Uzbekistan)
EXPLANATION
Zuhriddin explains how the Uzbek government is allocating substantial public funds to AI projects, prioritising human‑centred AI, data‑center infrastructure and training programmes, while also offering tax incentives to attract foreign investment.
EVIDENCE
He notes that the government has earmarked about US$300 million for AI projects across sectors such as health, education and cybersecurity, is building data centres and GPUs, creating a data lake for public use, and has introduced tax incentives for investors in large-scale data-centre projects, all aimed at balancing resources across priorities [124-132].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S21 details Uzbekistan’s allocation of roughly US$300 million to AI projects, data‑centre construction and tax incentives, confirming the financing priorities described. S19 adds context on mixed public‑private financing sources for AI strategies.
MAJOR DISCUSSION POINT
Public funding allocation and incentives
AGREED WITH
Mio Oka, Arndt Husar
DISAGREED WITH
Mio Oka
Argument 3
Private Capital Mobilization for AI Ecosystem (Uzbekistan)
EXPLANATION
Zuhriddin outlines Uzbekistan’s strategy to attract private capital, targeting US$1 billion in investments, partnering with Huawei for infrastructure, and establishing venture‑fund‑of‑funds and a US$50 million pool for AI startups.
EVIDENCE
He reports a plan to attract around US$1 billion for AI-related digital infrastructure, collaboration with Huawei on data-centres and 5.5G/6G networks, and the creation of a venture-fund-of-funds that has already allocated US$50 million to AI startups, demonstrating a mix of public and private financing [221-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S21 reports Uzbekistan’s target of attracting US$1 billion of private AI investment, partnership with Huawei, and the creation of a venture‑fund‑of‑funds with US$50 million for startups, aligning with the argument.
MAJOR DISCUSSION POINT
Attracting private investment and partnerships
AGREED WITH
Mio Oka, Arndt Husar
Hamam Riza
2 arguments, 92 words per minute, 1186 words, 771 seconds
Argument 1
Indonesia’s AI Roadmap: Tackling the “Triple Deficit”
EXPLANATION
Hamam describes Indonesia’s “triple deficit” of data, compute and talent, and presents the national AI roadmap that aims to close these gaps through massive talent targets, sovereign AI model development and a digital academy for up‑skilling.
EVIDENCE
He identifies the triple deficit (shortages in data, compute infrastructure and AI talent) as the main barrier, notes a target of 12 million AI-skilled workers by 2030 (currently 3-5 million short), and explains the creation of a digital academy (Korika Academy) with LMS and chatbot support, as well as efforts to develop sovereign AI models aligned with local culture [95-98][99-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S22 identifies Indonesia’s triple deficit of data, compute and talent and describes the national AI roadmap, digital academy and sovereign model initiatives that Riza references.
MAJOR DISCUSSION POINT
Addressing data, compute and talent shortages
DISAGREED WITH
Zuhriddin Shadmanov
Argument 2
Skills and Ecosystem Development (Indonesia)
EXPLANATION
He details a “penta‑helix” ecosystem that brings together government, industry, academia, civil society and media, focusing on talent development, infrastructure and use‑case articulation, including climate‑health nexus projects.
EVIDENCE
He explains the penta-helix platform, the ambition to train 12 million AI-skilled workers, the establishment of the Korika digital academy with LMS and a chatbot, and the Climate Smart Indonesia initiative that partners with universities and NASA to address climate-sensitive diseases [200-210].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S22 further outlines Indonesia’s penta‑helix ecosystem, the Korika digital academy and climate‑smart AI projects, providing concrete context for Riza’s ecosystem description.
MAJOR DISCUSSION POINT
Multi‑stakeholder AI ecosystem and talent pipeline
AGREED WITH
Arndt Husar, Johanna Hill, Zuhriddin Shadmanov, Mio Oka
Mio Oka
2 arguments, 143 words per minute, 664 words, 278 seconds
Argument 1
ADB’s Focus on Foundational Services and Sectoral AI Applications
EXPLANATION
Mio outlines ADB’s view that stable power, reliable broadband and device access are foundational, but the immediate priority is applying AI to sectoral services such as agriculture, water supply and irrigation to generate impact at scale.
EVIDENCE
She states that a stable power supply, devices and reliable broadband are basic foundations, then emphasizes that ADB focuses on AI applications in agriculture, water and irrigation, supporting these sectors while also building foundational infrastructure [105-114].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S4 includes ADB’s emphasis on stable power, reliable broadband and device access as foundational, and notes its focus on sectoral AI applications in agriculture, water and irrigation.
MAJOR DISCUSSION POINT
Foundational infrastructure and sector‑specific AI deployment
AGREED WITH
Arndt Husar, Johanna Hill, Zuhriddin Shadmanov, Hamam Riza
DISAGREED WITH
Zuhriddin Shadmanov
Argument 2
Regional Cooperation, Standards, and Trade Policy Openness (ADB perspective)
EXPLANATION
Mio describes ADB’s role as a catalyst for regional integration, helping both large economies like India and smaller countries by facilitating capacity building, private‑capital mobilisation and ensuring no country is left behind.
EVIDENCE
She notes that ADB supports India’s AI skill programme (over US$5 billion), stresses the need to avoid leaving any country behind, and shares experiences of trying to introduce AI-based solutions in a small neighbouring country where concerns about job loss highlighted the importance of ecosystem-wide approaches [255-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S13 and S15 discuss the importance of open standards and regional policy coordination for AI, while S4 references ADB’s role in catalysing inclusive regional AI cooperation.
MAJOR DISCUSSION POINT
ADB as catalyst for inclusive regional AI cooperation
AGREED WITH
Zuhriddin Shadmanov, Arndt Husar
DISAGREED WITH
Zuhriddin Shadmanov
Agreements
Agreement Points
Digital infrastructure must go beyond hardware and include solutions, standards and skills
Speakers: Arndt Husar, Dr. Saurabh Garg, Johanna Hill, Mio Oka
Digital Infrastructure Spectrum (Solutions, Standards, Skills); AI‑Ready Data Foundations; Regional Cooperation, Standards, and Trade Policy Openness; Regional Cooperation, Standards, and Trade Policy Openness (ADB perspective)
All speakers stress that digital infrastructure is not limited to data centres and compute; it also requires well-defined standards, solution-oriented approaches, skilled human capacity and regional coordination to be effective [23-26][7][158-166][255-267].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions emphasize that digital infrastructure should integrate standards, solutions and capacity building, not just physical assets, as highlighted in the Business Engagement Session on inclusivity and the need to close the digital divide [S36] and the call for comprehensive skills development across sectors [S37].
Developing AI talent and upskilling is essential for AI adoption
Speakers: Arndt Husar, Johanna Hill, Zuhriddin Shadmanov, Hamam Riza, Mio Oka
Digital Infrastructure Spectrum (Solutions, Standards, Skills); AI as a Trade Growth Driver and Need for Responsible Policy; Compute and Skills Gaps in Developing Nations (Uzbekistan); Skills and Ecosystem Development (Indonesia); ADB’s Focus on Foundational Services and Sectoral AI Applications
Each speaker highlights the need to build digital skills, from basic AI literacy to advanced talent pipelines, as a prerequisite for leveraging AI in trade, public services and economic growth [23-26][39-44][56-58][95-98][255-267].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple forums stress the importance of AI talent pipelines and continuous upskilling to enable adoption, noting that workforce development must span technical and non-technical roles [S37] and that AI’s impact on jobs requires proactive reskilling strategies [S47].
Financing AI infrastructure requires a mix of public funds, private investment and incentives
Speakers: Zuhriddin Shadmanov, Mio Oka, Arndt Husar
Financing Priorities and Balancing Public Resources (Uzbekistan); Private Capital Mobilization for AI Ecosystem (Uzbekistan); Regional Cooperation, Standards, and Trade Policy Openness (ADB perspective)
Speakers agree that building AI ecosystems needs substantial public allocation, tax incentives and venture-fund-of-funds, complemented by private capital and international partners to achieve scale [124-132][221-226][189-194].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence from high-level sessions shows consensus on blended financing models that combine government seed funding, private capital and international support to build AI ecosystems [S41], while India’s “Maitri” platform exemplifies a public-private collaborative financing approach for compute and data as digital public goods [S45]; similar blended-financing recommendations appear in regional AI policy roadmaps [S61].
AI can drive trade and economic growth but only if supported by trustworthy data and responsible policy frameworks
Speakers: Johanna Hill, Dr. Saurabh Garg
AI as a Trade Growth Driver and Need for Responsible Policy; AI‑Ready Data Foundations
Both speakers link AI’s potential to boost trade (up to 40 % by 2040) with the necessity of data discoverability, quality assessment and interoperable standards to ensure trust and regulatory confidence [39-44][5-6][7].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for trustworthy data and responsible AI governance is reflected in multi-stakeholder governance frameworks that define technological, institutional and civic layers of oversight [S42] and in UN statements warning that AI benefits for climate and trade depend on robust regulatory safeguards [S55]; broader analyses also link AI’s growth potential to inclusive, evidence-based policy design [S59].
Regional cooperation and shared standards are key to overcoming fragmentation and achieving economies of scale
Speakers: Johanna Hill, Mio Oka, Arndt Husar
Regional Cooperation, Standards, and Trade Policy Openness; Regional Cooperation, Standards, and Trade Policy Openness (ADB perspective); Digital Infrastructure Spectrum (Solutions, Standards, Skills)
All three emphasize that regional bodies, joint standards and cross-border collaboration can reduce regulatory fragmentation and enable smaller economies to benefit from AI investments [158-166][255-267][145-150].
POLICY CONTEXT (KNOWLEDGE BASE)
Fragmentation concerns are addressed in discussions on the Global Digital Compact and e-trade rules, which call for coordinated regional standards to unlock scale economies [S40]; multistakeholder initiatives on AI standards further underline the importance of shared normative frameworks [S46], while cross-sector partnerships are repeatedly cited as essential for overcoming digital silos [S41].
Similar Viewpoints
Both identify a three‑fold gap—lack of compute resources, quality data and skilled talent—as the main barrier to AI development in their countries and propose national roadmaps to address them [56-58][95-98].
Speakers: Zuhriddin Shadmanov, Hamam Riza
Compute and Skills Gaps in Developing Nations (Uzbekistan); Indonesia’s AI Roadmap: Tackling the “Triple Deficit”
Both stress the importance of open, trustworthy, and interoperable data as a foundation for AI, and link this to broader ecosystem and talent development efforts [7][200-204].
Speakers: Dr. Saurabh Garg, Hamam Riza
AI‑Ready Data Foundations; Skills and Ecosystem Development (Indonesia)
Unexpected Consensus
Democratization of AI beyond large economies
Speakers: Dr. Saurabh Garg, Hamam Riza
AI‑Ready Data Foundations; Skills and Ecosystem Development (Indonesia)
While Dr. Garg raises concerns about the heavy compute and power demands of current models and the need for federated, low-power approaches [12-15], Hamam explicitly mentions the goal of “democratizing AI for all” and building inclusive ecosystems [138-139]; both converge on the idea that AI must be made accessible to a broader set of actors, a point not explicitly raised by other participants.
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for AI democratization stress open-source models and shared resources to ensure AI benefits reach all countries, not just major economies, as argued in panel discussions on AI beyond scaling [S44] and reports on AI’s transformative potential for global growth beyond developed nations [S43]; initiatives like India’s public-good compute platform further illustrate democratizing approaches [S45].
Overall Assessment

The panel shows strong convergence on four core themes: (1) digital infrastructure must be holistic, integrating standards, solutions and skills; (2) capacity development and talent pipelines are critical; (3) financing models need blended public‑private approaches with incentives; (4) AI’s economic potential hinges on trustworthy data, responsible policy and regional cooperation to reduce fragmentation.

High consensus across speakers and regions, indicating a shared understanding that AI deployment in the Global South requires coordinated infrastructure, skill building, financing and governance frameworks. This consensus suggests that future policy initiatives can build on these common pillars to design inclusive, scalable AI strategies.

Differences
Different Viewpoints
Approach to AI infrastructure development – reliance on foreign hyperscalers versus building sovereign, locally‑controlled AI models and ecosystems
Speakers: Zuhriddin Shadmanov, Hamam Riza
Compute and Skills Gaps in Developing Nations (Uzbekistan); Indonesia’s AI Roadmap: Tackling the “Triple Deficit”
Uzbekistan plans to partner with Huawei and attract foreign investment to build data-centres, 5.5G/6G networks and a venture-fund-of-funds for AI startups, emphasizing external hardware and capital ([221-226]). Indonesia, by contrast, stresses the need to develop sovereign AI models that reflect local culture and to reduce dependence on external hyperscalers, focusing on a home-grown talent pipeline and domestic AI ecosystems ([132-134]).
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on sovereignty versus dependence on foreign hyperscalers are evident in case studies of NVIDIA’s partnership model for local inferencing [S50], contrasting philosophies between indigenous model development and open-source collaboration in France-India dialogues [S51], and concerns over strategic autonomy highlighted by Indonesia’s response to Huawei’s expanding role [S54]; broader geopolitical analyses also flag the strategic implications of foreign AI infrastructure reliance [S62].
Public‑funded versus private‑capital‑driven financing of AI ecosystems
Speakers: Zuhriddin Shadmanov, Mio Oka
Financing Priorities and Balancing Public Resources (Uzbekistan); ADB’s Focus on Foundational Services and Sectoral AI Applications; Regional Cooperation, Standards, and Trade Policy Openness (ADB perspective)
Uzbekistan allocates roughly US$300 million of public money to AI projects, creates tax incentives and a fund-of-funds to attract private investors, and stresses a balanced public-private mix ([124-132]). ADB stresses that the basic foundations are stable power, devices and broadband, and that the bank’s role is to catalyse private-capital mobilisation for sector-specific AI deployments, while warning against leaving any country behind ([105-114][255-267]). The two positions differ on the relative weight of state spending versus market-driven investment.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy literature presents divergent views on financing models: India’s “Maitri” platform showcases a public-driven collaborative funding structure for compute and data [S45], while other analyses argue that the scale of AI infrastructure necessitates substantial private sector investment to complement public seed funding [S60]; blended-financing recommendations further bridge the two approaches [S61].
Perceived level of AI adoption among SMEs
Speakers: Arndt Husar, Johanna Hill
SME adoption gap (implied in Arndt’s remarks); AI as a Trade Growth Driver and Need for Responsible Policy (WTO)
Arndt notes a huge adoption gap for SMEs, saying the technology has moved so fast that many small firms cannot yet integrate AI into their business models ([49-51]). Johanna, however, cites a WTO-commissioned survey showing that many SMEs are already using AI for market intelligence and view it as a game-changer ([45-48]). The speakers therefore disagree on how far AI penetration has progressed in the SME sector.
POLICY CONTEXT (KNOWLEDGE BASE)
Surveys from the UK indicate that a significant share of SMEs still lack basic digital tools, highlighting a gap between perceived and actual AI adoption [S48]; related research on scale-up hiring shows cautious AI integration among smaller firms, underscoring the perception challenge [S49].
Credibility of reported climate‑health AI initiatives
Speakers: Arndt Husar, Hamam Riza
Skills and Ecosystem Development (Indonesia) – climate‑health nexus; Arndt’s skeptical interjection
Hamam describes Indonesia’s Climate Smart initiatives, linking AI to malaria and dengue prediction and collaborating with NASA and universities ([208-212]). Arndt responds with a skeptical remark, “I’m not buying any of them,” indicating doubt about the feasibility or impact of those claims ([213-214]).
POLICY CONTEXT (KNOWLEDGE BASE)
UN climate officials warn that AI applications without proper oversight risk overstated benefits, raising questions about the credibility of climate-health AI claims [S55]; civil-society reports further argue that AI’s touted climate impact is often exaggerated and lacks rigorous validation [S58]; broader discussions on climate-tech implementation note the need for robust verification mechanisms [S57].
Unexpected Differences
Skepticism about Indonesia’s climate‑health AI projects
Speakers: Arndt Husar, Hamam Riza
Skills and Ecosystem Development (Indonesia) – climate‑health nexus; Arndt’s skeptical interjection
Arndt’s blunt comment “I’m not buying any of them” ([213-214]) was not anticipated given the generally collaborative tone of the panel, and directly challenges the credibility of Hamam’s described climate-health AI initiatives ([208-212]).
POLICY CONTEXT (KNOWLEDGE BASE)
Indonesia’s rapid AI expansion, especially involving foreign vendors like Huawei, has sparked national-security debates and skepticism regarding the authenticity and effectiveness of its climate-health AI initiatives [S54]; the country’s new AI strategy aims to attract global capital while confronting credibility concerns about project outcomes [S56]; similar credibility issues are highlighted in broader climate-tech assessments [S57].
Overall Assessment

The panel shows broad consensus that AI can drive development, but key disagreements surface around (i) whether AI infrastructure should be built through foreign partnerships or sovereign, locally‑controlled models; (ii) the balance between state‑led public financing and market‑driven private capital; (iii) the actual level of AI uptake among SMEs; and (iv) the plausibility of ambitious climate‑health AI programmes. These divergences reflect differing national contexts and strategic priorities.

Moderate – while there is shared recognition of AI’s importance, the varied viewpoints on financing, priority setting, and implementation pathways could lead to fragmented policies unless coordinated mechanisms are established. The disagreements highlight the need for flexible, context‑sensitive frameworks that align infrastructure, talent development, and regulatory standards across the Global South.

Partial Agreements
All three agree that a robust AI ecosystem is essential for development, but they differ on what should be prioritised first: Uzbekistan stresses immediate compute capacity and foreign‑partnered data‑centres ([56-58]), Indonesia stresses closing the talent and data deficits and building sovereign models ([95-98]), while ADB stresses stable power, broadband and sector‑specific AI services as the foundation before scaling up ([105-114]).
Speakers: Zuhriddin Shadmanov, Hamam Riza, Mio Oka
Compute and Skills Gaps in Developing Nations (Uzbekistan); Indonesia’s AI Roadmap: Tackling the “Triple Deficit”; ADB’s Focus on Foundational Services and Sectoral AI Applications
All concur that regional cooperation is vital for AI‑driven growth, yet they propose different levers: WTO highlights standards, policy openness indices and regulatory harmonisation ([158-166]), ADB stresses a catalytic role that mobilises private capital and capacity‑building while avoiding exclusion ([255-267]), and Arndt frames the discussion around the three‑S framework (solutions, standards, skills) as a holistic infrastructure approach ([23-26]).
Speakers: Johanna Hill, Mio Oka, Arndt Husar
Regional Cooperation, Standards, and Trade Policy Openness (WTO); Regional Cooperation, Standards, and Trade Policy Openness (ADB perspective); Digital Infrastructure Spectrum (Solutions, Standards, Skills)
Takeaways
Key takeaways
AI‑ready data requires four pillars: discoverability (metadata), trustworthiness (quality assessment), interoperability (unique identifiers), and usability (common standards).
Digital infrastructure is broader than compute; it includes solutions, standards, and skills (the “three S” framework).
AI is projected to boost global trade by up to 40% by 2040, but this depends on reliable digital infrastructure, skilled workforce, and responsible trade policies.
Developing nations face a triple deficit: insufficient data, compute capacity, and AI talent, exemplified by Uzbekistan and Indonesia.
Uzbekistan is pursuing a human‑centred AI strategy with $300 M public funding for AI projects, $200 M for data‑center infrastructure, tax incentives, and a goal to attract $1 B private investment.
Indonesia’s AI Roadmap targets 12 million AI‑skilled workers by 2030, addresses data/compute/talent gaps, promotes sovereign AI models, and establishes a digital academy (Korika) with AI‑enabled training tools.
ADB emphasizes foundational services (stable power, broadband, devices) and sector‑specific AI applications (agriculture, water, irrigation) while supporting infrastructure financing and capacity building.
Regional cooperation, shared standards, and policy tools such as the WTO’s AI Trade Policy Openness Index are seen as essential to reduce fragmentation and enable cross‑border AI collaboration.
Public‑private partnership models (tax incentives, venture‑fund‑of‑funds, special economic zones) are highlighted as mechanisms to mobilize capital for AI ecosystems.
Resolutions and action items
Uzbekistan will allocate US$300 M for AI projects and US$200 M for data‑center/GPU infrastructure, and pursue US$1 B of private investment through incentives and venture‑fund‑of‑funds.
Uzbekistan will continue building a national AI data lake and provide open or low‑cost access to SMEs and startups.
Indonesia will launch the Korika digital academy, deploy the Kodika chatbot for up‑skilling, and create special economic zones for hyperscalers and edge‑computing facilities.
Indonesia will prepare a presidential regulation to promote sovereign AI model development and attract investment in localized GPU/edge infrastructure.
ADB will support democratization of AI compute, fund sectoral AI pilots (agriculture, water, irrigation), and assist member countries with master‑planning and capacity‑building initiatives.
WTO will continue publishing and updating the AI Trade Policy Openness Index to guide countries on responsible AI trade policies.
All participants agreed to pursue multi‑pronged public‑private financing strategies and to engage regional bodies (ASEAN, AfCFTA, etc.) for standards harmonization.
Unresolved issues
How to efficiently balance and prioritize the three pillars (skills, infrastructure, policy) within limited budgets in each country.
Specific mechanisms for cross‑border data sharing that respect data sovereignty while enabling AI collaboration.
Ways to reduce the high energy consumption of large AI models and explore alternative, lower‑power inference architectures.
How to close the adoption gap for SMEs that lack clear pathways to integrate AI into their business models.
Details on how regional standards and protocols will be operationalized and enforced across diverse regulatory environments.
Long‑term sustainability of private‑capital‑driven AI ecosystems without creating dependence on a few large technology providers.
Suggested compromises
Adopt a mixed financing approach: combine public funding, tax incentives, and private‑sector venture capital to spread risk and leverage expertise.
Use regional cooperation to share infrastructure (e.g., joint data‑centers, cloud zones) while allowing each country to retain control over critical data through standardized interoperability frameworks.
Balance AI ambition with realistic expectations: recognize AI is not a solution for every problem and prioritize use cases with high societal impact before scaling.
Implement incremental standards (common metadata, unique identifiers) that can be adopted gradually, reducing fragmentation without imposing a one‑size‑fits‑all regime.
Thought Provoking Comments
When we talk in terms of AI infrastructure, we talk in terms of gigawatts of power. Compared to that, a human being requires 2,000 calories, which is only 100 watts. So are we missing something out there in the infrastructure?
This observation reframes the AI‑infrastructure debate from a purely technical issue to a sustainability and efficiency challenge, highlighting the massive energy gap between biological cognition and current AI models.
It shifted the conversation from describing data‑readiness and metadata to questioning the fundamental design of AI systems. Several panelists later referenced infrastructure efficiency (e.g., Arndt’s focus on the ‘three S’s’ and Mio’s discussion of balancing foundation vs service‑level AI), prompting a deeper look at power‑aware model design and the need for greener AI.
Speaker: Dr. Saurabh Garg
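A quick back-of-envelope conversion illustrates the roughly 100-watt figure: a 2,000-kilocalorie daily dietary intake averages out to

\[ \frac{2000\ \text{kcal/day} \times 4184\ \text{J/kcal}}{86\,400\ \text{s/day}} \approx 97\ \text{W}, \]

orders of magnitude below the gigawatt-scale demands cited for AI infrastructure.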
The three S’s of digital infrastructure – Solutions, Standards, and Skills – should frame our discussion.
By explicitly naming the three pillars, Arndt provided a clear analytical framework that guided the rest of the panel, ensuring that each contribution touched on a distinct but interrelated dimension.
All subsequent speakers organized their remarks around these themes (e.g., Johanna on standards and policy openness, Zuhriddin on skills and infrastructure, Hamam on a penta‑helix platform). This framing turned a potentially scattered dialogue into a structured, multi‑angle exploration.
Speaker: Arndt Husar
Our projections at the WTO Secretariat suggest that by 2040 AI‑enabled trade could grow by almost 40% – the ‘40 by 40’ effect.
The quantitative forecast gave a concrete, ambitious target that anchored the abstract benefits of AI in trade to a measurable outcome, making the stakes of the discussion tangible.
It prompted other panelists to link their national strategies to trade outcomes (e.g., Hamam’s emphasis on sovereign LLMs for local markets, Mio’s focus on service‑level AI for agriculture). The figure also sparked a brief debate about the conditions needed to achieve such growth, deepening the analysis of policy and infrastructure gaps.
Speaker: Johanna Hill
We have created a ‘penta‑helix’ platform that brings together government, industry, academia, civil society and media to build AI talent, infrastructure and use‑cases, including a digital academy (Korika) and a climate‑health nexus.
This description introduced an innovative, multi‑stakeholder governance model that goes beyond the usual government‑industry partnership, showing how coordinated ecosystems can accelerate AI adoption while addressing societal challenges.
The comment expanded the conversation from isolated national policies to collaborative ecosystem design. It led Arndt to probe how Indonesia operationalises the platform, and inspired comparisons with Uzbekistan’s fund‑of‑funds approach and the WTO’s regional standards work.
Speaker: Prof. Hamam Riza
We need to look beyond building massive data‑centres and instead match compute to the specific problem – not every AI use‑case requires gigawatts of power.
Mio’s practical reminder about right‑sizing compute resources introduced a nuanced perspective on infrastructure investment, emphasizing efficiency and relevance over sheer scale.
This comment resonated with Dr. Garg’s earlier power‑consumption point and steered the dialogue toward smarter, demand‑driven infrastructure planning. It also reinforced the panel’s recurring theme of balancing foundational investments with service‑level applications.
Speaker: Mio Oka
When I tried to introduce an AI‑based fish‑feeding system in a small neighboring country, the government rejected it in seconds because it would reduce employment.
The anecdote highlighted the socio‑economic trade‑offs of AI deployment, reminding the group that technology adoption must be aligned with local labor concerns and political realities.
It caused a brief shift in tone, from optimistic infrastructure talk to a more grounded discussion of AI’s societal impact. Arndt used it to underscore the importance of skills development and inclusive policy, and it prompted other speakers to acknowledge the need for responsible AI that safeguards jobs.
Speaker: Mio Oka
We have launched an AI Trade Policy Openness Index to measure how open economies are to AI‑driven trade, revealing that low regulation can be both an opportunity and a competitiveness risk.
Introducing a concrete measurement tool added analytical depth, allowing participants to discuss not just qualitative gaps but also quantitative benchmarks for policy progress.
The Index became a reference point for later discussions on regional standards and interoperability. It encouraged other panelists to consider how their national strategies could be evaluated against such metrics, enriching the conversation with a data‑driven perspective.
Speaker: Johanna Hill
Overall Assessment

The discussion was shaped by a handful of pivotal remarks that moved the dialogue from abstract descriptions of AI infrastructure to concrete challenges, frameworks, and solutions. Dr. Garg’s power‑consumption analogy and Arndt’s three‑S framework set the thematic boundaries, while Johanna’s 40% trade growth forecast and the WTO’s Openness Index supplied measurable goals. Indonesia’s penta‑helix model and Uzbekistan’s mixed public‑private financing illustrated innovative governance approaches, and Mio’s anecdotes about energy‑efficient compute and employment concerns injected practical realism. Collectively, these comments redirected the conversation toward a balanced view that intertwines technical capacity, sustainability, policy openness, and socio‑economic impact, leading the panel to explore nuanced, actionable pathways for AI development in the Global South.

Follow-up Questions
Are there alternative mechanisms to reduce the massive compute and power requirements of AI models?
Seeks sustainable, energy‑efficient AI infrastructure solutions beyond current gigawatt‑scale consumption.
Speaker: Dr. Saurabh Garg
How can AI‑ready data be made discoverable, trustworthy, interoperable, and usable across systems while preserving privacy?
Identifies core challenges for data infrastructure that underpin effective AI deployment.
Speaker: Dr. Saurabh Garg
What mechanisms can ensure data sets have value beyond AI and can be leveraged for business while preserving individual privacy?
Calls for policies or technical approaches that balance data utility with privacy protection.
Speaker: Dr. Saurabh Garg
How should countries balance priority setting among AI infrastructure, skills, and policy, and how can they finance these simultaneously?
Addresses strategic allocation of scarce resources in developing economies.
Speaker: Arndt Husar
What is the role of hyperscalers in supporting sovereign AI development and localized large language models?
Explores dependence on global cloud providers versus building national AI compute capacity.
Speaker: Arndt Husar
How can trade competitiveness be enhanced through AI while respecting data sovereignty and addressing regulatory fragmentation?
Seeks policy and regulatory frameworks that enable cross‑border AI‑driven trade without compromising sovereignty.
Speaker: Arndt Husar
What does demand for AI infrastructure look like from ADB member countries, and how can it be met?
Aims to map member‑country needs to shape financing and technical assistance.
Speaker: Arndt Husar
How can private‑sector capital be mobilized effectively for AI infrastructure and startups in Uzbekistan?
Looks for financing models that combine public funds, foreign partners, and domestic venture capital.
Speaker: Arndt Husar
What specific regional cooperation areas (e.g., interoperability standards, talent mobility, shared datasets, joint research) can most improve AI development in the Global South?
Identifies leverage points for regional collaboration to overcome scale and resource constraints.
Speaker: Arndt Husar
What are the opportunities and challenges of regional cooperation integration on digital infrastructure in South Asia?
Seeks to understand how South Asian countries can jointly develop digital and AI infrastructure.
Speaker: Arndt Husar
How effective is the AI Trade Policy Openness Index in measuring openness and influencing AI‑driven trade growth in low‑income economies?
Calls for empirical research to validate the index and its policy implications.
Speaker: Johanna Hill
What is the impact of AI‑driven climate‑health nexus projects (e.g., malaria and dengue prediction) in Indonesia?
Requests evaluation of AI applications for public health and climate resilience.
Speaker: Hamam Riza
What are the socio‑economic effects of AI automation on employment in sectors such as agriculture, and how can potential job losses be mitigated?
Highlights the need to study AI’s impact on labor markets and develop inclusive policies.
Speaker: Mio Oka
What frameworks can be developed to de‑risk AI investments and catalyze private‑sector participation?
Seeks mechanisms to lower investment barriers and attract private capital to AI projects.
Speaker: Arndt Husar
How can AI compute resources be democratized across borders, as envisioned by the ADB working group on compute democratization?
Explores models for shared, cross‑national compute infrastructure to reduce duplication and cost.
Speaker: Arndt Husar
How scalable and effective are digital academies like the Korika Academy in building AI talent pipelines?
Calls for assessment of training programs, their reach, and outcomes for workforce development.
Speaker: Hamam Riza

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Policymaker’s Guide to International AI Safety Coordination

Policymaker’s Guide to International AI Safety Coordination

Session at a glance: Summary, keypoints, and speakers overview

Summary

Nicolas Miailhe opened by noting that the AI race has moved from theory to massive investment, with billions, even trillions, of dollars being poured into development while safety research lags behind [1-3]. He explained that AI Safety Connect was created to mobilise global-majority engagement, convene semi-annual summits at AI conferences and the UN, and run capacity-building and closed-door trust-building exercises [6-8][11-15].


Stuart Russell described the International Association for Safe and Ethical AI as a worldwide scientific society of thousands of members that aims to ensure AI systems operate safely and ethically, and he highlighted that achieving this requires both technical solutions and coordinated governance [33-38][40-44]. He stressed that AI-related harms cross borders, making global coordination essential, and pointed to India’s summit as an example of inclusive international dialogue [44-46].


Eileen Donahoe framed the panel by stating that rapid AI progress, deployed with only minimal guardrails, is outpacing a fragmented and non-binding governance landscape, and argued that middle-power and majority states can leverage pooled resources and normative influence to shape global AI safety [56-61][62-66]. She added that the panel would identify present coordination gaps and propose practical steps for policymakers in the coming months [66-68].


Mathias Cormann of the OECD identified inclusion of all stakeholders and evidence-based trust as key lessons, and warned that policy cycles are too slow for the pace of AI innovation, urging occasional pauses for testing and auditing [77-84][85-88]. He argued that the most critical frontier-AI safety infrastructure is coordinated transparency and incident reporting, citing the Hiroshima Code of Conduct and the emerging Global Partnership on AI incident-reporting framework as steps toward an international response centre [91-96].


Singapore’s Minister Josephine Teo noted that smaller states depend on foreign AI technologies, so translating scientific knowledge into effective policy requires rigorous testing, standards and international collaboration through bodies such as the OECD, AI Safety Connect and ICI [103-110][111-119][140-144]. Malaysia’s Gobind Singh Deo highlighted the ASEAN AI Safety Network and a forthcoming AI Governance Bill, emphasizing that standards and regulations remain ineffective without agencies capable of enforcement, and that AI governance must be institutionalised across the region [152-158][162-166][167-173].


World Bank Vice-President Sangbu Kim said the Bank can help Global South countries embed safety architecture from the design stage by partnering with advanced economies and firms to share red-team practices and build capacity [178-184][185-200]. Jann Tallinn warned that the most pressing risk is the unchecked race for superintelligence, calling for a slowdown supported by transparency and noting that private investors now have little influence over the leading AI firms [210-218][221-227][231-235].


Nicolas concluded that the coordination gap in frontier AI safety is real and urgent but can be closed, inviting participants to the next AI Safety Connect at the UN General Assembly to continue collective action [260-264].


Keypoints

Major discussion points


Rapid AI progress outpaces safety and policy, demanding urgent global coordination.


Nicolas opens by noting the “race towards artificial intelligence is no longer a theoretical pursuit” and that “safety is not keeping pace” [1-4]. Stuart Russell stresses that AI harms “cross borders” and require coordinated governance [44-46]. Eileen Donahoe describes a “fragmented…risk-management landscape” that fails to shape incentives [57-59]. Mathias Cormann adds that “AI is moving much faster than policy cycles” creating gaps [82-84].


Middle-power and global-majority states can lead AI governance through pooled resources, normative influence, and regional networks.


Donahoe argues that “middle powers…can shape the direction of global AI practices” and that their collective power will determine whether governance moves beyond rhetoric [62-66]. Cormann highlights the need for “inclusion…objective evidence” and notes the OECD’s success in building consensus among many countries [77-80]. Singapore’s Minister Teo stresses translating science into policy and the importance of interoperable standards, while Malaysia’s Minister Gobind points to the ASEAN AI Safety Network as a model for regional coordination [103-110][152-156].


Concrete infrastructure proposals: transparent incident reporting, an international incident-response centre, and open-source safety tools.


Cormann identifies “coordinated transparency and incident reporting” as the most critical frontier-AI safety infrastructure [91-92]. He describes the GPAI Common Framework for Incident Reporting and the prospect of an international Incident Response Center [95-97]. He also mentions the OECD’s open-source safety-tool catalogue to make trustworthy AI easier to implement [98-99].


Building institutional capacity, standards, and enforcement mechanisms is essential.


Teo uses the aviation-safety analogy to illustrate the need for rigorous testing, standards, and long-term research before policies are set [110-119][132-138]. Gobind emphasizes that standards and regulations must be backed by agencies capable of enforcement, and that ASEAN needs sustained political will and technical resources [162-166][172-173].


Calls for a slowdown or even a provisional prohibition on superintelligence development, and discussion of investors’ limited influence.


Cormann suggests occasional “pause, test, monitor, audit” to build public trust [84-86]. Jann Tallinn warns that the “cut-throat race” in labs is the biggest risk and cites the Future of Life Institute’s call for a prohibition until broad scientific consensus and public buy-in are achieved [207-214][226-227]. He later notes that investors now have little leverage over the leading AI firms [231-235].


Overall purpose / goal of the discussion


The panel was convened to diagnose the current “coordination gap” in frontier AI safety, highlight why middle-power and global-majority engagement is crucial, and outline concrete, near-term actions (incident-reporting frameworks, standards, institutional capacity, and possible slowdown measures) that policymakers can take within the next 12-24 months to make AI development safer and more trustworthy [57-66][91-99][240-250].


Overall tone and its evolution


The conversation begins with an urgent, almost alarmist tone about the speed of AI development and the lag in safety [1-4][57-59]. It quickly shifts to a collaborative, solution-focused tone as participants emphasize inclusive coordination, shared lessons, and concrete infrastructure [77-84][91-99]. Mid-discussion, the tone becomes more pragmatic, using analogies (aviation safety) and regional examples to stress the need for standards and enforcement [110-119][158-166]. Towards the end, a more cautionary and even admonitory tone emerges, calling for pauses, possible prohibitions, and highlighting the limited role of investors [84-86][207-214][256]. The closing remarks return to a hopeful yet urgent tone, reaffirming that the coordination gap is “real, urgent, and closable” [262-264].


Speakers

Speakers (from the provided list)


Gobind Singh Deo – Minister (Malaysia), leading Malaysia’s 2025 ASEAN chairmanship; involved in AI governance and ASEAN AI Safety Network. [S1]


Jann Tallinn – AI investor; founding engineer of Skype; co-founder of the Future of Life Institute. [S3]


Mathias Cormann – Secretary-General of the Organisation for Economic Co-operation and Development (OECD). [S5]


Sangbu Kim – Vice President for Digital and AI at the World Bank. [S6]


Stuart Russell – Professor of Computer Science, University of California, Berkeley; Director of the International Association for Safe and Ethical AI (ICI). [S8]


Nicolas Miailhe – Founder/CEO of AI Safety Connect; organizer of AI safety convenings and capacity-building initiatives.


Eileen Donahoe – Founder and Managing Partner of Sympathico Ventures; former U.S. Special Envoy for Digital Freedom and Ambassador to the UN Human Rights Council. [S14]


Osama Manzar – Co-organizer (Digital Empowerment Foundation) for AI Safety Connect; involved in grassroots outreach. [S18]


Josephine Teo – Minister for Digital Development and Information, Government of Singapore. [S20]


Additional speakers (not in the provided list)


Cyrus – Host/moderator who introduced the session (mentioned in the opening remarks).


Dick Schuh – Prime Minister of the Netherlands (mentioned as a guest speaker delivering a special address).


Matthias Korman – (Same person as Mathias Cormann; already listed).


Other brief mentions: “Professor Stuart Russell” (already listed), “Osama Manzar” (already listed).


Full session reportComprehensive analysis and detailed insights

The session opened with Nicolas Miailhe warning that the “race towards artificial intelligence is no longer a theoretical pursuit” and that “billions and maybe trillions now of dollars are getting deployed to push the frontier of artificial intelligence” while “safety is not keeping pace with it” [1-4]. He noted that AI Safety Connect was created to “help shape the frontier AI safety and secure agenda towards what I would frame as commonsensical AI risk management” and to “encourage global majority engagement into frontier AI safety” [6-8]. The event was co-hosted by the International Association for Safe and Ethical AI (ICI) and the Digital Empowerment Foundation, represented by Osama Manzar [11-15][12-13], and featured a special address by Prime Minister Dick Schuh of the Netherlands [9-10]. To achieve its aims, the organisation convenes semi-annual gatherings at major AI summits (Paris, India, upcoming Switzerland) and at the UN General Assembly, and also runs capacity-building and closed-door trust-building exercises [11-15].


Stuart Russell introduced the International Association for Safe and Ethical AI (ICI), describing it as “a global, democratic, scientific and professional society” with “several thousand members and approaching 200 affiliate organisations” [33-35]. He also joked that ICI is “the world’s worst acronym.” Russell framed AI safety as both a technical challenge (“how do we even build systems that have that property?”) and a governance challenge (“how do we ensure that those are the systems and only those systems get built?”) [40-42]. He stressed that harms such as psychological damage or loss of human control “cross borders” and therefore “global coordination is essential” [44-46].


Eileen Donahoe set the agenda by observing that the “race to AGI and superintelligence intensifies” while “the technology is advancing rapidly and being deployed with minimal guardrails” [56-57]. She argued that existing risk-management processes are “ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators” [58-60]. Donahoe highlighted the strategic potential of “middle-power and global-majority states” to “leverage pooled resources, market leverage, normative influence and regulatory innovation” to shape AI safety, asserting that “leading from the middle may turn out to be a more powerful approach than previously anticipated” [62-65]. The panel’s purpose, she said, was to “identify present-day coordination gaps in the global AI practice and the global market” and to propose “practical steps policymakers can take in the coming months” [66-68].


Mathias Cormann (OECD) reflected on lessons learned from building consensus. He stressed that “trust is built through inclusion and on the basis of objective evidence” and that bringing together governments, companies, civil society and technical experts is essential because each “has a different perspective and different imperatives” [77-80]. He warned that “AI is moving much faster than policy cycles have traditionally moved,” creating gaps between innovation and necessary oversight [82-84]. Cormann advocated occasional “pause, test, monitor, audit, share information” to build confidence that systems respect fundamental rights [85-86]. Regarding infrastructure, he identified “coordinated transparency and incident reporting” as the most critical piece, citing the Hiroshima Code of Conduct reporting framework, under which 25 organisations across nine countries have already submitted detailed risk-management reports, and the emerging Global Partnership on AI (GPAI) Common Framework for Incident Reporting [91-96]. He suggested that this framework could evolve into an “international AI Incident Response Center” that shares alerts without penalising reporters [95-97]. Cormann also announced an OECD open call for open-source safety and evaluation tools, to be catalogued on the OECD.ai platform, thereby making trustworthy AI “easier to implement in practice” [98-99].


Singapore’s Minister Josephine Teo explained that smaller states “cannot set the rules” because the AI technologies they rely on “do not originate from our shores” [104-107]. Nevertheless, she argued that policymakers must “translate what we know from science into policy” through rigorous testing, simulations and interoperable standards. Using an aviation-safety analogy, she described how determining safe runway separation for A380s required “invest[ing] in the research… in the tests… in the simulations” and warned that differing national standards would create operational difficulties [110-119][132-138]. Teo concluded that “international collaboration through bodies such as the OECD, AI Safety Connect and ICI” is required to develop standards that are both scientifically sound and globally interoperable [140-144].


Minister Gobind Singh Deo (Malaysia) highlighted the ASEAN AI Safety Network as a concrete regional mechanism and noted Malaysia’s “dual-track approach of building national capacity while leading regional coordination” [152-156]. He warned that standards, regulations and legislation are ineffective without an “agency that can enforce it,” otherwise they remain “strong on paper but … not … have that impact” [162-166]. Deo called for sustained political will, technical capacity and resources to operationalise the network, and argued that ASEAN must first strengthen domestic institutions before moving to a collective regional framework [167-173].


Sangbu Kim, Vice-President for Digital and AI at the World Bank, described how the Bank can help Global South countries embed safety “from the design stage” by “partnering with advanced economies… and very high-end examples” to share red-team practices and build capacity [178-184][185-200]. He noted the paradox that AI is “the spear” capable of penetrating any shield, yet “we also can build strong protective systems by fully utilizing AI,” underscoring the need for close collaboration between developing and advanced economies to stay ahead of emerging threats [196-199][200].


Jann Tallinn, co-founder of the Future of Life Institute, warned that the “cut-throat race” in top AI labs poses the greatest danger and called for a “slowdown” until two conditions are met: a broad scientific consensus that superintelligence can be developed safely, and strong public buy-in [210-214]. Tallinn illustrated the competitive climate with a recent photo of Narendra Modi, Dario Amadei and Sam Altman standing apart without linking hands, and noted that Amadei and Demis Hassabis had called for a slowdown at Davos [215-218]. He argued that massive funding streams could be leveraged as a lever for safety if public pressure is sufficient, but observed that “investors don’t play much of a role anymore because the leading AI companies now are kind of above the level where private investors can influence them” as they head toward IPOs [221-227][232-235]. Tallinn reiterated the need to “slow down” and suggested that greater transparency about what AI leaders know would help create the political pressure required for a slowdown [256-257].


When asked to prioritise actions for the next 12-24 months, Minister Teo said the “AI safety research priorities need to be refreshed” because the field moves quickly, and that “we need to introduce better testing tools” to give developers practical assurance [240-249]. Cormann added that there is “no one thing that will make us all safe” and called for a “comprehensive” effort that catches up with innovation while deepening coordination [251-254]. Deo stressed the need to “institutionalise” AI-safety governance structures so they can keep pace with rapid technological change [253-255]. The panel collectively agreed that coordinated transparency, incident-reporting frameworks and the development of open-source safety tools are immediate priorities, while recognising that enforcement mechanisms and sustained institutional capacity remain open challenges [91-99][162-166][236-239].


Nicolas Miailhe closed by reaffirming that “the coordination gap in frontier AI safety is real, and it is urgent” yet “closable” [262-264]. He invited participants to the forthcoming UN General Assembly session in New York, where the fourth edition of AI Safety Connect will be hosted, hoping to continue the collective effort [265]. Osama Manzar concluded with a broader moral framing, urging that “the entire safety aspect of AI should be more from ‘please save people from AI’… we have to save human intelligence from artificial intelligence” and calling for strong safety guards and policy playbooks to be built into AI systems [266-276].


Overall, the discussion revealed strong consensus that AI risks are global and demand coordinated governance, inclusive evidence-based consensus-building, and robust capacity-building. Middle-power and regional actors were identified as pivotal levers for shaping standards, while concrete infrastructure proposals (transparent incident reporting, an international response centre, and open-source safety tool catalogues) were widely endorsed. Points of contention included the extent to which private investors can influence safety incentives, whether voluntary reporting or mandatory enforcement should dominate, and the preferred mechanism for slowing development (periodic pauses versus a provisional prohibition). These disagreements underscore the complexity of aligning diverse stakeholder interests into a coherent global AI-safety strategy.


Session transcriptComplete transcript of the session
Nicolas Miailhe

that the race towards artificial intelligence is no longer a theoretical pursuit. As billions and maybe trillions now of dollars are getting deployed to push the frontier of artificial intelligence, the technology is now advancing rapidly. And safety is not keeping pace with it. There are wonderful opportunities on the other side of this quest. There are also big risks. And so that’s the purpose, that’s the reason AI Safety Connect was founded. AI Safety Connect is there to help shape the frontier AI safety and secure agenda towards what I would frame as commonsensical AI risk management. AI Safety Connect has been founded to encourage global majority engagement into frontier AI safety. And AI Safety Connect has been created to showcase concrete governance coordination mechanisms, tools, and solutions.

So how do we do this? We convene at each AI summit. So last year we started in Paris, this year in India, next year we’re going to be in Switzerland. But we also convene at the UN General Assembly, right? We need a faster tempo for these safety discussions, so every six months we have this global convening. We also do capacity building, and we also do trust building exercises at times behind closed doors. Well, this week in New Delhi has been an intense one, an impactful one. On Tuesday we had a full day of panels, conference, solution demonstrations, and closed-door workshop discussions on some specific nuts to crack to advance AI safety. We, for example, had the privilege of hosting Prime Minister Dick Schuh from the Netherlands on stage to deliver a special address on the role of top leadership in advancing AI safety.

We also engage with industry, engage with academia, of India and abroad. So it’s been an extremely busy week beside our main event. We had this closed-door discussion that I was mentioning, yesterday and today, these closed-door scientific dialogues. We’re going to publish the results soon; they brought together senior industry leaders to discuss shared responsibility for AI safety. Well, obviously, none of this would happen without partnership. And we want to thank our co-hosts, the International Association for Safe and Ethical AI and its director, Professor Stuart Russell, to whom I will hand over the floor in a few minutes, and the Digital Empowerment Foundation, who is anchoring us at the grassroots here with Osama Manzar, who will close the session later on.

And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moderate that panel and we’re thankful for that. The Future of Life Institute, Ima and Yann, who’s been supporting this effort, and the Mindero Foundation, whose team is here as well. And it’s great to have your support and we are thankful for that. So today we’re about to hear from His Excellency Matthias Korman, who’s the Secretary General of the OECD. We’re going to hear from Her Excellency Minister Josephine Theo, who’s the Minister for Digital Development and Information at the Government of Singapore. Thank you for your continuous support, really appreciate that. Same for Jann Tallinn, who’s the AI investor but also a founding engineer at Skype and the co-founder of the Future of Life Institute. And last but not least, we also have Minister Teo, who’s going to be with us from Malaysia, Minister for Digital Development and Information. Thank you, Minister, as well as Vice President Kim for Digital and AI at the World Bank. So an extremely important conversation to have. And before we welcome you to the stage, I would like to hand over the floor to Professor Stuart Russell to say a few words and to speak about also what’s happening next week in Paris. Thank you so much.

Stuart Russell

Thank you very much, Cyrus and Nico. So as Nico mentioned, the International Association for Safe and Ethical AI, or ICI, the world’s worst acronym, is a global, democratic, scientific and professional society. We have several thousand members and approaching 200 affiliate organizations. Our mission is to ensure that AI systems operate safely and ethically for the benefit of humanity. And as Nico mentioned, our second annual conference will take place in Paris starting on Tuesday. It’s still, I think, possible to register, but we’re already up over 1,300 people coming. It’s at UNESCO headquarters in Paris. Thank you. So achieving this mission of ensuring… that AI systems operate safely and ethically is partly a technical challenge. How do we even build systems that have that property?

But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this panel is mainly about this second challenge. And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders. And we must coordinate to make sure that they don’t happen or they don’t originate anywhere. And it’s, I think, fitting that we are having this summit here in India, which has really, among other things, championed the idea that everyone on Earth should have a say. And so with that, I will hand over to Eileen. Thank you very much.

Nicolas Miailhe

Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She’s also the former U.S. Special Envoy and Coordinator for Digital Freedom and Ambassador to the UNHCR. Eileen? Welcome the speaker on the floor. Please, Your Excellency, Mr. Mattias Korman, Mr. Gobind Singh Deo, Mr. Josephine Teo, and Mr. Jann Tallinn, as well as Mr. Sangbu Kim, join us on stage.

Eileen Donahoe

Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get right to our speakers. So we’re here to share views on the opportunity for policymakers to impact international AI governance. As the race towards AGI and superintelligence intensifies, AI safety advocates face a compounding challenge. The technology is advancing rapidly and being deployed with minimal guardrails, while the risk management processes that do exist are either ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators. The result is an unharmonized governance landscape that fails to shape the behavioral incentives of those building and funding frontier AI. Economies, governments, and societies do not respond well to such mixed signals.

While much of the discourse on frontier AI safety has focused on AI superpowers, there’s an urgent need for deeper international diplomacy on the most… extreme risks. At this juncture, middle powers and global majority states can’t be seen as peripheral actors in this landscape. Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safeties. Leading from the middle may turn out to be a more powerful approach than previously anticipated. Whether or not that collective power is exercised now will determine whether international AI governance moves from the rhetorical level to the real-world impact on safety. This panel will aim to identify present-day coordination gaps in the global AI practice and the global market.

We will also look at the role of global AI in international AI safety and highlight practical steps policymakers can take in the coming months to close them. So to our panel, I’ll start with Secretary General Corman. The OECD has done remarkable work over the past decade, developing consensus on the OECD principles, providing a definition of AI systems that has resonated internationally, and playing an international role in operationalizing the Hiroshima International Code of Conduct. Along with those foundations, we now have the International AI Safety Report and the Singapore Consensus on Global AI Safety Research Priorities. With these principles, definitions, and frameworks in mind, a two-part question for you. First, what are the key lessons learned from the process of building consensus and then implementing these frameworks?

And then second, looking ahead, what’s the most critical? What’s the most critical piece of coordinated frontier AI safety infrastructure we should be building now? Some have called for an international incident response center, and we’re all curious whether you think that should be a priority and achievable. Just some small, easy questions.

Mathias Cormann

In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is built through inclusion and on the basis of objective evidence. And, you know, I think what we’ve learned over the last few years is that bringing together all the relevant actors, governments, companies, civil society, technical experts, is what we need to do. I mean, each has a different perspective and different imperatives. I mean, markets reward the private sector for speed, scale, and innovation, while governments must manage risk and protect the public interest without stifling progress. But a challenge, and it’s been mentioned in some of the opening remarks, a challenge for policymakers in this context is that AI is moving much faster than policy cycles have traditionally moved, which easily then creates gaps between innovation, progress and opportunity, and the necessary oversight, mitigation and management of risk.

But all sides in this conversation do share an essential common interest, and that is to ensure that the systems that are developing are trustworthy, because without public trust in the end, even the most powerful AI tools will struggle to gain broad adoption. So that means that occasionally, and it’s not always popular with everyone, but occasionally we should slow down. Occasionally we should actually pause. Pause, test, monitor, audit, share information, and take the time and invest in building confidence that these systems can work as intended and respect fundamental rights. So that’s sort of, I guess, the first point.

Another critical lesson involves international consistency, and this is part of the reason why these sorts of summits are so important: to really facilitate these conversations among countries and among different jurisdictions, because national priorities can vary quite widely and there are of course fragmentation and compliance-cost-related risks. And at the OECD, really what we’ve been doing for six decades now, across different policy areas, is to try and reduce fragmentation by achieving alignment around key principles, building shared evidence and facilitating the necessary conversations to develop a more coherent, better coordinated approach moving forward. And on AI, I mean, we’ve developed the OECD principles, which were first adopted in 2019 and which are now adhered to by 50 countries around the world, and that was really the first globally recognized baseline for trustworthy AI. The OECD’s lifecycle definition of an AI system has since shaped policy frameworks from the EU AI Act to U.S. executive orders. And we’ve had just earlier the meeting of the Global Partnership on AI, co-chaired by Korea and Singapore. We’ve got the OECD AI Policy Observatory, which is sort of essentially the broad gamut of all of the different policy approaches around the world, to provide countries and industries with data and evidence on what’s being done, facilitating peer learning, and trying to take some of the politics and the rhetoric out of it, but really looking at the facts.

Now, looking ahead, and you sort of ask a question here about what to do about the risk. I mean, the most critical piece of frontier AI safety infrastructure is coordinated transparency and incident reporting. I mean, the Hiroshima AI Process Code of Conduct and its reporting framework launched at the AI Action Summit in Paris last year. You know, that’s a promising step, and we’ve got to continue to develop that. Since their publication, 25 organizations across nine countries have already submitted detailed reports on how they manage AI risks, offering for the first time a comparable view of developer practices across jurisdictions. The next stage is to strengthen information sharing on AI failures and near misses. The GPAI Common Framework for Incident Reporting aims to help us collectively learn from mistakes before they scale globally, and over time, this could evolve into an international AI Incident Response Center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good faith. Finally, we do need to scale access to practical safety tools. With global partners, the OECD recently launched an open call for open source safety and evaluation tools hosted in the OECD.ai catalog of tools and metrics to make trustworthy AI easier to implement in practice. I mean, these are some initiatives to form the foundation of a more transparent, data-driven, and interoperable AI governance ecosystem, and

Eileen Donahoe

Excellent. Minister Teo, a number of questions for you, but let me start with the fact that Singapore occupies a very distinctive position in the global geostrategic landscape as a pro-innovation, advanced knowledge economy, with deep commercial and diplomatic ties to both the U.S. and China. Thank you. As the race to AGI intensifies and bilateral tensions mount, is there a role for Singapore and other middle powers to play in bridging the coordination gap to keep scientific and safety channels open? And also, what’s the most important step middle powers can take in the next 12 months to help establish a shared minimum understanding of frontier safety?

Josephine Teo

Well, thank you very much for that question. I think there is no running away from the fact that for smaller states, and that includes Singapore, the technology that our companies, our citizens are going to rely on do not originate from our shores. So they don’t necessarily come within our jurisdictions. We don’t always get to set the rules. Having said that, I do believe that we’re not without agency. Thank you. It doesn’t mean that we take a step back and just let things happen to us. There are still things that we can do. One of the most important things I think as policymakers is for us to think about what it takes to translate what we know from science into policy.

And I wanted to just say why this is so important. In our case, as policymakers, the key questions will always be, are the policies that we make effective? And also, policies always come with trade-offs. With the question of effectiveness, there is always a need to understand what actually works, as opposed to what looks good on paper. With the question of trade-offs, it’s about understanding what we lose as a result of whatever safety aspects it is that we choose to put in place. And whether we can minimize them, can we mitigate them? Now, in areas where safety is the objective, we can’t just go with gut. We can’t just go with speculation. You take, for example, in my previous life, I was working on promoting Singapore’s Air Hub.

And we had to deal with a question of aviation safety. We were expanding our airport. It was going to carry many more passengers in and out of the country. But we are limited by the number of runways. And in land-scarce Singapore, you can’t just click your finger and say, let’s build a new one. It’s a long runway. It’s very expensive anyway. Then there is the question of what do you do when you have these jumbo jets like A380s? Because each time an A380 hits the runway, it creates so much of a blast that you really need to create more distance between the A380 taking off and the next aircraft that is scheduled to take off.

Now, this is not a question that the transport minister can just decide on a whim. The air traffic management has to decide on its policy of how much distance is considered safe between landings or rather between takeoffs. And to answer this question, you really need to invest in the research. You need to invest in understanding the tests. So the science is one part of it. But between science to policy, you are actually going to need a lot of time. You are going to need a lot of tests. You are going to need a lot of simulations. you need to understand whether the distances that you decide are safe works well in a thunderstorm, a tropical thunderstorm.

Does it work just as well in a snowstorm? Well, we don’t have snow in Singapore. But you think about the airline that operates this. If each country that they fly into has a different safety distance, that creates some difficulty. So we therefore think that not only is there a need to invest in understanding the science, not only is there a need in understanding what testing looks like, what good testing looks like, there is also a need for us to think about what standards that will eventually be interoperable, what do they look like, which is why we think that international efforts, the collaboration that… that is being carried forward by the OECD through the Global Partnership on AI, the AI Safety Connect effort, and also ICI.

Where is Stuart now? Those kinds of efforts, you can’t do away without. At the outset, there is likely to be a bit of a fragmentation. And the trade-off with not having these conversations is that we are not even going to make advances in AI safety. And I don’t think that that’s a very good place for us to be in. It doesn’t give us the assurance that we can deliver to our citizens. And it does not create a foundation of trust that will eventually help us to push ahead with the use of this technology on a wider scale. So that’s how we are thinking about it, Eileen. Thank you.

Eileen Donahoe

So let me turn to Minister Gobind from Malaysia, and note that under your leadership and Malaysia’s 2025 ASEAN chairmanship, Malaysia succeeded in placing AI at the center of ASEAN’s agenda by establishing the ASEAN AI Safety Network. Malaysia is now finalizing its own AI National Action Plan, and Malaysia’s AI Governance Bill is expected in Parliament in 2026. So this dual-track approach of building national capacity while leading regional coordination represents a model of middle power agency that other countries are watching closely. So what lessons do you think other middle powers can draw from Malaysia’s experience? And on the ASEAN AI Safety Network, we have to note that operationalizing it will require sustained political will, technical capacity and resources.

So what concrete steps must ASEAN take in the next 12 to 18 months to ensure that this isn’t just aspirational?

Gobind Singh Deo

Online fraud, for example, scams, you have deepfakes today, you have huge concerns about certain vulnerable groups that are going to be impacted, children, older folk and so on and so forth. So this is something that stretches across the region. How do we deal with it in a coordinated way and ensure that the conversation doesn’t just stop with the government of the day, but it’s a conversation that expands over a period of time with clear policies that we can actually execute. The second layer that I think we need to think about is in the event there’s a need for execution. When we speak about risks in AI and we speak about how we’re going to govern these risks, we often talk about standards.

We often talk about regulation. We even speak about legislation at times for areas that pose higher risks. But ultimately, it really comes back down to you making sure you have an agency that can enforce it, because you can have the best standards, regulations and legislation, but if there is no institution that’s really able to implement those standards, to ensure that they are properly implemented and also to ensure that rules for failure to implement are enforced, then those standards, regulations and policies are really going to be just strong on paper, but they’re not going to really have that impact that you need. So again, how do you build this mechanism across ASEAN, where every country strengthens themselves domestically first and then moves across to the ASEAN member states and hopes to learn from their experiences, so that we can together move ahead in this new world of AI and, I think, the threats that we anticipate in future.

Now the third part which is really important is also ensuring that whilst this goes on, you create those policies, you have institutions that enforce and the discussions persist at an ASEAN level. I think what is important is also to have that expertise looking at what comes next. We must make sure that our countries are prepared for the risks that are to come with the next generation technology. This is important because you don’t want a situation where new technology is adopted and there are risks that come with this new technology, you’re not prepared. I think that’s something we want to avoid and that’s the reason why I come back to where I started off. We really need to look at building institutions that have the expertise and of course are able to sustain as we go along and to build and deliver something that’s impactful.

Sorry, but that’s in short what we’re doing in Malaysia today.

Eileen Donahoe

Excellent. Thank you so much. Okay. Let me turn to Vice President Kim and talk about the World Bank, which has been at the forefront of digital public infrastructure, helping countries leapfrog legacy systems. We note that frontier AI systems, though, are arriving in the Global South under very different conditions from previous waves of technology, and governments are under pressure to deploy AI systems quickly, often using models that haven’t been adequately tested, let alone certified for their context, languages, or risk tolerances. So how can the World Bank help Global South countries move from being passive recipients of frontier AI to active shapers of safety and reliability requirements before the systems are deployed at scale?

Sangbu Kim

Thank you. In one word, definitely we need to make our clients well prepared from scratch. When they design the AI systems, definitely they need to design the safety architecture within the system. That’s, in general, very correct. But the real challenge is that… nobody can really expect a new type of threat, especially in some countries with low capacity; it is really hard to figure out what that will be. So, in order to tackle that type of irony and dilemma, we need to work very closely with very developed economies, companies and governments, and very high-end examples, so that we can really well connect those good examples to the developing world. So partnership is one of the good examples. We are helping our countries; for example, some big tech companies are running red teams, so they are trying very hard to attack their own system in advance by fully utilizing AI.

So through that type of practice and experiment, they can learn how to prevent the AI attack in the future, which is pretty much possible. So in this way, it is inevitable for our developing countries to keep track of the new trends and new innovation, even in this safety protection area. It is the only way. So I have to admit this constraint. But think about this. Some anecdotal story in East Asia, in China and in Korea: there’s a merchant who is selling two products. Number one is a spear. And then they keep saying that this spear is so strong that it can get through any kind of shield. So this is one vendor. The other vendor is selling a shield.

And then they are saying that this shield is one of the most safe and strong shields. No spear can get through this shield. This is exactly an ironical situation. If you think about AI, the AI attack is the spear. AI is so strong and smart and really capable, so it can get through and hack any system with high-end intelligence and knowledge. But the good news is that, on the other hand, we also can build strong protective systems by fully utilizing AI. So this is one good news, but the constraint is that we do not clearly know how AI can really evolve to fully protect against those big attacks in the future. So in order to solve this type of ironical situation, from the developing world point of view and from the World Bank point of view, this is the only way: to very closely work and collaborate and learn from the advanced technology and advanced companies and advanced countries.

Eileen Donahoe

Thank you so much. Last but not least, Mr. Jan Tallinn, you occupy a very rare position in this landscape as a founding engineer of Skype, an early investor in DeepMind and Anthropic, and you’re also the co-founder of the Future of Life, which last October released a statement on superintelligence, calling for a prohibition on superintelligence development until two conditions are met. Number one, broad scientific consensus that it can be done safely and controllably, and second, strong public buy-in. Let’s just ask the hard question. What would an effective prohibition look like in practice? How could that work?

Jann Tallinn

Thank you very much. So I think I’m kind of like a little bit different from the people on this panel. And that too, I guess. That I’m kind of, my main kind of threat vector about, my main worries about the future are less about like how AI is being deployed and diffused and taken into practice. And I’m way more worried about what is happening in the labs, in the top AI companies. I’m not sure what the future is going to look like, because they are now in a cutthroat race to build something that is smarter than they are. They are in a cutthroat race to build superintelligence. And, like, I mean, we just saw yesterday the picture, the photo of it, where Narendra Modi, Dario Amadei, and Sam Altman refused to link hands.

I mean, this is, like, indicative. We also saw both Dario and Demis Hassabis call for a slowdown in Davos last month. They just can’t do it alone. And I think there are, like, two reasons why it’s, like, an unfortunate situation. One is that the U.S. as a country is conflicted. They basically rely on AI for their economic and competitive power. So they are, like, very hesitant to, kind of, meddle with the now cutthroat situation in AI companies, and the rest of the world really doesn’t understand how big a danger they are now. So part of the reason why we did the superintelligence statement is to create awareness that there is increasing political demand to do something about this situation.

We now have more than 130,000 signatures, which is like many times more than our original six-month pause letter had in 2023. So yeah, that’s like, if there was enough pressure, I think clearly like the rest of the world is still kind of more powerful than the kind of leading AI countries. There are more people, there’s more economic power, etc. So if there was like enough pressure, this could be solved. Like the way I put it is that it’s super hard to do like a $10 billion project; it’s impossible to do it if it’s illegal. So having these trillions flow into AI actually makes it easier to govern than harder.

Eileen Donahoe

So I’m tempted to follow up with a question about investors and their potential role in this. They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation. So what would it take to bring investors meaningfully into the safety conversation?

Jann Tallinn

So, yeah, I think the answer is kind of simple. I don’t think investors play much of a role anymore because the leading AI companies now are kind of above the level where private investors can influence them. They will now IPO soon. And if you are, like, in an IPO market, there is, like, a level playing field, which means that, like, if somebody’s not funding, somebody else will. So I don’t think investors… investors could have affected things, but, like, five, 10 years ago.

Eileen Donahoe

Great. Okay, so since we’re running short on time, I’m going to ask one question, and ask you all to answer it, which is about the 12-month window. Oh, and very shortly, each of you, shortly. Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months, before frontier AI capabilities advance beyond our ability to evaluate and govern them. So what would you recommend be prioritized between now and, basically, the next year to two years, each of you, to enhance safety and security?

Josephine Teo

I think there are two, really. I think the AI safety research priorities need to be refreshed because the field has moved so quickly. The Singapore consensus identified a set, but as soon as they are published, we recognize that they will be out of date. So we need to refresh it. That’s why we’re going to have the second edition, you know, worked on. Hopefully in a few months. The second thing I think is that we can’t just keep thinking about frameworks, you know, and guidelines. At some point, we need to be able to introduce better testing tools. And until we are able to do so, the companies that are developing and deploying AI models, they also don’t have a very practical way of giving assurance.

So I’d like to see in the next 12 months some further advancements in those two areas.

Mathias Cormann

I’ll be really quick. I know there’s always a temptation in these sorts of conversations: what is the one thing that can sort of fix it all? And the truth is there’s not one thing. We’ve got to go as fast as we can to play catch-up to a degree, but we’ve also got to go as comprehensive and as deep as we can. There’s just no alternative; there’s catch-up to be played. We’ve got to put a real effort, and it’s got to be right across the board. And I don’t think that you can just say there’s the one thing that will make us all safe and it’s going to be okay.

Eileen Donahoe

Minister Gobind?

Gobind Singh Deo

I think, as I said earlier, we need to start thinking how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance. In this regard, we have to understand that things are going to move very quickly, and you’re going to see new technology develop very fast, which brings new risks as well. So in that regard, you’ve got to build something that’s sustainable, and I think in order to do that, institutionalizing it should be a priority.

Sangbu Kim

Everyone is really rushing for AI system development, AI solution development. That means AI safety measures are currently under-invested. So I really would like to urge all of us to think about this: it is not free, you know. We need to spend some money to protect the system in advance, from scratch, when you design the system. So that means we should allocate some money to fully invest in the…

Eileen Donahoe

Jann Tallinn?

Jann Tallinn

So, slow down. We really need to slow down; the companies are asking for it. And, like, instrumental to that would be basically transparency: more people should know what the leaders of AI companies know, in order to basically understand how crucial the slowdown now is.

Eileen Donahoe

Okay, great. Well, I believe we have a little bit of a close coming, and thank you all so much. I wish we had had a day to talk about all of these issues. But thank you so much. Thank you very much.

Nicolas Miailhe

Thank you very much, Eileen, and this fantastic panel, excellencies, colleagues, friends. What we’ve heard today confirms something important. The coordination gap in frontier AI safety is real, and it is urgent. And as we’ve discussed today, it is closable. And before I hand over the floor to Osama Manzar to close off for a few minutes of remarks and reflection, I’d like to invite you all to the next edition at the United Nations General Assembly in New York, where we hope to organize the fourth edition of AI Safety Connect, and hopefully with many of the great policymakers and leaders we have heard from today, to carry forward that collective effort. Osama, the floor is yours.

Osama Manzar

Well, thank you very much. And we are one of those absentee co-organizers in this one. So, you know, because being a local, but I just want to, I mean, apart from thanking each one of you who didn’t get up and, you know, go out of the room. And every one of you who gave all the safety remarks before usage of AI, on behalf of 40 million people that we have reached out to in the last 23 years. And billions of the other people whom we are going to work for. I want to suggest that the entire safety aspect of AI should be more from “please save people from AI”. Right. Because that’s the safety, like it’s a car on the road.

You know, we have to save people before you teach people how to think. So we also have to keep a very, very strong thing: how do we save human intelligence from artificial intelligence? And how do we build in the safety guards and all the ethics and all the, you know, policy playbooks? Thank you very much. Thank you. Bye. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (19)
Factual NotesClaims verified against the Diplo knowledge base (4)
Confirmed (high)

“The race towards artificial intelligence is no longer a theoretical pursuit; billions and maybe trillions of dollars are being deployed to push the AI frontier, and safety is not keeping pace with it.”

The knowledge base states that the race toward AI is no longer theoretical, that billions-trillions of dollars are being invested, and that safety is lagging behind the rapid technological advance [S1].

Confirmed (high)

“The coordination gap in frontier AI safety is real, urgent, and can be closed.”

A stakeholder’s opening remarks explicitly note that the coordination gap in AI safety is real and urgent, echoing the panel’s assessment [S11].

Additional Context (medium)

“Artificial intelligence is advancing at a rapid pace.”

An open-forum primer describes AI as advancing rapidly, providing broader context for the claim about fast technological progress [S108].

Additional Context (low)

“Technological development in AI is not without risk.”

Discussion notes highlight that AI development carries risk, adding nuance to the safety concerns raised in the report [S96].

External Sources (109)
S1
Policymaker’s Guide to International AI Safety Coordination — -Gobind Singh Deo- Minister from Malaysia (leading Malaysia’s 2025 ASEAN chairmanship)
S2
Malaysia: Fake News Act — The newMalaysian Minister of Communications and Multimedia, Gobind Singh Deo, said on 21 May that the Fake News Act in M…
S3
S4
TALLINN MANUAL 1.0 INT — 2 Af fi liations during participation in the project. 978-1-107-17722-2 – Tallinn Manual 2.0 on the International Law A…
S5
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — -Mathias Cormann- Secretary General, OECD (Organisation for Economic Co-operation and Development) -Moderator- Role: Ev…
S7
S8
Driving U.S. Innovation in Artificial Intelligence — 13. Stuart Appelbaum – President, Retail Wholesale and Department Store Union 14. Stuart Ingis – Chairman, Venable 15. …
S9
S10
Acknowledgements — In addition to coordinating simultaneous attacks on a single target, such UAVs could disperse to find and attack a la…
S11
https://dig.watch/event/india-ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moder…
S12
Policymaker’s Guide to International AI Safety Coordination — – Nicolas Miailhe- Eileen Donahoe- Jann Tallinn- Josephine Teo – Nicolas Miailhe- Mathias Cormann- Stuart Russell- Jose…
S13
IGF 2023 Global Youth Summit — Nicolas Fiumarelli:Thank you, Lily. My name is Nicolas Fiumarelli. Hello everyone. Today I am here in place of Umut, who…
S14
Policymaker’s Guide to International AI Safety Coordination — And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moder…
S15
https://dig.watch/event/india-ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moder…
S16
The Declaration for the Future of the Internet: Principles to Action — A key figure tackling this connectivity challenge is Zeyna Bouharb, serving as head of international cooperation at Oger…
S17
Hack the Digital Divides | IGF 2023 Day 0 Event #19 — Moderator – Peter A. Bruck:Can I ask the technical support to see if we can put the slides in? Is that good? Hello, good…
S18
S19
WS #211 Disability & Data Protection for Digital Inclusion — Osama Manzar emphasizes focusing on abilities and involving persons with disabilities in service provision, while Maitre…
S20
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Josephine Teo- Role/title not specified (represents Singapore)
S22
S23
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment addresses the fundamental challenge that cybersecurity threats are global while responses are often nationa…
S24
Towards a Safer South Launching the Global South AI Safety Research Network — Dr. Balaraman Ravindran from IIT Madras raised important questions about coordination, noting that multiple AI safety ne…
S25
State of play of major global AI Governance processes — Its flexibility and adaptability are praised for bridging institutional, cultural, and regional practices. A cooperative…
S26
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S27
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022) — Arab Association of Cybersecurity: Honorable Chair, distinguished delegates, esteemed colleagues and stakeholders, it’s …
S28
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Capacity Building Initiatives Capacity building and support mechanisms are crucial for meaningful stakeholder engagemen…
S29
Closing plenary: multistakeholderism for the governance of the digital world — Min Jiang:Developing such working methods should strive to avoid conflicts with or duplication of existing processes or …
S30
Towards 2030 and Beyond: Accelerating the SDGs through Access to Evidence on What Works — The level of disagreement among the speakers was minimal. This high level of agreement implies a strong consensus on the…
S31
Multistakeholder Model – Driver for Global Services and SDGs | IGF 2023 Open Forum #89 — At the heart of ICANN’s work lies the multi-stakeholder model, which shapes policies and manages unique identifiers. Thi…
S32
Building a Global Partnership for Responsible Cyber Behavior | IGF 2023 Launch / Award Event #69 — Eugene EG Tan:the misuse of those kinds of technologies? Thank you. It’s a great question, and there’s probably a very l…
S33
Advancing Scientific AI with Safety Ethics and Responsibility — Artificial intelligence | Building confidence and security in the use of ICTs | Monitoring and measurement Open source …
S34
Safe and Responsible AI at Scale Practical Pathways — “Deep work on working on fragmented data silos.”[5]. “It can be bridged but we have to think about how to make data inte…
S35
AI Meets Cybersecurity Trust Governance & Global Security — I mean, one of the most sacred things for us right now is to maintain public trust in our institutions. It’s a little ch…
S36
Building Trust through Transparency — Additionally, the speakers mention that in case of fraud or data leakage on the merchant’s end, the liability also falls…
S37
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Eileen Donahoe echoed this sentiment, advocating for universal safeguards to protect human rights in DPGs and DPI. This …
S38
Knowledge Café: WSIS+20 Consultation: Strenghtening Multistakeholderism — Both speakers recognize that current governance processes are fragmented and overly complex, requiring better coordinati…
S39
Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All — Eileen Donahoe:It’s difficult. So many good questions and so many layers to them. I will start with the two points by ac…
S40
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S41
Hard power of AI — The analysis comprises multiple arguments related to technology, politics, and AI. One argument suggests that the rapid …
S42
Ethics and AI | Part 1 — Once brought to commercial existence, digital technologies raise multiple safety and security issues, which could have b…
S43
What is it about AI that we need to regulate? — For indigenous communities, the challenge is even more acute. InOpen Forum #73 Indigenous Peoples Languages in a Digital…
S44
Why science metters in global AI governance — I should just add that on this score, it will be much better if we can cooperate internationally to develop sound approa…
S45
Smart Regulation Rightsizing Governance for the AI Revolution — This comment is deeply insightful because it cuts through the optimistic summit rhetoric to present a stark geopolitical…
S46
WS #103 Aligning strategies, protecting critical infrastructure — Several international initiatives and tools were mentioned:
S47
Roundtable — A focus on infrastructure that has an immediate impact on human life, such as transportation, power supply, healthcare, …
S48
Opening of the session — Capacity building is essential for political and institutional resource development.
S49
Building Capacity in Cyber Security — 3. Strengthening institutional capabilities: Building capacity in cybersecurity involves equipping institutions such as …
S50
WSIS Action Line C7: E-Agriculture — Development | Capacity development | Legal and regulatory Since IFAD works through public sector investments to governm…
S51
Indias AI Leap Policy to Practice with AIP2 — “they are deliberately delayed because there are some private sector actors that don’t want these standards to be there …
S52
AI leaders call for a global pause in superintelligence development — More than 850 public figures, including leading computer scientists Geoffrey Hinton and Yoshua Bengio,have signeda joint…
S53
Artificial Intelligence & Emerging Tech — Certain principles, like “human in the loop,” can have different interpretations at different stages of AI deployment. A…
S54
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S55
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-s…
S56
GOVERNING AI FOR HUMANITY — – 120 Supported by the proposed AI office, the standards exchange would also benefit from strong ties to the internation…
S57
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Matilda Road:Mathilda, over to you. Thank you, Florian. Good morning everyone. It’s great to see so many of you here and…
S58
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And so when you think about the kind of infrastructural needs, it’s so it creates barriers for a lot of countries in the…
S59
Press Conference: Closing the AI Access Gap — Countries need robust data strategies that include sharing frameworks and data protection measures. These strategies are…
S60
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S61
What Proliferation of Artificial Intelligence Means for Information Integrity? — Specifically mentioned ‘transparency for frontier models’, ‘trust and safety, an investment in trust and safety, especia…
S62
Policymaker’s Guide to International AI Safety Coordination — But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this …
S63
Towards a Safer South Launching the Global South AI Safety Research Network — Dr. Balaraman Ravindran from IIT Madras raised important questions about coordination, noting that multiple AI safety ne…
S64
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment addresses the fundamental challenge that cybersecurity threats are global while responses are often nationa…
S65
Advancing Scientific AI with Safety Ethics and Responsibility — High level of consensus with significant implications for AI governance policy. The agreement across speakers from diffe…
S66
The mismatch between public fear of AI and its measured impact — Inmedicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S67
Day 0 Event #255 Update Required Fixing Tech Sectors Role in Conflict — Companies unwilling to engage beyond policy references; governments taking less responsibility leaving burden on investo…
S68
About the Commission — However, consistency and predictability in each and every aspect of the environment – be they political, economic, finan…
S69
Blended Finance’s Broken Promise and How to Fix It / Davos 2025 — Leila Fourie points out that the perception of risk in emerging markets is a significant barrier to investment. This per…
S70
PrefACe — The National Broadband Plan recognizes that making the right policy choices at home that result in domestic market succe…
S71
India unveils AI incident reporting guidelines for critical infrastructure — India isdevelopingAI incident reporting guidelines for companies, developers, and public institutions to report AI-relat…
S72
OPENING SESSION | IGF 2023 — Ulrik Vestergaard Knudsen:Thank you very much. It seems I have the opposite challenge compared to the previous speaker, …
S73
AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409 — In conclusion, the discussions surrounding AI and emerging technologies in warfare highlight the potential benefits and …
S74
The Dawn of Artificial General Intelligence? / DAVOS 2025 — Yoshua Bengio advocates for substantial investment in AI safety research alongside the development of AI capabilities. H…
S75
AI and international peace and security: Key issues and relevance for Geneva — Regional Cooperation Mechanisms: Building regional cooperation mechanisms can significantly enhance the governance of AI…
S76
Laying the foundations for AI governance — This discussion revealed both the substantial challenges in translating AI governance principles into practice and the s…
S77
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S78
Ethics and AI | Part 1 — Once brought to commercial existence, digital technologies raise multiple safety and security issues, which could have b…
S79
WS #64 Designing Digital Future for Cyber Peace & Global Prosperity — Rapid pace of technological change outpacing policy frameworks
S80
Hard power of AI — The analysis comprises multiple arguments related to technology, politics, and AI. One argument suggests that the rapid …
S81
Global AI Governance: Reimagining IGF’s Role & Impact — Ivana Bartoletti: Thank you very much and so sorry for not being able to be physically with you. So I think I wanted to …
S82
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Tomiwa Ilori:Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiat…
S83
Asia’s middle powers could shape AI governance framework — The European Union, China, and the United States may set benchmarks for AI governance. Still, Asia’s middle powers have …
S84
https://dig.watch/event/india-ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — While much of the discourse on frontier AI safety has focused on AI superpowers, there’s an urgent need for deeper inter…
S85
Closure of the session — Intersessional technical meetings and working groups should focus on critical infrastructure, incident response, and int…
S86
Future of International Cyber Diplomacy: Comprehensive Discussion Report — Practical tools for incident response and cooperation still need development
S87
Opening of the session — Chair: Thank you very much, Ms. Nakamitsu, for your very detailed and comprehensive overview of the work that we have…
S88
Opening of the session — Capacity building is essential for political and institutional resource development.
S89
HIGH LEVEL LEADERS SESSION I — Institutions should have the capacity for enforcement to ensure adherence to any rules that are set in place
S90
Media Hub — Need law enforcement, judiciary, court system, judges to understand cyber space and offenses, lawyers to be trained, pol…
S91
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — International Development Law Organization: Mr. President, Excellencies, it is a pleasure to participate in the summit…
S92
How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums | IGF 2023 Open Forum #96 — Institutional capacity building is vital for civil societies. By strengthening their institutional structures, civil soc…
S93
AI leaders call for a global pause in superintelligence development — More than 850 public figures, including leading computer scientists Geoffrey Hinton and Yoshua Bengio,have signeda joint…
S94
Indias AI Leap Policy to Practice with AIP2 — He points out that some private‑sector actors deliberately slow standards development, and calls for mechanisms that imp…
S95
DeepSeek AI shake-up affects Bitcoin and tech stocks — Bitcoin experienced a 6% drop on 27 January, as stock markets reacted to the debut of China’s open-source AI model, Deep…
S96
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S97
9821st meeting — Mr. President, as the Secretary General has noted, artificial intelligence represents both the greatest opportunity, and…
S98
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S99
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Audience:Thank you. Thank you so much. I represent you from Chinese mission. We appreciate Her Excellency, Ambassador Es…
S100
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Ahmad Bhinder: Hello. Good afternoon, everybody. I see a lot of faces from all around the world, and it is really, re…
S101
AI in practice across the UN system: UN 2.0 AI Expo — TheUN 2.0 Data & Digital Community AI Expoexamined how AI is currently embedded within the operational, analytical and i…
S102
The Commonwealth AI Consortium will gather in New York to develop the AI action plan — The Commonwealth Artificial Intelligence Consortium (CAIC)members will meet during the UN General Assembly in New York t…
S103
Artificial intelligence (AI) – UN Security Council — Additionally, the development of AI systems should involve collaboration with local communities to better understand cul…
S104
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S105
Introduction — | Term | EU definition …
S106
AI safeguards prove hard to define — Policymakers seeking to regulate AI face an uphill battle as the science evolves faster than safeguards can be devised.E…
S107
Comprehensive Report: Cyber Fraud and Human Trafficking – A Global Crisis Requiring Multilateral Response — Speed of response and enforcement capabilities The Minister emphasizes that governments must act together due to the tr…
S108
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S109
Meta joins the tech giants’ race for AGI — Meta, the parent company of Facebook, has entered the race for Artificial General Intelligence (AGI).Meta CEO Mark Zucke…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Stuart Russell
1 argument · 119 words per minute · 250 words · 125 seconds
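Each speaker card pairs a word count with a speaking duration, and the words-per-minute figure follows directly from those two numbers. A minimal sketch of that arithmetic is shown below; the helper function and the assumption that word counts and durations are read straight from the session transcript are illustrative only, not a description of the report's actual tooling.

```python
# Minimal sketch (assumed helper, not the report's tooling): derive the
# per-speaker figures shown in each card from a word count and a speaking
# duration in seconds.

def speaker_metrics(word_count: int, seconds: float) -> dict:
    """Return words, seconds, and words-per-minute for one speaker."""
    minutes = seconds / 60
    return {
        "words": word_count,
        "seconds": seconds,
        "words_per_minute": round(word_count / minutes) if minutes else 0,
    }

# Stuart Russell's card lists 250 words over 125 seconds, i.e. roughly
# 120 words per minute (the card shows 119, presumably computed from an
# unrounded duration).
print(speaker_metrics(250, 125))
```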
Argument 1
AI safety requires worldwide coordination because harms cross borders (Stuart Russell)
EXPLANATION
Russell emphasizes that AI‑related harms such as psychological damage or loss of human control are not confined to any single country, making international coordination essential to prevent or mitigate these risks.
EVIDENCE
He stated that the harms, whether psychological damage to the next generation or loss of human control altogether, cross borders, and that we must coordinate to make sure they don't happen or originate anywhere [44-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Policymaker’s Guide stresses that AI-related harms cross borders and calls for global coordination to prevent them [S1].
MAJOR DISCUSSION POINT
Need for global coordination on AI safety
AGREED WITH
Nicolas Miailhe, Mathias Cormann, Eileen Donahoe
Nicolas Miailhe
2 arguments · 149 words per minute · 812 words · 325 seconds
Argument 1
AI Safety Connect convenes regular global summits and UN sessions to shape a unified safety agenda (Nicolas Miailhe)
EXPLANATION
Miailhe describes AI Safety Connect’s model of convening at AI summits worldwide and at the UN General Assembly, with a six‑month cadence, to accelerate safety discussions and build capacity.
EVIDENCE
He explained that they convene at each AI summit, started in Paris, then India, next Switzerland, also at the UN General Assembly, and hold global convenings every six months to speed up safety discussions [11-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The guide notes that AI Safety Connect convenes at each AI summit and holds semi-annual global convenings to accelerate safety discussions [S1].
MAJOR DISCUSSION POINT
Regular global convenings for AI safety
Argument 2
Capacity‑building and trust‑building exercises are vital for preparing stakeholders (Nicolas Miailhe)
EXPLANATION
Miailhe notes that beyond public events, AI Safety Connect conducts behind‑closed‑door capacity‑building and trust‑building activities to ready stakeholders for AI safety challenges.
EVIDENCE
He mentioned that they also do capacity building and trust building exercises at times behind closed doors during the intensive week in New Delhi [15-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building and trust-building are highlighted as essential for stakeholder readiness in the capacity-building initiatives report [S28].
MAJOR DISCUSSION POINT
Importance of capacity and trust building
AGREED WITH
Josephine Teo, Sangbu Kim, Mathias Cormann
Mathias Cormann
5 arguments · 145 words per minute · 864 words · 356 seconds
Argument 1
Building consensus through inclusive, evidence‑based processes is key to effective governance (Mathias Cormann)
EXPLANATION
Cormann argues that trust is earned by including all relevant actors—governments, industry, civil society, and technical experts—and grounding decisions in objective evidence.
EVIDENCE
He said trust is built through inclusion and on the basis of objective evidence, and that bringing together all relevant actors is what we need to do [77-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Policymaker’s Guide echoes this, stating that trust is built through inclusion of all relevant actors and reliance on objective evidence [S1].
MAJOR DISCUSSION POINT
Inclusive, evidence‑based consensus building
AGREED WITH
Nicolas Miailhe, Gobind Singh Deo
Argument 2
Trust is built through inclusion of governments, industry, civil society, and technical experts (Mathias Cormann)
EXPLANATION
He reiterates that a shared interest in trustworthy systems requires the participation of diverse stakeholders, each bringing distinct perspectives and imperatives.
EVIDENCE
He highlighted that bringing together governments, companies, civil society, and technical experts is essential for building trust and ensuring systems are trustworthy [77-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusion of governments, industry, civil society and technical experts as a trust-building mechanism is affirmed in the guide’s discussion of inclusive, evidence-based governance [S1].
MAJOR DISCUSSION POINT
Stakeholder inclusion for trust
AGREED WITH
Nicolas Miailhe, Gobind Singh Deo
Argument 3
Coordinated transparency and incident reporting are critical; an international incident response centre should be pursued (Mathias Cormann)
EXPLANATION
Cormann identifies coordinated transparency and incident reporting as the most critical frontier‑AI safety infrastructure, proposing a global incident response centre to share failure data without penalising reporters.
EVIDENCE
He described coordinated transparency and incident reporting as the most critical piece, referenced the Hiroshima Code of Conduct reporting framework, noted 25 organizations have submitted reports, and outlined the GPI Common Framework for Incident Reporting that could evolve into an international incident response centre [91-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The OECD common framework for incident reporting and the guide’s emphasis on coordinated transparency support the need for a global incident response centre [S5][S1].
MAJOR DISCUSSION POINT
Need for global incident reporting infrastructure
AGREED WITH
Eileen Donahoe
DISAGREED WITH
Gobind Singh Deo
Argument 4
Open‑source safety tools and metrics are needed to make trustworthy AI practical (Mathias Cormann)
EXPLANATION
He points out that the OECD has launched an open call for open‑source safety and evaluation tools, which will be catalogued to help implement trustworthy AI in practice.
EVIDENCE
He noted that the OECD recently launched an open call for open source safety and evaluation tools hosted in the OECD.ai catalog of tools and metrics to make trustworthy AI easier to implement [98-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source safety and evaluation tools are identified as crucial for practical trustworthy AI in the open-source tools report [S33].
MAJOR DISCUSSION POINT
Open‑source tools for practical AI safety
AGREED WITH
Nicolas Miailhe, Josephine Teo, Sangbu Kim
Argument 5
Periodic pauses for testing, auditing, and monitoring are necessary to maintain public trust (Mathias Cormann)
EXPLANATION
Cormann suggests that occasional slow‑downs—pausing to test, monitor, audit, and share information—are essential to build confidence that AI systems respect fundamental rights and earn public trust.
EVIDENCE
He said that occasionally we should pause, test, monitor, audit, share information, and invest in building confidence that these systems can work as intended and respect fundamental rights [84-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Maintaining public trust through testing, auditing and monitoring is discussed in the public-trust governance commentary [S35].
MAJOR DISCUSSION POINT
Pausing for testing to sustain trust
AGREED WITH
Jann Tallinn
Eileen Donahoe
2 arguments · 122 words per minute · 1101 words · 539 seconds
Argument 1
Current governance is fragmented; policymakers must close gaps and create binding incentives (Eileen Donahoe)
EXPLANATION
Donahoe describes a governance landscape that is unharmonised, fragmented across jurisdictions, and lacking binding incentives for developers and investors, which hampers effective AI risk management.
EVIDENCE
She explained that the technology is advancing rapidly with minimal guardrails, while risk-management processes are ill-adapted, fragmented across jurisdictions, or insufficiently binding, resulting in an unharmonized governance landscape that fails to shape incentives [56-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fragmented AI governance and the need for harmonisation and binding incentives are highlighted in the multistakeholder coordination analysis [S38][S39].
MAJOR DISCUSSION POINT
Fragmented AI governance needs binding incentives
AGREED WITH
Mathias Cormann
Argument 2
Middle powers can leverage pooled resources and normative influence to steer AI safety (Eileen Donahoe)
EXPLANATION
Donahoe argues that middle powers, through pooled resources, market leverage, normative influence, and regulatory innovation, can shape global AI practices and safety outcomes more effectively than previously thought.
EVIDENCE
She stated that middle powers can, through pooled resources, market leverage, normative influence, and regulatory innovation, shape the direction of global AI practices and safety, and that leading from the middle may be a more powerful approach [62-64].
MAJOR DISCUSSION POINT
Role of middle powers in AI safety
AGREED WITH
Gobind Singh Deo, Josephine Teo
Gobind Singh Deo
3 arguments · 174 words per minute · 535 words · 183 seconds
Argument 1
ASEAN AI Safety Network exemplifies regional coordination to align standards (Gobind Singh Deo)
EXPLANATION
Gobind highlights the ASEAN AI Safety Network as a regional mechanism that places AI at the centre of ASEAN’s agenda, aligning standards and fostering cooperation among member states.
EVIDENCE
He noted that under Malaysia’s leadership, ASEAN placed AI at the centre of its agenda by establishing the ASEAN AI Safety Network, representing a model of regional coordination [152-155].
MAJOR DISCUSSION POINT
Regional coordination via ASEAN AI Safety Network
AGREED WITH
Eileen Donahoe, Josephine Teo
Argument 2
Malaysia’s dual‑track national plan and regional network offers a model for other middle powers (Gobind Singh Deo)
EXPLANATION
Gobind describes Malaysia’s approach of simultaneously building national AI capacity (AI National Action Plan, AI Governance Bill) while leading regional coordination through the ASEAN AI Safety Network, offering a replicable model.
EVIDENCE
He explained that Malaysia, as ASEAN chair, placed AI at the centre of the agenda, is finalising its AI National Action Plan, and expects an AI Governance Bill in 2026, illustrating a dual-track approach of national capacity building and regional coordination [152-156].
MAJOR DISCUSSION POINT
Dual‑track national and regional AI strategy
Argument 3
Enforcement agencies and institutional capacity are essential for implementing standards across ASEAN (Gobind Singh Deo)
EXPLANATION
Gobind stresses that without dedicated agencies to enforce standards, regulations, and legislation, AI governance will remain merely paper‑based and ineffective across ASEAN.
EVIDENCE
He argued that standards, regulation, and legislation require an agency capable of enforcement; otherwise they remain strong on paper but lack impact, and called for building mechanisms across ASEAN that strengthen institutional capacity [162-166].
MAJOR DISCUSSION POINT
Need for enforcement institutions in ASEAN
AGREED WITH
Mathias Cormann, Nicolas Miailhe
DISAGREED WITH
Mathias Cormann
Sangbu Kim
3 arguments · 112 words per minute · 525 words · 280 seconds
Argument 1
The World Bank can help Global South nations design safety‑by‑design AI systems (Sangbu Kim)
EXPLANATION
Kim suggests that the World Bank should assist developing countries by ensuring AI systems are designed with safety architecture from the outset, leveraging partnerships with advanced economies and tech firms.
EVIDENCE
He said the World Bank can help clients be well prepared from scratch, design safety architecture within AI systems, and work closely with advanced economies, companies, and high-end examples to transfer good practices to the developing world [176-179].
MAJOR DISCUSSION POINT
World Bank support for safety‑by‑design AI
Argument 2
Safety architecture must be embedded from the design stage, with dedicated investment in protection mechanisms (Sangbu Kim)
EXPLANATION
Kim emphasizes that safety must be built into AI at the design phase, requiring investment and collaboration with high‑end partners to develop red‑team practices and protective measures.
EVIDENCE
He noted the need to design safety architecture from the start, invest in protection, and collaborate with advanced economies and companies running red-team exercises to learn how to prevent AI attacks [178-182].
MAJOR DISCUSSION POINT
Embedding safety architecture early
AGREED WITH
Nicolas Miailhe, Josephine Teo, Mathias Cormann
Argument 3
The World Bank can partner with advanced economies to transfer safety best practices to developing countries (Sangbu Kim)
EXPLANATION
Kim reiterates that close collaboration with advanced economies and tech firms is essential for the World Bank to convey best‑practice safety solutions to low‑capacity nations.
EVIDENCE
He described the necessity of working closely with advanced economies, companies, and high-end examples to connect good practices to the developing world, highlighting partnership as the only way forward [180-184].
MAJOR DISCUSSION POINT
Partnerships for safety knowledge transfer
Josephine Teo
3 arguments · 143 words per minute · 889 words · 371 seconds
Argument 1
Singapore can bridge coordination gaps despite limited jurisdiction by translating science into effective policy (Josephine Teo)
EXPLANATION
Teo explains that although AI systems originate abroad, Singapore can influence safety by converting scientific insights into actionable policies, emphasizing research, testing, and standards.
EVIDENCE
She noted that smaller states rely on external technology, but Singapore can translate science into policy, citing the need to understand what works, trade-offs, and the importance of research, testing, simulations, and interoperable standards, illustrated with an aviation safety example [104-112][119-136].
MAJOR DISCUSSION POINT
Science‑to‑policy translation in Singapore
AGREED WITH
Eileen Donahoe, Gobind Singh Deo
Argument 2
Robust research, testing, and interoperable standards are required to turn scientific insights into policy (Josephine Teo)
EXPLANATION
Teo stresses that effective AI policy demands extensive research, rigorous testing, simulations across conditions, and the development of interoperable standards to ensure safety across jurisdictions.
EVIDENCE
She described the need for investment in research, testing, simulations (e.g., aviation runway distances under different weather), and the creation of interoperable standards, noting that without such work, safety cannot be assured [110-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of research, rigorous testing and interoperable standards for policy effectiveness is emphasized in the data-interoperability and evidence-based decision-making reports [S34][S30].
MAJOR DISCUSSION POINT
Research, testing, and standards for policy
AGREED WITH
Nicolas Miailhe, Sangbu Kim, Mathias Cormann
Argument 3
Singapore emphasizes policy effectiveness, trade‑off analysis, and international collaboration to protect its citizens (Josephine Teo)
EXPLANATION
Teo highlights that Singapore focuses on evaluating policy effectiveness, understanding trade‑offs, and collaborating internationally through bodies like the OECD and AI Safety Connect to safeguard its population.
EVIDENCE
She explained that policymakers must assess whether policies are effective and understand trade-offs, citing the need for research, testing, and international collaboration via the OECD, AI Safety Connect, and ICI as essential for building trust and safety [110-119][140-144].
MAJOR DISCUSSION POINT
Policy effectiveness and international cooperation
Jann Tallinn
4 arguments · 143 words per minute · 517 words · 216 seconds
Argument 1
Effective prohibition of superintelligence hinges on transparent disclosure of lab capabilities (Jann Tallinn)
EXPLANATION
Tallinn argues that any prohibition on superintelligent AI must be based on clear, public disclosure of what labs can achieve, ensuring scientific consensus and public buy‑in.
EVIDENCE
He referenced the Future of Life Institute statement calling for a prohibition until there is broad scientific consensus and strong public buy-in, emphasizing the need for transparency [203-204].
MAJOR DISCUSSION POINT
Transparency as basis for prohibition
Argument 2
Private investors now have limited sway over leading AI firms; market forces dominate (Jann Tallinn)
EXPLANATION
Tallinn observes that leading AI companies have grown beyond the influence of private investors, especially as they approach IPOs, reducing investors’ ability to affect safety decisions.
EVIDENCE
He stated that investors don’t play much of a role anymore because leading AI companies are above the level where private investors can influence them, and they will soon IPO [232-233].
MAJOR DISCUSSION POINT
Diminished investor influence
Argument 3
Massive funding streams can be harnessed to pressure companies toward safety if public demand is strong (Jann Tallinn)
EXPLANATION
Tallinn points out that large financial flows into AI can be leveraged to enforce safety, provided there is sufficient public pressure and signatures supporting regulation.
EVIDENCE
He noted that the Future of Life Institute statement has gathered over 130,000 signatures, indicating public pressure, and that the trillions flowing into AI actually make it easier to govern if there is enough demand [224-227].
MAJOR DISCUSSION POINT
Using funding pressure for safety
DISAGREED WITH
Mathias Cormann
Argument 4
Development of superintelligent AI should be halted until broad scientific consensus and strong public buy‑in are achieved (Jann Tallinn)
EXPLANATION
Tallinn reiterates the call for a moratorium on superintelligence development until the scientific community reaches consensus on safety and the public demonstrates strong support.
EVIDENCE
He restated the Future of Life Institute’s call for prohibition until there is broad scientific consensus and strong public buy-in, emphasizing the need for such conditions before proceeding [203-206].
MAJOR DISCUSSION POINT
Moratorium until consensus and buy‑in
DISAGREED WITH
Mathias Cormann
Osama Manzar
1 argument · 72 words per minute · 193 words · 159 seconds
Argument 1
AI safety must focus first on protecting people and preserving human intelligence before expanding AI capabilities (Osama Manzar)
EXPLANATION
Manzar stresses that the primary goal of AI safety is to safeguard human beings and human intelligence, likening it to protecting passengers before teaching them how to think, and calls for strong safety guards and ethical policies.
EVIDENCE
He argued that the entire safety aspect of AI should be about saving people before teaching them how to think, emphasizing the need to save human intelligence from artificial intelligence and embed safety guards, ethics, and policy playbooks [272-276].
MAJOR DISCUSSION POINT
Prioritizing human protection over AI advancement
Agreements
Agreement Points
AI safety requires worldwide coordination because harms cross borders
Speakers: Stuart Russell, Nicolas Miailhe, Mathias Cormann, Eileen Donahoe
AI safety requires worldwide coordination because harms cross borders (Stuart Russell)
All speakers stress that AI-related risks such as psychological damage or loss of human control are not confined to any single country and therefore demand global coordination. Russell explicitly notes the cross-border nature of harms and the need to coordinate [44-46]; Miailhe describes semi-annual global convenings at AI summits and the UN to accelerate safety discussions [11-15]; Cormann frames the governance challenge as requiring global coordination to ensure only safe systems are built [42-44]; Donahoe calls for deeper international diplomacy on extreme risks and highlights the role of middle powers in bridging gaps [61-64].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes the Policymaker’s Guide to International AI Safety Coordination, which stresses that AI harms cross borders and require global coordination [S62], and aligns with IGF discussions on the global nature of cybersecurity threats [S64] and the need for worldwide readiness [S60].
Inclusive, evidence‑based consensus building and stakeholder inclusion are essential for trustworthy AI
Speakers: Mathias Cormann, Nicolas Miailhe, Gobind Singh Deo
Building consensus through inclusive, evidence‑based processes is key to effective governance (Mathias Cormann) Trust is built through inclusion of governments, industry, civil society, and technical experts (Mathias Cormann) Capacity‑building and trust‑building exercises are vital for preparing stakeholders (Nicolas Miailhe) Enforcement agencies and institutional capacity are essential for implementing standards across ASEAN (Gobind Singh Deo)
Cormann argues that trust and effective governance arise from bringing together all relevant actors and grounding decisions in objective evidence. Miailhe adds that AI Safety Connect conducts capacity-building and trust-building activities behind closed doors. Gobind stresses that without agencies to enforce standards, consensus remains ineffective. Together they underline inclusion, evidence, and institutional capacity as pillars of trustworthy AI [77-80][84-86][15-16][162-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder, evidence-based consensus building is highlighted in the IGF 2023 report on evolving AI governance [S55] and reinforced by the high-level consensus on AI governance principles [S65]; standards bodies also stress inclusive processes [S56], and recent analyses note collaborative approaches as essential for trustworthy AI [S76].
Coordinated transparency and incident reporting, potentially via an international incident response centre, are critical infrastructure for frontier AI safety
Speakers: Mathias Cormann, Eileen Donahoe
Coordinated transparency and incident reporting are critical; an international incident response centre should be pursued (Mathias Cormann) Current governance is fragmented; policymakers must close gaps and create binding incentives (Eileen Donahoe)
Cormann identifies coordinated transparency and incident reporting as the most critical piece of frontier-AI safety infrastructure and proposes an international incident response centre to share failure data without penalising reporters. Donahoe highlights the fragmented, un-harmonised governance landscape and asks whether an incident response centre should be a priority, indicating shared concern for a coordinated reporting mechanism [91-96][73-76].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s AI incident reporting guidelines propose a centralized database for critical-infrastructure incidents, exemplifying coordinated transparency mechanisms [S71]; similar calls for transparency of frontier models and investment in trust-and-safety institutions appear in recent policy briefs [S61]; the AI Standards Hub advocates an international incident response centre as core infrastructure [S56][S57].
Middle powers and regional bodies can lead AI safety by pooling resources, normative influence and regional coordination
Speakers: Eileen Donahoe, Gobind Singh Deo, Josephine Teo
Middle powers can leverage pooled resources and normative influence to steer AI safety (Eileen Donahoe) ASEAN AI Safety Network exemplifies regional coordination to align standards (Gobind Singh Deo) Singapore can bridge coordination gaps despite limited jurisdiction by translating science into effective policy (Josephine Teo)
Donahoe argues that middle powers, through pooled resources and normative influence, can shape global AI practices. Gobind points to the ASEAN AI Safety Network as a concrete regional coordination model. Teo explains how Singapore, though a smaller state, can translate scientific insights into policy and work through international bodies to protect its citizens. All three emphasize the strategic role of non-superpower states in global AI governance [62-64][101-102][152-155][156-157][104-108][110-112].
POLICY CONTEXT (KNOWLEDGE BASE)
Regional coordination is advocated by the Global South AI Safety Research Network, which urges middle powers to pool resources and normative influence [S63]; the AI and International Peace and Security report highlights regional cooperation mechanisms for AI governance [S75]; discussions on assurance gaps stress the role of regional bodies in the Global South [S58]; IGF panels also note regional capacity building as a pathway for leadership [S55].
Capacity building, trust building, and investment in safety tools are necessary to prepare stakeholders for frontier AI
Speakers: Nicolas Miailhe, Josephine Teo, Sangbu Kim, Mathias Cormann
Capacity‑building and trust‑building exercises are vital for preparing stakeholders (Nicolas Miailhe) Robust research, testing, and interoperable standards are required to turn scientific insights into policy (Josephine Teo) Safety architecture must be embedded from the design stage, with dedicated investment in protection mechanisms (Sangbu Kim) Open‑source safety tools and metrics are needed to make trustworthy AI practical (Mathias Cormann)
Miailhe highlights ongoing capacity-building and trust-building activities. Teo stresses the need for extensive research, testing, simulations and interoperable standards to translate science into policy. Kim calls for safety-by-design and investment in protective mechanisms, while Cormann promotes open-source safety tools to operationalise trustworthy AI. Together they underscore a multi-layered approach of capacity development, investment and tooling [15-16][110-136][178-182][254-255][98-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity and trust building are repeatedly called for in IGF multi-stakeholder governance recommendations [S55]; the assurance-gap discussion emphasizes investment in safety tools for developing regions [S58]; policy briefs call for investment in trust-and-safety infrastructure for frontier AI [S61]; and broader analyses underline the need for capacity building to prepare stakeholders [S76].
Periodic pauses, testing and a slowdown of AI development are needed to ensure safety and public trust
Speakers: Mathias Cormann, Jann Tallinn
Periodic pauses for testing, auditing, and monitoring are necessary to maintain public trust (Mathias Cormann) Slow down, we really need to slow down (Jann Tallinn)
Cormann recommends occasional slow-downs to test, monitor, audit and share information, building confidence that systems respect fundamental rights. Tallinn echoes this by explicitly calling for a slowdown of AI development, especially superintelligence efforts. Both converge on the need to temper speed with safety checks [84-86][256-257].
Similar Viewpoints
Both emphasize that AI safety challenges are global and demand coordinated governance mechanisms, whether through broad coordination or specific incident‑reporting infrastructure [44-46][42-44][91-96].
Speakers: Stuart Russell, Mathias Cormann
AI safety requires worldwide coordination because harms cross borders (Stuart Russell) Coordinated transparency and incident reporting are critical; an international incident response centre should be pursued (Mathias Cormann)
Both see regional or middle‑power initiatives as essential pathways to achieve coordinated AI governance and to operationalise standards across jurisdictions [62-64][152-155][156-157].
Speakers: Eileen Donahoe, Gobind Singh Deo
Middle powers can leverage pooled resources and normative influence to steer AI safety (Eileen Donahoe) ASEAN AI Safety Network exemplifies regional coordination to align standards (Gobind Singh Deo)
Both stress that safety must be built into AI from the outset through rigorous research, testing and investment, and that policy must translate scientific evidence into actionable safeguards [104-108][110-112][178-182].
Speakers: Josephine Teo, Sangbu Kim
Singapore can bridge coordination gaps despite limited jurisdiction by translating science into effective policy (Josephine Teo) Safety architecture must be embedded from the design stage, with dedicated investment in protection mechanisms (Sangbu Kim)
Both agree that a deliberate slowdown of AI development, accompanied by testing and monitoring, is essential to safeguard public trust and prevent unsafe outcomes [84-86][256-257].
Speakers: Mathias Cormann, Jann Tallinn
Periodic pauses for testing, auditing, and monitoring are necessary to maintain public trust (Mathias Cormann) Slow down, we really need to slow down (Jann Tallinn)
Unexpected Consensus
Massive funding streams can be leveraged as a lever for AI safety
Speakers: Jann Tallinn, Sangbu Kim
Massive funding streams can be harnessed to pressure companies toward safety if public demand is strong (Jann Tallinn) Safety architecture must be embedded from the design stage, with dedicated investment in protection mechanisms (Sangbu Kim)
While Tallinn focuses on using the trillions flowing into AI as a pressure point for safety, Kim emphasizes the need for upfront investment in safety-by-design. Both converge on the insight that financial resources, whether through public pressure or direct investment, are pivotal levers for achieving AI safety, a linkage not explicitly drawn elsewhere in the discussion [224-227][254-255].
POLICY CONTEXT (KNOWLEDGE BASE)
Large-scale funding for AI safety is highlighted in reports on AI safety institutes leveraging substantial research investments [S54] and in statements by AI leaders such as Yoshua Bengio urging massive safety research funding [S74]; investor-focused analyses stress the importance of consistent, predictable policy environments for channeling finance [S68] and note challenges in blended finance for AI safety projects [S69].
Overall Assessment

There is strong consensus that AI safety is a global challenge requiring coordinated governance, inclusive evidence‑based consensus building, and robust capacity‑building. Middle powers and regional bodies are seen as pivotal actors, and concrete infrastructure such as incident‑reporting mechanisms and open‑source safety tools are widely endorsed. Participants also agree on the need for periodic slow‑downs, testing and investment in safety‑by‑design.

High consensus on the need for global coordination, inclusive governance, capacity building and investment; moderate consensus on specific mechanisms (incident response centre) and on the role of funding as a lever. This broad agreement provides a solid foundation for advancing coordinated policy initiatives and allocating resources toward practical safety tools and regional cooperation.

Differences
Different Viewpoints
Role of private investors in AI safety governance
Speakers: Eileen Donahoe, Jann Tallinn
What would it take to bring investors meaningfully into the safety conversation? (Eileen Donahoe) Investors don’t play much of a role anymore because the leading AI companies are above the level where private investors can influence them (Jann Tallinn)
Eileen asks how investors can be engaged to shape safety incentives, implying they could have a meaningful role [228-230]. Jann counters that investors now have little influence over leading AI firms, especially as they approach IPOs, suggesting they cannot be a lever for safety [232-233].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent commentary notes that companies often limit engagement beyond policy references, shifting responsibility to private investors and raising questions about their governance role [S67]; investor-focused literature stresses the need for consistent regulatory frameworks to enable effective investor participation [S68]; blended-finance discussions also highlight investor influence on AI safety initiatives [S69].
Preferred mechanism to slow or halt risky AI development
Speakers: Mathias Cormann, Jann Tallinn
Occasionally we should pause, test, monitor, audit, share information and invest in building confidence (Mathias Cormann) Development of superintelligent AI should be halted until broad scientific consensus and strong public buy‑in are achieved (Jann Tallinn) Massive funding streams can be harnessed to pressure companies toward safety if public demand is strong (Jann Tallinn)
Cormann advocates periodic pauses for testing and auditing as a pragmatic way to maintain trust [84-86]. Tallinn calls for a more decisive prohibition on superintelligence until consensus and public buy-in are reached, and argues that large funding can be used as leverage if there is sufficient public pressure [203-206][224-227]. The two propose different primary levers: operational pauses versus a moratorium tied to consensus.
How to ensure compliance with AI safety standards: voluntary reporting vs enforced institutions
Speakers: Mathias Cormann, Gobind Singh Deo
Coordinated transparency and incident reporting are critical; an international incident response centre should be pursued (Mathias Cormann) Enforcement agencies and institutional capacity are essential for implementing standards across ASEAN (Gobind Singh Deo)
Cormann emphasizes building a voluntary, transparent incident reporting framework and a future international response centre to share failures without penalising reporters [91-96]. Gobind stresses that without dedicated agencies to enforce standards, regulations remain paper-based and ineffective, calling for institutional mechanisms to ensure compliance [162-166]. The disagreement lies in reliance on voluntary transparency versus mandatory enforcement structures.
POLICY CONTEXT (KNOWLEDGE BASE)
India’s mandatory AI incident reporting guidelines illustrate a move toward enforced compliance mechanisms [S71]; policy analyses advocate investment in robust institutions to oversee safety standards [S61]; and standards bodies discuss the balance between voluntary reporting and formal enforcement in international frameworks [S56].
Unexpected Differences
Investor influence versus irrelevance
Speakers: Eileen Donahoe, Jann Tallinn
What would it take to bring investors meaningfully into the safety conversation? (Eileen Donahoe) Investors don’t play much of a role anymore because the leading AI companies are above the level where private investors can influence them (Jann Tallinn)
Eileen treats investors as a potentially powerful lever for safety governance, a view not commonly emphasized in high-level AI policy discussions. Tallinn’s dismissal of investor influence was unexpected, revealing a stark contrast in perceived stakeholder relevance [228-230][232-233].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on investor relevance note that while investors can leverage funding, inconsistent policy environments may render their influence marginal, as highlighted in analyses of corporate-government dynamics [S67] and investor consistency requirements [S68].
Philosophical framing of AI safety as protecting human intelligence
Speakers: Osama Manzar, Other panelists
The entire safety aspect of AI should be about saving people before you teach them how to think; we must save human intelligence from artificial intelligence (Osama Manzar) Other speakers focus on technical, regulatory, and coordination measures without invoking a fundamental protection of human intelligence
Manzar’s framing of AI safety as a moral imperative to protect human intelligence is a broader, more existential stance than the predominantly technical and policy-oriented perspectives of the other speakers, representing an unexpected divergence in the conceptualization of AI safety [272-276].
Overall Assessment

The panel largely concurs on the necessity of coordinated AI governance, but diverges on the mechanisms to achieve safety—ranging from voluntary transparency and incident reporting, to enforced institutional compliance, to periodic pauses, to outright prohibitions. A notable unexpected split concerns the perceived role of private investors, with one speaker viewing them as a potential lever and another dismissing their influence. These disagreements highlight the challenge of aligning diverse stakeholder perspectives into a coherent global safety strategy.

Moderate to high. While there is broad consensus on the goal of AI safety, the lack of agreement on concrete levers—investor engagement, enforcement versus voluntary reporting, and the preferred slowdown mechanism—suggests that achieving unified policy action will require substantial negotiation and compromise.

Partial Agreements
All speakers agree that coordinated governance—whether global, regional, or national—is essential to manage AI risks. However, they differ on the scale and mechanism: Russell calls for worldwide coordination; Gobind focuses on ASEAN regional mechanisms; Cormann stresses inclusive consensus building; Eileen highlights the need for binding incentives; Teo emphasizes science‑to‑policy translation within limited jurisdiction [44-46][77-84][56-60][152-155][104-112].
Speakers: Stuart Russell, Mathias Cormann, Eileen Donahoe, Gobind Singh Deo, Josephine Teo
AI safety requires worldwide coordination because harms cross borders (Stuart Russell) Building consensus through inclusive, evidence‑based processes is key (Mathias Cormann) Current governance is fragmented; policymakers must close gaps and create binding incentives (Eileen Donahoe) ASEAN AI Safety Network exemplifies regional coordination to align standards (Gobind Singh Deo) Singapore can bridge coordination gaps by translating science into effective policy (Josephine Teo)
Both agree that practical tools and design practices are needed to embed safety, but Cormann focuses on open‑source tool catalogs, while Kim emphasizes financial and partnership support to embed safety architecture from the design stage [98-99][178-182].
Speakers: Mathias Cormann, Sangbu Kim
Open‑source safety tools and metrics are needed to make trustworthy AI practical (Mathias Cormann) The World Bank can help Global South nations design safety‑by‑design AI systems (Sangbu Kim)
Takeaways
Key takeaways
AI safety risks are global and require coordinated international governance.
Current AI governance is fragmented; inclusive, evidence‑based consensus building is essential.
Middle powers and global‑majority states can leverage pooled resources and normative influence to shape safety standards.
Transparency, incident reporting, and a potential international incident‑response centre are critical infrastructure for frontier AI safety.
Open‑source safety tools, interoperable standards, and rigorous testing/simulation are needed to translate scientific insights into enforceable policy.
Institutional capacity and enforcement agencies are necessary for implementing standards, especially in regional bodies like ASEAN.
The World Bank can help Global South countries adopt safety‑by‑design practices through partnerships with advanced economies.
There is a call for periodic pauses or slow‑downs in AI development to allow testing, auditing, and public trust building.
Investor influence on leading AI firms is diminishing; public pressure and massive funding streams may be used to enforce safety commitments.
Protecting human beings and preserving human intelligence must be prioritized over rapid AI advancement.
Resolutions and action items
AI Safety Connect will continue its semi‑annual global convenings and publish the results of the closed‑door scientific dialogue.
The OECD will expand coordinated transparency and incident‑reporting mechanisms, building toward an international AI incident‑response centre.
The OECD AI Policy Observatory will continue to collect and share data on AI governance practices worldwide.
An open call for open‑source safety and evaluation tools will be maintained, with tools catalogued on the OECD.ai platform.
Singapore will refresh the AI safety research priorities (second edition of the Singapore Consensus) and advance practical testing tools within the next 12 months.
Malaysia will operationalise the ASEAN AI Safety Network, strengthen enforcement agencies, and finalize its AI National Action Plan and AI Governance Bill by 2026.
The World Bank will facilitate safety‑by‑design collaborations between developing‑country clients and advanced‑economy partners, including red‑team exercises.
Panelists agreed to prioritize institutionalising AI safety governance structures at national and regional levels within the next year.
A call was made to the United Nations General Assembly to host the fourth edition of AI Safety Connect in New York.
Unresolved issues
Specific design, funding, and legal framework for an international AI incident‑response centre remain undefined.
How to create binding incentives for AI developers and deployers across jurisdictions without stifling innovation.
Mechanisms for effectively bringing private investors into the safety governance conversation were not agreed upon.
Details of how a prohibition on superintelligent AI development could be enforced in practice were not resolved.
The exact process for harmonising ASEAN enforcement agencies and ensuring consistent implementation of standards across member states remains open.
Methods for measuring policy effectiveness and trade‑offs, especially in rapidly evolving AI contexts, were discussed but not concretely specified.
Suggested compromises
Adopt periodic, limited pauses in AI development to allow for testing, auditing, and public transparency before proceeding.
Use coordinated incident‑reporting as a voluntary but widely adopted step, building trust while avoiding punitive legal exposure for reporters.
Middle powers lead on normative frameworks and resource pooling, allowing larger AI‑producing nations to adopt these standards gradually.
Encourage open‑source safety tools and shared metrics as a common baseline, reducing duplication and fostering collaborative improvement.
Thought Provoking Comments
Middle powers and global majority states can’t be seen as peripheral actors; leading from the middle may turn out to be a more powerful approach than previously anticipated.
She reframes the AI governance narrative away from a binary superpower vs. rest dynamic, highlighting the strategic agency of middle‑income countries and suggesting a new diplomatic lever for safety coordination.
This comment shifted the discussion toward the role of non‑superpower nations, prompting panelists from Singapore, Malaysia and the World Bank to discuss concrete ways their regions can influence standards, and set the stage for the later focus on regional cooperation (e.g., ASEAN AI Safety Network).
Speaker: Eileen Donahoe
Trust is built through inclusion and objective evidence; occasionally we should pause, test, monitor, audit, share information, and invest in building confidence that systems respect fundamental rights.
He links the abstract notion of ‘trust’ to concrete procedural steps (pausing, transparency, incident reporting) and frames these as prerequisites for public acceptance, moving the conversation from high‑level principles to actionable governance mechanisms.
His call for pauses and incident‑reporting infrastructure sparked subsequent remarks about coordinated transparency (e.g., OECD’s incident reporting framework) and reinforced the panel’s focus on building practical safety tools, influencing the later emphasis on an international incident response centre.
Speaker: Mathias Cormann
Translating scientific knowledge into policy requires rigorous testing, simulations, and interoperable standards—just as aviation safety demands evidence‑based distance rules for aircraft take‑offs and landings.
She uses a concrete aviation analogy to illustrate the gap between scientific understanding and policy implementation, emphasizing the need for evidence‑based standards and cross‑jurisdictional interoperability.
The analogy deepened the discussion on how technical research can be operationalised, leading other speakers (e.g., Gobind Singh Deo) to stress the necessity of enforcement agencies and standardized testing regimes.
Speaker: Josephine Teo
Standards, regulations, and legislation are ineffective without an agency that can enforce them; otherwise they remain strong on paper but have no real impact.
He highlights a critical missing piece in AI governance—implementation capacity—shifting the focus from rule‑making to institutional capability and sustainability.
This point redirected the conversation toward building enforcement bodies within ASEAN and other regional frameworks, reinforcing the earlier call for institutionalisation and influencing the panel’s concluding recommendations about sustainable structures.
Speaker: Gobind Singh Deo
The biggest risk lies in the labs of top AI companies; a prohibition on superintelligence development should only happen after broad scientific consensus and strong public buy‑in, and political pressure can make such a prohibition feasible.
He brings a stark, lab‑centric perspective that contrasts with the policy‑focused remarks of others, introducing the idea of an outright prohibition and linking it to public mobilisation and political leverage.
His emphasis on a prohibition and the limited role of investors prompted a brief exchange on investor influence, and reinforced the urgency expressed by other speakers about slowing down development and increasing transparency.
Speaker: Jann Tallinn
Ensuring AI systems operate safely and ethically is partly a technical challenge and partly a governance challenge; global coordination is essential because harms cross borders.
He succinctly frames the dual nature of the problem and underscores the necessity of international coordination, setting a conceptual foundation for the entire panel.
This framing guided the subsequent questions from Eileen Donahoe and anchored the panel’s focus on coordination mechanisms, influencing the direction of the discussion toward global governance structures.
Speaker: Stuart Russell
AI is like a sphere that can penetrate any shield, but we can also build stronger protective shields using AI itself; the solution lies in close collaboration between developing and advanced economies.
He uses a vivid metaphor to illustrate the paradox of AI as both threat and defence, emphasizing the need for collaborative learning and co‑development of safety tools across capacity levels.
The metaphor reinforced the theme of partnership between high‑ and low‑capacity countries, supporting earlier points about middle‑power agency and prompting the panel to consider concrete collaborative models for safety tool development.
Speaker: Sangbu Kim
Overall Assessment

The discussion was shaped by a handful of pivotal insights that moved it from a generic acknowledgment of AI risks to a nuanced exploration of governance levers. Stuart Russell’s framing of the dual technical‑governance challenge set the agenda, while Eileen Donahoe’s spotlight on middle‑power agency broadened the geopolitical lens. Mathias Cormann’s call for trust‑building pauses and incident reporting introduced concrete procedural tools, which were later reinforced by Gobind Singh Deo’s insistence on enforcement capacity. Josephine Teo’s aviation analogy and Sangbu Kim’s sphere‑shield metaphor grounded abstract concepts in real‑world analogies, prompting concrete discussions about standards, testing, and collaborative safety tool development. Jann Tallinn’s stark warning about lab‑level risks and the feasibility of a prohibition injected urgency and highlighted the limits of market‑based solutions, leading to a brief debate on investor influence. Collectively, these comments redirected the conversation toward actionable, inclusive, and internationally coordinated governance mechanisms, culminating in a consensus that the coordination gap is real but bridgeable through inclusive institutions, transparent reporting, and sustained political pressure.

Follow-up Questions
What are the key lessons learned from building consensus on AI safety frameworks and what is the most critical piece of coordinated frontier AI safety infrastructure to build now, such as an international incident response center?
Understanding past successes and pinpointing the most needed infrastructure will help shape effective global coordination and rapid response to AI incidents.
Speaker: Eileen Donahoe (to Mathias Cormann)
What role can Singapore and other middle powers play in bridging the coordination gap and keeping scientific and safety channels open, and what is the most important step they can take in the next 12 months to establish a shared minimum understanding of frontier safety?
Middle powers have unique diplomatic leverage; identifying concrete actions can enable them to steer global AI governance despite limited domestic jurisdiction over frontier AI.
Speaker: Eileen Donahoe (to Josephine Teo)
What lessons can other middle powers draw from Malaysia’s experience with the ASEAN AI Safety Network, and what concrete steps should ASEAN take in the next 12–18 months to move beyond aspirational goals?
Malaysia’s dual‑track approach offers a potential model; clarifying actionable steps will help the region operationalize AI safety coordination.
Speaker: Eileen Donahoe (to Gobind Singh Deo)
How can the World Bank help Global South countries transition from passive recipients of frontier AI to active shapers of safety and reliability requirements before large‑scale deployment?
The World Bank’s financing and technical assistance could be pivotal, but specific mechanisms for capacity‑building, standards adoption, and risk assessment need definition.
Speaker: Eileen Donahoe (to Sangbu Kim)
What would an effective prohibition on superintelligent AI development look like in practice, and how could it be enforced?
A clear, enforceable prohibition is a cornerstone of the Future of Life Institute’s stance; detailing its practical design is essential for policy implementation.
Speaker: Eileen Donahoe (to Jann Tallinn)
What would it take to bring investors meaningfully into the AI safety conversation?
Investors shape incentives for AI developers; identifying mechanisms (e.g., safety‑linked financing terms, disclosure requirements) could align capital flows with safety goals.
Speaker: Eileen Donahoe (follow‑up to Jann Tallinn)
What should be prioritized in the next 12–24 months to enhance AI safety and security globally?
A short‑term priority list will guide governments, industry, and multilateral bodies in allocating resources and legislative effort before capabilities outpace governance.
Speaker: Eileen Donahoe (to the panel)
How can coordinated transparency and incident reporting frameworks be standardized across jurisdictions to enable an international AI incident response center?
Standardized reporting is prerequisite for a global response hub; research is needed on data sharing protocols, legal protections, and interoperability.
Speaker: Mathias Cormann (implied)
What are the most effective methods for refreshing AI safety research priorities to keep pace with rapid technological advances?
The Singapore Consensus quickly becomes outdated; a systematic, periodic review process is required to ensure research agendas remain relevant.
Speaker: Josephine Teo (implied)
What practical testing tools and evaluation metrics are needed to give developers assurance of safety before deployment?
Guidelines alone are insufficient; concrete, open‑source testing suites would enable developers to validate safety claims across diverse contexts.
Speaker: Josephine Teo (implied)
How can institutions be built or strengthened within ASEAN to enforce AI safety standards and sustain long‑term governance?
Enforcement agencies are essential for translating standards into impact; research should explore institutional design, funding, and cross‑border coordination.
Speaker: Gobind Singh Deo (implied)
What financing models can ensure adequate investment in AI safety measures, especially for low‑capacity countries?
Developing nations need dedicated funding streams for safety; exploring grants, blended finance, and risk‑sharing mechanisms is critical.
Speaker: Sangbu Kim (implied)
What mechanisms can increase transparency of AI companies’ internal knowledge to support global slowdown efforts?
Greater openness about development roadmaps and risk assessments could create political pressure for a slowdown; viable transparency frameworks must be studied.
Speaker: Jann Tallinn (implied)
How can open‑source safety and evaluation tools be curated, maintained, and adopted globally?
A centralized catalog (e.g., OECD.ai) is a start, but sustainable governance, community contributions, and integration into regulatory processes need further investigation.
Speaker: Mathias Cormann (implied)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Partnering on American AI Exports Powering the Future India AI Impact Summit 2026

Partnering on American AI Exports Powering the Future India AI Impact Summit 2026

Session at a glance – Summary, keypoints, and speakers overview

Summary

Jacob Helberg asked Ambassador Sergio Gor about U.S.-India tech ties; Gor cited “limitless potential” and AI as a three-year priority [2-4][5-9][16-20][22-24].


Sanjay Mehrotra said Micron, a U.S. memory leader with Indian R&D, will open a $2.75 billion Gujarat plant to support Pax Silica AI supply chains [27-33][35-37][39-45][45-46].


Dr. Thakur noted that the semiconductor chain runs from minerals to silicon and that the economy now runs on compute, citing India’s engineering workforce and $25 billion in semiconductor investment [56-61][62][63-71][66-68].


He added that India produced $70 billion worth of mobile phones, exporting $30 billion, providing a base for AI-enabled products [74-75][76].


Secretary Krishnan urged alignment with partners sharing democratic values, warned against single-source dependence, and called for value chains [83-88].


Gor said the AI revolution is inevitable, likening resistance to past tech shifts, and urged both nations to lead together as the world’s oldest and largest democracies [92-101][104-106][335-341].


Micron’s CEO linked its vision of enriching lives through information to the summit’s welfare theme and Pax Silica’s AI supply-chain role [111-119].


Michael Kratsios described the American AI Export Program, which offers full-stack AI technology to partners, backed by agency financing and a new U.S. Tech Corps [150-158].


Panelists said the program lets “national champions” build on the American AI stack, providing hardware, model, and application choices for sovereign AI needs [200-207][272-276][284-291].


They emphasized simplicity for firms and startups, matching buyers with consortia and AI use cases in health, education, and agriculture [245-254][300-306].


The discussion concluded that the reinforced U.S.-India partnership, joint investments, and export framework together create a resilient global AI ecosystem [335-341].


Participants agreed collaborative AI development and secure supply chains are essential for economic growth and shared democratic values [83-88][92-101].


Keypoints


Major discussion points


U.S.-India technology partnership as a strategic priority – Ambassador Gor highlighted the “natural partnership” between the two democracies, the “special relationship” of their leaders, and identified AI as a key focus for the next three years [5-9][16-22].


Micron’s role in building a resilient, secure semiconductor supply chain – Sanjay Mehrotra described Micron’s R&D and memory-design work in India, the $2.75 billion investment in a Gujarat assembly-test facility, and how this effort complements U.S. manufacturing to secure AI infrastructure [30-40][41-45].


The AI revolution as an inevitable, transformative force – The Ambassador framed AI as a historic paradigm shift comparable to the Model T, urging both nations to embrace it together and leverage shared democratic values [92-106].


U.S. government AI export and “sovereign AI” initiatives – Michael Kratsios outlined the American AI Export Program, new financing mechanisms, and the Tech Corps; Kimmett and Remington detailed the forthcoming full-stack consortia, industry-led proposals, and the goal of providing sovereign-AI toolkits to partner countries [150-158][225-236][272-284][285-291].


Broader societal impact of AI in emerging markets – Panelists emphasized AI-driven advances in health, education, and other verticals, citing India’s massive mobile-phone production and the potential for AI-enabled teachers to reach learners of all ages [300-307][308-313].


Overall purpose / goal


The discussion was designed to showcase and deepen the U.S.-India collaboration on artificial intelligence and semiconductor technologies, announce concrete initiatives (e.g., the Pax Silica/Paxilica agreement, Micron’s Indian investments, the American AI Export Program), and articulate a shared vision of building secure, resilient supply chains and sovereign AI capabilities that drive economic growth and societal benefit for both nations.


Overall tone


The conversation maintained an upbeat, promotional tone throughout, marked by optimism, mutual admiration, and repeated affirmations of partnership. Early remarks were celebratory of the bilateral relationship; the mid-session discussion shifted to detailed policy and programmatic explanations, yet retained the same enthusiastic and collaborative spirit. The tone concluded on a hopeful note, emphasizing future opportunities and the transformative promise of AI.


Speakers

Brendan Remington – Deputy Undersecretary for Policy, International Trade Administration, U.S. Department of Commerce (panelist on AI exports) [S1][S2]


Area of expertise: International trade policy, AI export programs


Dr. Randhir Thakur – CEO, Tata Electronics; expert in the semiconductor and technology sector [S3][S4]


Area of expertise: Semiconductor manufacturing, AI hardware, edge technologies


Michael Kratsios – Director of the White House Office of Science and Technology Policy; National Science and Technology Advisor to the President; Head of U.S. delegation to the India AI Impact Summit [S8]


Area of expertise: Science & technology policy, AI strategy, international AI cooperation


Ambassador Sergio Gor – U.S. Ambassador to India [S9]


Area of expertise: Diplomatic relations, U.S.-India technology collaboration


William Kimmett – Under Secretary for International Trade, U.S. Department of Commerce [S11]


Area of expertise: International trade, AI export initiatives, technology partnerships


Secretary S. Krishnan – Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India [S13][S14]


Area of expertise: Electronics policy, semiconductor ecosystem, AI strategy


Sanjay Mehrotra – President and CEO, Micron Technology [S16]


Area of expertise: Memory and storage technology, AI hardware, supply-chain resilience


Jacob Helberg – Under Secretary of State for Economic Affairs, United States [S19]


Area of expertise: Economic diplomacy, U.S.-India trade and technology partnerships


Moderator – Session moderator (unnamed)


Area of expertise: (unspecified)


Mr. Sriram Krishnan – Senior Advisor for Artificial Intelligence, Office of Science and Technology Policy; Panel moderator [S24]


Area of expertise: AI policy, international AI cooperation, technology export programs


Additional speakers: None identified beyond the list above.


Full session report – Comprehensive analysis and detailed insights

The summit opened with moderator Jacob Helberg inviting Ambassador Sergio Gor to outline the United States-India technology partnership, asking him to “help us understand your vision for the opportunities that you see to deepen U.S.-India technology collaboration” [2-4].


Ambassador Gor described the bilateral relationship as built on “limitless potential” and a “natural partnership” in which the United States contributes leading-edge technology while India offers a vibrant innovation ecosystem [5-9]. He noted the personal rapport between the leaders, quoting that “our president really, really, really likes the prime minister,” and said this chemistry would shape the next three years of cooperation [16-20]. Gor identified artificial intelligence as the focal point of the agenda, stating that AI will be the sector on which the United States concentrates its efforts over the coming three-year period [21-24].


Sanjay Mehrotra, CEO of Micron, used the occasion to celebrate the signing of the Paxilica (also referred to as Pax Silica) agreement, calling it a “tremendous initiative” that underscores U.S.-India collaboration on semiconductors and resilient supply chains [27-29]. He highlighted Micron’s more than 60,000 patents and its R&D facilities in India that contribute to “leading-edge memory design” [30-33]. Emphasising memory as “fuel” for the AI-driven digital economy [36-38], Mehrotra announced a $2.75 billion investment in an assembly-test plant in Sanand, Gujarat, which will produce “hundreds of millions of chips,” complement U.S. manufacturing capacity, and reinforce a win-win supply-chain partnership [39-44]. He concluded that initiatives such as Paxilica are essential for building “successful supply-chain resiliency and security” for AI infrastructure [45-46].


Dr Randhir Thakur placed the discussion in a technical context, asserting that the 21st century’s engine is “compute and the minerals that feed it,” a shift from the 20th-century reliance on oil and steel [59-62]. He highlighted India’s massive engineering talent pool (1.5 million graduates annually, accounting for roughly 20 % of the world’s semiconductor design activity) [63-64] and noted that three years ago there was no semiconductor investment in India, whereas today more than $25 billion is being poured into ten factories, including an AI-enabled fab and indigenous packaging technology in Assam for edge-device chips [66-71]. Thakur also pointed to India’s $70 billion mobile-phone production, with $30 billion exported, as a strong manufacturing base that will accelerate AI-enabled products, and affirmed that Paxilica will further boost this momentum [74-76].


Secretary S. Krishnan delivered a macro-level message, urging the audience to “align and ally on lines which really work for people who share values” and to avoid becoming “enslaved…to just one dependence” [83-86]. He stressed the need for “trusted partners” and “trusted value chains” so that technology can serve the public good, framing the summit’s aim to “democratise an important element of technology” [87-89].


Returning to the theme of inevitability, Ambassador Gor warned that “the AI revolution is here” and that denial is futile [92-94]. He drew a historical parallel with the Model T, noting that early resistance to the automobile came from horse-and-buggy drivers, yet no one would now choose a buggy [95-101]. Gor argued that AI, like past transformative technologies, will become indispensable, and that the United States and India, “the world’s oldest democracy and the world’s largest democracy,” must lead together, using shared democratic values to harness AI for good [104-106][335-341].


Micron’s CEO later linked the company’s long-standing vision, “transforming how the world uses information to enrich life for all,” to the summit’s welfare motto “Sarvajan Hittai and Sarvajan Sukhai” [111-113]. He reiterated that memory and storage are the backbone of artificial general intelligence (AGI) and that Micron’s investments, both in the United States and in India, together with Paxilica, will secure the AI supply chain and shape the future of AI worldwide [114-119].


Michael Kratsios, U.S. National Science and Technology Advisor, outlined the American AI Export Program. He described a suite of financing tools (the International Development Finance Corporation, the Export-Import Bank, the U.S. Trade and Development Agency, the Millennium Challenge Corporation, and a new World Bank fund) designed to help partner countries import the American AI stack [150-154]. Kratsios announced the launch of the U.S. Tech Corps, a modernised Peace Corps that will embed technical volunteers with partners to provide “last-mile support” for AI applications across sectors [155-158]. He framed AI as a new frontier that can “unlock new knowledge…and new sources of prosperity” and called on democracies to join the effort [159-162].


William Kimmett of the Department of Commerce reinforced the priority of building AI infrastructure, stating that AI needs “energy” and “data centres” for national security and economic stability [214-217]. He explained that the AI Export Program stems from an executive order that calls for industry-led consortia to offer full-stack AI solutions, and that the Commerce Department has already issued a request for information, receiving “hundreds of submissions” now being analysed [225-236]. Kimmett further clarified that the program will support “national champions” by providing a foundational American AI stack on which domestic firms can build sovereign capabilities [272-276].


Brendan Remington detailed the design of these consortia, emphasizing simplicity with “t-shirt sizes of small, medium, large” to make the offering accessible to both large buyers and startups while still accommodating niche, highly customised solutions [251-259]. He described “sovereign AI kits” that let countries choose which components of the stack (chips, GPUs, models, agents) to adopt, thereby supporting diverse policy and security preferences [270-283]. Remington also highlighted priority verticals (health, education, agriculture, manufacturing, maritime) and suggested a “one-stop-shop” approach to match buyers with appropriate AI solutions [284-291][300-306].


Sriram Krishnan, senior AI advisor, expressed optimism about the energy of the Indian ecosystem, especially its youth, noting that AI-driven tutoring could provide “a teacher…who never gets tired, who knows how to speak to you in a local language” and transform learning for all ages [307-313]. He closed by reiterating the historic partnership between the two democracies, the significance of the Paxilica signing, and the promise of an “amazing, enduring technology partnership” [335-341].


Consensus vs. divergence – The panel largely agreed that AI is an inevitable, transformative force and that the U.S.-India partnership is strategically vital. Points of nuanced difference emerged: Gor highlighted personal diplomatic rapport (“our president really, really, really likes the prime minister”) as a catalyst, whereas Krishnan emphasized shared democratic values and diversified, trusted supply chains; Kratsios focused on outward-looking export financing and the Tech Corps, while Kimmett stressed domestic infrastructure (energy, data centres) as the primary security priority [16-20][83-89][150-162][214-217].


Key takeaways


– The Paxilica (also referred to as Pax Silica) agreement deepens U.S.-India AI and semiconductor collaboration [27-29].


– Micron’s $2.75 billion Gujarat assembly-test plant will complement U.S. manufacturing and bolster supply-chain resilience [39-44].


– India’s expanding talent pool (1.5 million engineers), AI-enabled fab, indigenous packaging, and $70 billion mobile-phone output provide a strong base for AI-enabled products [63-64][66-71][74-76].


– A shared call to diversify supply chains and align on democratic values underpins the partnership [83-89].


– The American AI Export Program introduces financing mechanisms and the U.S. Tech Corps to accelerate AI adoption in partner countries [150-162].


– “Sovereign AI kits” allow countries to select stack components, supporting autonomy and policy preferences [270-283].


– Health, education, agriculture, manufacturing, and maritime sectors were identified as priority verticals, with AI-driven education tools highlighted as a flagship use case [284-291][300-306].


Action items


– Finalise the Paxilica agreement.


– Micron to proceed with construction of the Gujarat assembly-test facility.


– The Department of Commerce to issue a public call for industry-led consortia proposals (full-stack AI solutions).


– Launch the AI Agent Standards Initiative and the U.S. Tech Corps.


– Ambassador Gor to focus on AI collaboration over the next three years.


– Panelists identified health and education as priority verticals for future pilot projects (specific outreach actions were not detailed in the transcript) [150-162][284-291].


Unresolved topics – The transcript did not provide details on the operational framework for Paxilica’s supply-chain security mechanisms, the exact timeline and eligibility criteria for the AI export consortia call-for-proposals, how data sovereignty, model ownership, and regulatory compliance will be managed within sovereign AI kits, coordination mechanisms between U.S. and Indian R&D teams for next-generation memory designs, allocation criteria for financing through IDFC, EX-IM and other agencies, or concrete metrics for monitoring the “trusted partnership” principle to avoid over-reliance on any single source.


Overall, the summit presented a cohesive narrative that linked geopolitical relationships, technical dependencies, and policy frameworks into a roadmap for a resilient, democratic-led global AI ecosystem. High-impact remarks, from the personal chemistry highlighted by Ambassador Gor to Micron’s memory-fuel metaphor, Dr Thakur’s compute-economy framing, Secretary Krishnan’s values-based call, and Kratsios’s export-programme blueprint, steered the dialogue from celebratory announcements to actionable strategies for joint AI development and secure supply-chain construction, suggesting a strong likelihood that the announced initiatives will translate into coordinated policy actions, joint investments, and a durable U.S.-India partnership in artificial intelligence.


Session transcript – Complete transcript of the session
Secretary S. Krishnan

and resilient supply chain in these critical areas of technology which the world needs.

Jacob Helberg

And that’s actually a great segue to shift to Ambassador Gor, who just arrived in India with a bang. Ambassador Gor, could you help us understand your vision for the opportunities that you see to deepen U.S.-India technology collaboration? Thank you.

Ambassador Sergio Gor

Thank you, Jacob. Jacob, look, limitless potential, those are the two words. And I truly mean it. As I’ve started traveling around this country, and I’ve been to multiple states already, what I have seen here, it’s such a natural partnership. And what the United States has with the best technology and with the innovation that we see here across India, this is a natural partnership. The President and the Prime Minister have a special relationship, and I mean that. And that goes a long way. You have great elements here in the sense of the technology, in the sense of the innovation.

and in the want. India wants to get involved. But also the magic touch is that special relationship between our two leaders. It’s a friendship that goes back many years. And those colleagues of mine from Washington understand the difference that it makes when our president likes you or he doesn’t like you. And with India, our president really, really, really likes the prime minister. And so that makes a huge difference for the next three years. Not only the administration, but the White House itself is open to engaging India. And one of the areas where we can further this is AI, the technology sector. And so that’s something that I’ll be focused on over the next three years.

Jacob Helberg

Thank you. Sanjay, could you help us understand a little bit, what does the partnership between America and India mean for the security of the supply chains of a company like Micron? Which obviously operates on a global scale.

Sanjay Mehrotra

First of all, Jacob, let me just say congratulations on this India and U.S. Paxilica signing today. This is certainly a tremendous initiative and wonderful to see the collaboration between the two great countries on the technology front, semiconductors, and, of course, resilient, secure supply chains. Micron, as I was mentioning earlier, is a global memory and storage leader. And, of course, Micron is headquartered out of the U.S., an American company and innovation powerhouse, with 60,000-plus patents. We are here in India. We have R&D facilities here in India, absolutely contributing to leading-edge memory design. Secretary Vaishnav earlier talked about two-nanometer designs. In memory, the most advanced designs in the world are also taking place here in India.

very much in collaboration with our teams in the U.S. So it’s an example, a good example, of how we are advancing AI forward. Memory is a critical enabler of AI. Just think of it this way: if, you know, AI is the growth engine of the digital economy, then memory is the fuel. And that fuel is being, you know, really developed and manufactured through collaboration between the U.S. and India, with R&D teams here, but also manufacturing. And that’s an important piece, with Micron performing assembly and test operations here in the Sanand, Gujarat facility, with investments, with the support from the Indian government, with $2.75 billion of investments, that with time will result in hundreds of millions of chips assembled and tested here.

And that complements Micron’s manufacturing plants in the U.S. Actually, as you look at our manufacturing plants in the U.S. on the silicon side, as well as the advanced packaging side, the work we’ll do here will complement that. It will add to it. It will contribute to it in terms of AI in manufacturing, in terms of automation in manufacturing, refining and making workflows related to manufacturing more efficient. This will be a win-win partnership, with Micron’s investments in the U.S. getting the support and all the learnings of large-scale manufacturing of assembly and test operations here in India as well. So we are really looking forward to it. And it’s initiatives like Paxilica that absolutely ensure that there is successful supply chain resiliency and security built in to continue to build the AI infrastructure and advance the technology.


Jacob Helberg

Thank you. And Dr. Thakur, could you help us understand a little bit better the special connection between heavy data center investments and edge technology like smartphones and connected vehicles, especially in emerging markets?

Dr. Randhir Thakur

Well, thank you very much. And I first want to really congratulate on Paxilica at a personal level. It’s very exciting. We are doing this between our two countries. Truth be told, for my PhD, I went to Oklahoma of all places. You know, so I’m a Sooner, and pretty soon I realized that in football I didn’t have a chance, so, you know, I worked on silicon. But you know, the key is that the first transistor built was really built on germanium, produced near Oklahoma, a germanium transistor. Until we switched to silicon, and thank God we did, and Shockley made the first transistor in Bell Labs, and the rest is history. So our industry has always been dependent on this material engineering, the ability to work these minerals and deploy them into making the chips.

And as far as the question about data centers, I think the enablement of the data centers or AI is hardware driven. Because AI was known a long time ago, but the hardware was not ready. Our ability to compute was just not there. And as you have said, Undersecretary Helberg, the 20th century ran on oil and steel. The 21st century runs on compute and the minerals that feed it. That is so true. So Pax Silica is just such a timely change. For us in India, the innovation and the drive we have is tremendous. 1.5 million engineers are produced every year. 20 % of the global semiconductor chip design is done by Indian engineers here in India.

And we never really had any non-coercive issues in the design space. So I think this is a very, very natural fit. In terms of the progress we are making, I think three years ago, there was no investment in India on the semiconductor side. Today, we have more than $25 billion being invested in 10 different factories, including Micron and Tata Electronics. We are working on the first AI-enabled fab that will be producing the AI-specific chips in India. We are using the indigenously developed packaging technology in Northeast Assam, where we’ll be packaging all of the automotive and other chips that are at the edge, being done for the U.S. companies. Partnership-wise with the U.S., because semiconductors bring us together, we are working with companies like Analog Devices,

Qualcomm, Synopsys, and Inter, where we have memoranda of understanding to work together to deploy the ecosystem. Sometimes we are the customers, sometimes they are the customers. So at a holistic level, that engagement is moving extremely well. On mobile phones, I think India produced mobile phones worth $70 billion in the last year, $30 billion of which were exported out. So there is just tremendous push all around in terms of manufacturing. And this initiative today, I really believe it’s going to bring and accelerate the momentum that we already have. Thank you.

Jacob Helberg

I want to end by zooming out and asking a question for all of our panelists that’s a little bit more macro. As we gather here in India in front of world leaders and business executives, and as the global economy undergoes this incredible change driven by the reorganization of our supply chains and the AI revolution, what is your message to this audience? And maybe we can start with Secretary Krishnan and work our way down.

Secretary S. Krishnan

The message to this audience is that we need to align and ally on lines which really work for people who share values, for countries that share values, and to ensure that we do not become enslaved or do not become tied down to just one dependence. I think that is the critical thing. That is what we learned through the pandemic and through all the geopolitical upheavals. And therefore, we need to have trusted partners with whom we can work and trusted value chains so that technology can work for all of us. In organizing this India AI Summit, I think what we have truly managed to do is to democratize an important element of technology. The people have been let into the room, and that needs to continue through

valuable partnerships.

Jacob Helberg

Thank you. Ambassador Gor, you talked about limitless potential earlier. Can you give us a little bit of color on what your main top-level message is to this audience?

Ambassador Sergio Gor

Look, the message is the AI revolution is here. People can pretend it’s not. It’s coming. And so it’s one of those things, the sooner that people can adapt, to your point, the sooner that people can partner with like-minded individuals, that’s a good thing. And so you find in some places of the world, not India, but in other places of the world, where they’re going to resist AI, where they’re going to resist this revolution, it’s here. It’s here to stay. Every hundred years, every so often, we see in history something that changes the world. And you always have a sector that resists. When Ford had the first Model T come off the assembly line, the first people that protested were those in a horse and buggy.

But today, nobody would want to go back to a horse and buggy and give up their cars. That revolution came, whether you like it or not. And the same thing is going to happen here over the next few years. And so India and the United States are at the leading, at the cutting edge of this new technology, embracing it, using it for good, and partnering with those who share our common values. We’re the world’s oldest democracy. This year we’re celebrating 250 years. India is the world’s largest democracy. This is a natural partnership for both of our nations.

Jacob Helberg

Thank you. Sanjay?

Sanjay Mehrotra

Micron’s vision statement, defined several years ago, is transforming how the world uses information to enrich life for all. And that vision is truly coming to life today. This AI summit, the message of Sarvajan Hittai and Sarvajan Sukhai, welfare for all, happiness for all, is very much aligned with Micron’s vision. The U.S. vision for AI in terms of national and economic security, and, of course, the businesses and the global leaders around the globe working toward AGI, artificial general intelligence, all of this critically relies on memory and storage, and Micron is very proud to be at the center of it. More and more memory is needed. Micron is making the investments in order to increase the supply.

But it’s not about just the importance of memory and storage to the advancement of AI. It’s not just about investments that Micron is making in the U.S. to advance the semiconductor supply chain, as well as in India and other locations, but it is also absolutely about initiatives like Pax Silica that really secure the future of the supply chain and ensure that AI infrastructure and AI capabilities will be there, ready to shape the world of the future. We are very proud to be part of this, very proud as an American company to be able to bring up advanced technology capability here in India, which will benefit our U.S. operations as well, and very thankful to the partnership between the U.S.

and India to jointly define the future of AI and shape the future of the world.

Jacob Helberg

Thank you so much. Dr. Thakur, any closing thoughts for the audience?

Dr. Randhir Thakur

Well, thank you very much. As our Tata Sons chairman, Mr. Chandrasekharan, said yesterday, under the vision of our prime minister, India has treated AI as a strategic national capability. I see the declaration of Pax Silica as a response and an enabler, a codification of trust, and, for us, the opportunity to work together. The expectation is laid out from the nations. It is now up to us to deliver on this promise as an industry. So, Honorable Undersecretary Helberg, Ambassador Gor, I really want to thank you from the bottom of my heart for Paxilica. We’ll make it work. Thank you.

Moderator

Thank you. And this is the panel partnering on the American AI Exports Program. First, I take this opportunity to welcome Mr. Michael Kratsios for the keynote remarks to kick off this session. Michael Kratsios is the head of delegation for the United States to the India AI Impact Summit. And also, he is President Trump’s National Science and Technology Advisor and the Director of the White House Office of Science and Technology Policy.

Ladies and gentlemen, please welcome Mr. Michael Kratsios.

Michael Kratsios

asked to choose between completing the stack and developing a domestic AI, we have established a national champions initiative. We recognize that partners need a chance to build their native technology industries and believe facilitating this will be a critical part of the export program. To facilitate the development of industry-led, open, and secure AI standards and to give the public confidence in this next generation of technology, we are creating an AI agent standards initiative. To empower developing partner countries to overcome financing obstacles as they import the American AI stack, the U.S. International Development Finance Corporation and the Export-Import Bank of the United States, the U.S. Trade and Development Agency, the Millennium Challenge Corporation, and a new World Bank fund have initiated new AI-focused programs.

And to further enable AI adoption in the developing world, the Trump administration is bringing America’s historic Peace Corps into the 21st century with the launch of the U.S. Tech Corps. This initiative will embed volunteer technical talent with import partners to provide last-mile support in deploying powerful AI applications for enhanced public services. In everything from energy and education to manufacturing and medicine to transportation and agriculture, I’m confident that the American AI stack can be key to unlocking new economic and social benefits for your people. The hope of the United States is that the pursuit of real AI sovereignty, the adoption and deployment of sovereign infrastructure, sovereign data, sovereign models, and sovereign policies within your borders under your control, will become an occasion for bilateral diplomacy, international development, and global economic dynamism.

The American AI Export Program exists to make that happen. The U.S. wants to share the American AI stack because this technology presents the opportunity to lead, as our nation’s founders did 250 years ago, a revolution in human history to the benefit of all of mankind. These tools used well will unlock new knowledge for our growth and new sources of prosperity and challenge us to grow the strength of our humanity to match our growing capabilities. American AI is settling a new frontier, but America does not seek to build this new future alone. So I ask you to join us. Thank you.

Moderator

Thank you so much, Mr. Kratsios, for your ideas, your remarks, which are truly enlightening and illuminating as well. Ladies and gentlemen, next I would like to invite the speakers for a panel on partnering on AI exports. Interesting, isn’t it? Well, the moderator is Mr. Sriram Krishnan, the Senior Advisor for Artificial Intelligence at the Office of Science and Technology Policy, and the panelists are Department of Commerce Undersecretary for International Trade, Mr. William Kimmett, and Department of Commerce Deputy Undersecretary for Policy at the International Trade Administration, Mr. Brendan Remington. Please welcome the panelists. Over to you, Mr. Krishnan.

Mr. Sriram Krishnan

Good morning. How is everyone doing? How is everyone doing? First off, before we get started, I just want to say what a privilege and honor it has been for us to be here the last couple of days. I want to thank all of our hosts. I want to thank the Honorable Prime Minister Narendra Modi. I want to thank the huge team which has made this possible. It has been an amazing privilege. And especially today, when I was roaming the halls, I was just struck by the honor and the privilege of being here. And I want to thank the optimism of so many of the delegates and attendees here. In particular, I was struck by the optimism of so many young people, so I’m curious, how many of you here are students? Okay, can all of you who are students just please stand up? Okay, can everyone else give them a round of applause? Because I was just so blown away by the enthusiasm they have for AI, and, you know, the hope and the potential. Thank you for coming here. I think you need to get back to studying after this, but thank you for coming here, it really blew me away. And I wanted to say that just because I think that hope and optimism is what we in the Trump administration have really embraced when it comes to AI, and I think that’s going to be a core part of when we talk about AI exports. So first off, I want to introduce my distinguished fellow panelists. We have Under Secretary William Kimmett from the Department of Commerce, and we have Deputy Under Secretary Remington. Well, before we get into the serious stuff, you’ve been all over India for the last couple of days.

No pressure, but what has been your favorite part? Everyone here is judging you.

William Kimmett

My favorite part, I think, it’s been fabulous. We actually did a stop in Bangalore before we came to Delhi, which was really fabulous and really just amazing. I want to echo what Sriram said about the excitement and the dynamism we’re seeing in the ecosystem here, and it’s just really remarkable, and particularly the young, talented students here in India. It’s just really been remarkable to see. And I’d say riding in the streets of Bangalore, that was an experience, and seeing the traffic there. But what I noticed while we were driving throughout all the traffic around us was how, well, digitalized the country is. And, you know, I see people on motorcycles, and they’re on the back with their phones, and everybody’s on their phone, and just how digital the country is, and it’s really remarkable.

So I’d say experiencing the streets of Bangalore on the riding side, but also seeing how integrated tech is in everybody’s everyday life here has been really remarkable to see.

Mr. Sriram Krishnan

Amazing. Anybody from Bangalore or Karnataka here? Okay, a couple of folks. Okay, you need to help show them around next time he’s there. There you go. And Deputy Undersecretary, what about you?

Brendan Remington

I’d say the energy and the pace. I mean, it’s just unreal. I’ll stick with the driving theme. I think you can see it. It’s both precise and it’s decisive. It doesn’t wait for you. It’s representative of a lot of things, and Indians keep pace. I love the energy.

Mr. Sriram Krishnan

That’s true. I think the energy has been amazing. And so we’re going to talk about exports, but all of this comes from what President Trump set into motion in his very first week in office, where he did two things. First, he rescinded the Biden diffusion rule, which, as Dr. Kratsios said, made it difficult. It made it near impossible for countries like India to access advanced semiconductor chips. So I think that’s a big thing. Second, he tasked all of us with coming up with an action plan to deliver on what the country’s, America’s, priorities should be when it comes to AI.

And we did that in July, and we have come up with three priorities. First is to build infrastructure. AI needs energy. AI needs data centers. And we’ve been focused on building those in a way that works for America and works for our citizens. Second, we’re focusing on innovation. How do we make sure that we have our entrepreneurs and we have our companies building the technologies that are necessary? But third, I think, is a spirit of partnership. How do we share these technologies that are built in Silicon Valley, in America, with our allies and with the rest of the world? And that’s what we’ve been really focused on. And on that end, and Will, I’d love to start with you.

Could you talk a little bit about the AI export program that Dr. Kratsios has talked about, what it is?

William Kimmett

Absolutely. So certainly, President Trump has made AI a national priority. And so what does that mean? And when you think of the United States and our great tech companies, obviously, we’re doing what we can to support them. And of course, we’re doing that for our national security, our economic security and the success of our great companies. But how do we use that to share that with the rest of the world? And so specifically on this AI exports program, the president issued an executive order last July that tasked the Department of Commerce with standing up the AI exports program. And what that is, is it’s going to call for industry led proposals of consortia that will offer full stack offerings to the world and how we can promote the exports of those full stack consortia.

So it’s sort of a question of what does that mean? What does full stack mean? And so we wanted to make sure we were as thoughtful as possible in this process. And so we issued a request for information, asking companies to give us information, tell us what might be helpful, tell us maybe what wouldn’t be helpful. And we got a tremendous, tremendous response from the industry. We got hundreds of submissions, and we have spent the last several weeks digesting those and understanding the dynamics that maybe we weren’t aware of and things we should think about as we craft this program. And we are putting the finishing touches on it. And the next step is going to be a public call for proposals from the industry to submit these consortia and how we’re going to shape that program to do full-stack offerings and maybe other offerings as well.

Mr. Sriram Krishnan

That’s awesome. And Deputy Undersecretary, if I may come to you, maybe if you can just get into the details. We have guests from multiple countries over here. We have companies from all over the world here. Could you maybe break down a little bit about the next level of granularity? How do these consortia work if I’m a country attending this event or if I’m a company? what should I be doing?

Brendan Remington

Sure. I’ll start by saying you’ll hear more on how it actually works, but I’ll describe what we’ve heard so far and what people have asked of us. We’ve heard really two motions. One is how can we go outbound to the world? How do we offer, how do we help companies find buyers? And then on the other side for foreign buyers, how do we make it easier for you? And so as we’ve looked at that, we’ve decided, and as we’ve approached it, we’ve looked at a couple of different kinds of consortia. On the one hand, you would think, and what we’ve heard is make it easy, make it simple, like t-shirt sizes, small, medium, large.

I don’t need 100 permutations. I just need to know what’s available. But there are others who do want that special, very, very unique niche kind of thing, and we want to accommodate both of those. And we’ll say in each of these, we’re looking for simplicity. We’re looking for elegant solutions. Our goal here is to make this easy for both sides: for buyers, whether they are governments, whether they are state-owned enterprises or any sort, and then also for the real companies that we talk to, both the large ones but also the small startups who are thinking, what should I do next? I’m in my Series A, I’m in my Series B. Should I sell abroad? Is this possible? We want to make that feasible for them.

Mr. Sriram Krishnan

And so if I’m a founder, should I come find you?

Brendan Remington

Yes.

Mr. Sriram Krishnan

Oh, there we go. Wow, I like putting him on the spot over there. So find him.

Brendan Remington

Through the website, not me personally.

Mr. Sriram Krishnan

He’s the man. I think for the last couple of days, one of the remarkable announcements was the launch of Sarvam’s new model, which I was really blown away by. And if you folks haven’t checked it out, you should check out some of the technical details. It is really, really impressive. And I think that is a good segue to the theme of sovereign AI. We have countries all over the world who want to have sovereign AI capability. What does it mean, when working with some of the programs that we are talking about today, if you’re a national champion or if you’re a country which wants to have sovereign capabilities?

William Kimmett

Sure. So I think the program is going to, of course, be built on the American AI tech stack. But then, like I said, what does that mean exactly? And so what it really means is we want to set the foundation for possibilities as we’re exporting to other countries. And so in the context of a national champion, you know, if there’s a great company that wants to use American tech, we provide that foundation and then allow that national champion to build on that foundation of American tech. And so it’s really providing a level of the tech stack to countries so that they can build on that with their great local domestic champions.

Mr. Sriram Krishnan

I totally agree. I think one thing that, you know, when it comes to the stack, is there are multiple parts of it. There are the chips, the GPUs, the TPUs, whether using NVIDIA or AMD or Google. For example, you know, Sarvam has done great work working with NVIDIA on training their model. Then there is the model layer. There are agents or applications on top. So I think when we talk about the program and the stack, it is really you can pick as a company or as a country what part of the stack you want to build on. And there’s a whole range of possibilities.

Brendan Remington

If I could add one thing, we’re trying to facilitate choice. We hear about AI sovereignty a lot. And there are many different versions of this, right? We hear about, does every village need their own data center, right? Or does everyone need an LLM for, like, their specific context? Some of them just say, I want control over my data. I want to know where it goes. I want transparency. Because there are so many permutations, we want to offer these many choices and allow each context and each buyer to make those choices.

Mr. Sriram Krishnan

And that is true. And I think I want to go back to what Dr. Kratsios said, and I said, about the first week of President Trump being in office: he wanted to make it easy for other countries to get access to our technology, and that set this in motion last January. Next up, I want to move to use cases. A lot of countries all over the world that Dr. Kratsios is talking about are trying to figure out how to adopt AI, how to provide their citizens a better quality of life, better services. What are use cases that are interesting to you that you think, you know, we are going to see a lot of great progress and work on in the next year or two?

William Kimmett

Yeah, so I think the ones that are interesting to me, certainly in emerging markets, are both in the health space and the education space, and what we can do to bring AI solutions in those crucial sectors in various countries. And so, working with, like, the Ministry of Health in an emerging market and coming up with a solution that would revolutionize their health industry to the benefit of their citizens, that’s the part of the program that really excites me.

Mr. Sriram Krishnan

Ben?

Brendan Remington

Yeah, there’s so many. I mean, it’s so sweeping, but others we’ve also heard have been agriculture, manufacturing, I mean, maritime, you name it. There are a lot of verticals that have so many new use cases and so many new applications coming out all the time. I think back to the simplicity point, organizing around verticals, a one-stop shop, if you come here, this is where you can find offerings, has been something we’ve heard would be very useful.

Mr. Sriram Krishnan

I think so. For me, there’s obviously many, but I think education is something which has just blown me away. Even this morning, talking to a student, I met somebody from my alma mater, in their second year of undergrad, who’s just doing amazing things with AI at an age when I was not doing anything at all. And I think that’s something that’s really important. And, you know, that just fills me with hope and inspiration. Imagine every student, whether you could be five years old or maybe you’re 50 years old, having access to a teacher, a lecturer, a professor who never gets tired, who knows how to speak to you in a local language, can answer any single question.

I think that is going to change so many people’s lives. Okay, one last note. We’re all working on AI. Just on a broad theme, what is something about AI, whether in the U.S. government or in how you’re approaching your work, that fills you with optimism?

William Kimmett

I’d say for me, speaking as someone working for the government, we’re talking about helping export the U.S. AI tech stack to the world. We in the U.S. government also need to do a good job of bringing it into government in a lot of the work we do. I run the International Trade Administration. As part of that, we do a lot of analysis of supply chains, and there are certainly better ways we could do it. So that’s one area where, as we’re bringing tech to the world, we also need to do a good job of bringing it into the U.S. government and helping us become more efficient as well.

Mr. Sriram Krishnan

I love that. And I have to note that in the U.S. government we’ve done a lot of work; if you look at the action plan, a lot of work on making things more efficient. And you?

Brendan Remington

I’d say two things. The first is that it’s so sweeping: there are very few technologies that change both your personal life and your work life, and both are changing very quickly. The second is that the hunger for this is so high. It’s not a hard sell. We don’t have to sell AI in the sense of, do you want this? People want this. It’s really, how should we best provide it to them? How do we help both sides, the companies and the buyers? Being in the middle of that and enabling it is very exciting.

Mr. Sriram Krishnan

I agree. I think it’s a great note to end on. I just want to close by reemphasizing something that Ambassador Gor spoke about earlier: these are two great nations, the world’s oldest democracy and the world’s largest democracy, and both are countries I obviously have very deep ties to. A lot of this has been made possible by the special relationship between President Trump and Prime Minister Modi. And what you saw today, with the Pax Silica signing with Undersecretary Helberg and Dr. Kratsios, and with Dr. Kratsios’ announcement, is such a remarkable moment. But for me, it is just the beginning of what is going to be an amazing, enduring technology partnership.

But thank you so much, and thank you for being an amazing audience.

Moderator

Thank you. Hello everyone, welcome back once again. I’m sure you’re all refreshed after this break. We’re now going to start the next session, with some wonderful keynote speakers with us once again, and a great lineup, as I said in the morning as well. So now I’m going to invite our keynote speaker, Mr. Jeetu Patel, President and Chief Product Officer of Cisco. Mr. Patel sits at the intersection of AI and enterprise infrastructure, the plumbing that makes it all work.

At Cisco, he’s leading the company’s transformation into an AI-native networking and security powerhouse. In a world obsessed with models and algorithms, his reminder that none of it works without resilient, secure infrastructure is both timely and essential. Ladies and gentlemen, please welcome Mr. Jeetu Patel.

Related Resources
Knowledge base sources related to the discussion topics (15)
Factual Notes
Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Jacob Helberg served as moderator and invited Ambassador Sergio Gor to discuss the U.S.–India technology partnership.”

The knowledge base lists Jacob Helberg as the Undersecretary of State for Economic Affairs who moderated the discussion and includes Ambassador Sergio Gor as a panelist, confirming his invitation to speak.

Confirmed (high)

“Ambassador Gor described the bilateral relationship using the terms “liberalist potential” and “natural partnership.””

Source S2 contains Gor’s exact wording “liberalist potential” and “natural partnership,” corroborating the report’s description.

Correction (high)

“The agreement discussed is called Pax Silica (referred to in the report as Paxilica).”

The knowledge base refers to the agreement solely as “Pax Silica” and does not mention the name “Paxilica,” indicating that the correct name is Pax Silica.

Confirmed (medium)

“The Pax Silica agreement was highlighted as a significant U.S.–India collaboration on semiconductors and supply‑chain resilience.”

Source S5 mentions “Pax Silica” as an agreement that will change how the two countries work together in this domain, confirming its relevance to semiconductor and supply‑chain cooperation.

External Sources (86)
S1
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — -Brendan Remington- Department of Commerce Deputy Undersecretary for Policy at the International Trade Administration
S2
https://dig.watch/event/india-ai-impact-summit-2026/partnering-on-american-ai-exports-powering-the-future-india-ai-impact-summit-2026 — Thank you so much, Mr. Kratios, for your ideas, your remarks, which are truly enlightening and illuminating as well. Lad…
S3
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — -Dr. Randhir Thakur- Doctor/Expert in semiconductor and technology sector
S4
Keynote Adresses at India AI Impact Summit 2026 — -Randhir Thakur- CEO of Tata Electronics I invite our distinguished guests to please join us for this conversation. Und…
S5
https://dig.watch/event/india-ai-impact-summit-2026/keynote-adresses-at-india-ai-impact-summit-2026 — I invite our distinguished guests to please join us for this conversation. Undersecretary Jacob Helberg is going to mode…
S6
https://dig.watch/event/india-ai-impact-summit-2026/partnering-on-american-ai-exports-powering-the-future-india-ai-impact-summit-2026 — Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you….
S7
Taiwan deepens AI and digital ties at APEC summit — Taiwan’s Digital Minister, Huang Yen-nun, discussed deeper cooperation in digital and AI technologies with the United Stat…
S8
Keynote Adresses at India AI Impact Summit 2026 — -Michael Kratios- OSTP Director (Office of Science and Technology Policy), head of U.S. delegation
S9
S10
Keynote Adresses at India AI Impact Summit 2026 — Ambassador Sergio Gor explained that Pax Silica creates “a coalition of capabilities that replaces coercive dependencie…
S11
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — -William Kimmett- Department of Commerce Undersecretary for International Trade
S12
https://dig.watch/event/india-ai-impact-summit-2026/partnering-on-american-ai-exports-powering-the-future-india-ai-impact-summit-2026 — Thank you so much, Mr. Kratios, for your ideas, your remarks, which are truly enlightening and illuminating as well. Lad…
S13
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -S. Krishnan- Role/Title: Secretary of METI (Ministry of Electronics and Information Technology) This discussion on Ind…
S14
Empowering India & the Global South Through AI Literacy — -Shri S. Krishnan: Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India
S15
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — -Mr. Sriram Krishnan- Senior Advisor for Artificial Intelligence at the Office of Science and Technology Policy, Panel m…
S16
Keynote Adresses at India AI Impact Summit 2026 — -Sanjay Mehrotra- CEO of Micron Technology And so we are here to listen to our distinguished guests as they present the…
S17
https://dig.watch/event/india-ai-impact-summit-2026/keynote-adresses-at-india-ai-impact-summit-2026 — And so we are here to listen to our distinguished guests as they present their views, their remarks on Pax Silica. This …
S19
Keynote Adresses at India AI Impact Summit 2026 — -Jacob Helberg- Undersecretary of State for Economic Affairs, United States I invite our distinguished guests to please…
S20
S21
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S22
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S23
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S24
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — -Mr. Sriram Krishnan- Senior Advisor for Artificial Intelligence at the Office of Science and Technology Policy, Panel m…
S25
https://dig.watch/event/india-ai-impact-summit-2026/partnering-on-american-ai-exports-powering-the-future-india-ai-impact-summit-2026 — Thank you so much, Mr. Kratios, for your ideas, your remarks, which are truly enlightening and illuminating as well. Lad…
S26
AI Governance Dialogue: Presidential address — Ettore Balestrero: On behalf of His Holiness Pope Leo XIV, I would like to extend his cordial greetings to all participa…
S27
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Ladies and gentlemen, technological development does not automatically equal to social development or progress. The hist…
S28
Panel Discussion Data Sovereignty India AI Impact Summit — Specific mechanisms for ensuring trusted supply chains and technology partnerships
S29
Conversation: 01 — Krishnan outlined the Trump administration’s three-pillar strategy developed over 13 months. The first pillar focuses on…
S30
AI and the future of digital global supply chains (UNCTAD) — In conclusion, AI has emerged as a powerful tool that can significantly impact trade logistics. It can optimize routes a…
S31
Supply Chain Fortification: Safeguarding the Cyber Resilience of the Global Supply Chain — Assets and items are becoming digitized and being pushed from the physical space to the cyberspace, so securing the cybe…
S32
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — The speaker describes AI as a technology that expands human cognitive capacity, likening its impact to the physical ampl…
S33
Closing Ceremony — This argument positions artificial intelligence as a transformative force rather than merely a technological tool. It su…
S34
Artificial intelligence and diplomacy: A new tool for diplomats? — Artificial intelligence (AI) is transitioning from science fiction into our everyday lives. Over the past few years, the…
S35
AI export rules tighten as the US opens global opportunities — President Trump has signed an Executive Order to promote American leadership in AI exports, marking a significant policy s…
S36
What is it about AI that we need to regulate? — Several sessions raised concerns about foreign-driven agendas in development aid. In Open Forum #67, moderator Alison Gil…
S37
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S38
Driving Indias AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S39
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S40
Trusted Connections_ Ethical AI in Telecom & 6G Networks — These key comments fundamentally transformed the discussion from a technical implementation conversation to a strategic …
S41
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S42
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S43
The US push for AI dominance through openness — In a bold move to maintain its edge in the global AI race—especially against China—the United States has unveiled a swee…
S44
Building the AI-Ready Future From Infrastructure to Skills — And Manhattan Project, about 65 % of the entire funding of Manhattan Project was at Oak Ridge National Laboratory. And i…
S45
Skilling and Education in AI — Infrastructure development emerged as crucial, with investments in data centers, subsea cables, and compute capacity to …
S46
WS #100 Integrating the Global South in Global AI Governance — Roeske Martin: Thanks Fadi, great question. So I think you made a great point that came out in the research which was …
S47
Conversation: 01 — Krishnan outlined the Trump administration’s three-pillar strategy developed over 13 months. The first pillar focuses on…
S48
BILATERAL AAV — Under the UN Charter that was framed at San Francisco in 1945 at the end of World War II, the new organization’s 50 fou…
S49
BETWEEN — DETERMINED to uphold the spirit of reciprocity and promote mutually beneficial trade relations through the esta…
S50
Strategic Partnership and Cooperation Agreement — BELIEVING that this Agreement will create a new climate for economic relations between the Parties and above all for the…
S51
BILATERAL DIPLOMACY: — | Type | Causes …
S52
Discussion Summary: US AI Governance Strategy Under the Trump Administration — The US is deploying both strategic and financial resources to support global AI infrastructure development. This involve…
S53
EU Digital Diplomacy: Geopolitical shift from focus on values to economic security  — The EU emphasises ‘resilient ICT supply chains’ and the use of trusted suppliers. In practice, this means diversifying a…
S54
Parallel Session D3: Supply Chain Disruptions – The Role and Response of NTFCs — The United States is experiencing a pivotal change in its trade policy, prioritising supply chain resilience over previo…
S55
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — Good afternoon. The honoured presence of the Caribbean Development Bank (CDB) at the significant Barbados conference hig…
S56
Keynote Adresses at India AI Impact Summit 2026 — A central theme was the critical importance of building secure, trusted supply chains resistant to coercion. Pichai emph…
S57
How AI Drives Innovation and Economic Growth — And when I say incumbents, those firms that have more than 1,000 employees. In around 2000, 50 % of employees used to w…
S58
Harnessing the potential of artificial intelligence in developing countries — The Economist argues that there are three main reasons for optimism about AI and development: First, the technology is imp…
S59
How AI Drives Innovation and Economic Growth — Jeanette Rodrigues: all around the Bharat Mandapam. So once again, thank you very much for your time th…
S60
Parliamentary Session 5 Parliamentary Exchange Enhancing Digital Policy Practices — Olga Reis: Thank you so much. My name is Olga Reis and I represent the private sector here. I work at Google and I cover…
S61
The Global Power Shift India’s Rise in AI & Semiconductors — Sovereignty involves ensuring that data and applications remain resident within the country and relevant to national con…
S62
Parallel Session A5: Achieving Sustainable and Resilient Transport and Logistics including inSIDS — It acknowledges the importance of technology and infrastructure, the ethical necessity for transparency, and the strateg…
S63
The Geopolitics of Materials: Critical Mineral Supply Chains and Global Competition — And I think that’s the role of governments. In addition, and I want to close with this, as a result of all of this, we r…
S64
Digital Technologies and the Environment: a Synergy for the Future — One of the latest developments are the outcomes of the first meeting of the U.S.-EU Trade and Technology Council (TTC) 1…
S65
Strengthening bilateral technological cooperation: Indian Prime Minister discusses joint projects in US visit — Indian Prime Minister Narendra Modi is currently undertaking a significant state visit to the United States, where he ha…
S66
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Ambassador Sergio Gor emphasized the “limitless potential” of the U.S.-India partnership, noting the strong personal rel…
S67
Keynote Adresses at India AI Impact Summit 2026 — -Strategic partnership between democracies: Multiple speakers emphasized the alliance between the world’s oldest and lar…
S68
Strengthening bilateral technological cooperation: Indian Prime Minister discusses joint projects in US visit — Indian Prime Minister Narendra Modi is currently undertaking a significant state visit to the United States, where he ha…
S69
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — The speaker describes AI as a technology that expands human cognitive capacity, likening its impact to the physical ampl…
S70
Closing Ceremony — This argument positions artificial intelligence as a transformative force rather than merely a technological tool. It su…
S71
AI Governance Dialogue: Presidential address — Ettore Balestrero: On behalf of His Holiness Pope Leo XIV, I would like to extend his cordial greetings to all participa…
S72
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Power is accumulating rapidly in the hands of those at the forefront of AI development. A handful of technology corporat…
S73
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — This comment is insightful because it provides a powerful historical framework for understanding AI’s transformative pot…
S74
AI export rules tighten as the US opens global opportunities — President Trump has signed an Executive Order to promote American leadership in AI exports, marking a significant policy s…
S75
What is it about AI that we need to regulate? — Several sessions raised concerns about foreign-driven agendas in development aid. In Open Forum #67, moderator Alison Gil…
S76
The US government is considering controls on exports of emerging technologies — The US Department of Commerce (DoC) published an Advance notice of proposed rulemaking, asking for public comments on cri…
S77
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
S78
From India to the Global South_ Advancing Social Impact with AI — 60,000 crores is being put in our ITIs. So our ITIs are the grassroots organizations, government ITIs, there’s maybe mo…
S79
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S80
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Applications range from advanced data analytics and automation to augmenting human capabilities in healthcare, agricultu…
S81
Review of AI and digital developments in 2024 — In addition, AI governance regulations tend to be very long. For example, U.S. President’s Executive Ord…
S82
Tariffs and AI top the agenda for US CEOs over the next three years — US CEOs prioritise cost reduction and AI integration amid global economic uncertainty. According to KPMG’s 2025 CEO Outlo…
S83
INTRODUCTION — To effectively pursue the objectives defined in the strategy, it will be essential to define an entity responsible for t…
S84
TIMELINE — The country is preparing to position itself at the forefront of innovation by developing a strategy to integrate A…
S85
Empowering the Ethical Supply Chain: steps to responsible sourcing and circular economy (Lenovo) — By leveraging technology, it is possible to keep the benefits of extraction and manufacturing within the loop. This high…
S86
How Investment Promotion Agencies (IPAs) and trade institutions could leverage digital tools to create sustainable supply chain partnerships’ — The policy aims to increase productivity, economic efficiency, boost national economic growth, and build a civilized soc…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ambassador Sergio Gor
2 arguments · 193 words per minute · 504 words · 156 seconds
Argument 1
Natural partnership and special bilateral relationship emphasizing AI cooperation – Ambassador Sergio Gor
EXPLANATION
The ambassador describes the U.S.–India relationship as a natural partnership grounded in shared values and a long‑standing friendship between the leaders. He highlights AI as a key sector where this collaboration can be deepened over the next three years.
EVIDENCE
He notes the “natural partnership” between the United States’ technology and India’s innovation, the special relationship between the President and Prime Minister, and the President’s strong personal liking for the Indian leader, which he says will make a huge difference for the next three years. He also identifies AI as a focus area for further cooperation. [5-9][16-20][22-24]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gor describes the U.S.–India partnership and the Pax Silica initiative as a ‘positive‑sum alliance of trusted industrial bases,’ emphasizing AI collaboration (S4) and notes the natural partnership in the summit briefing (S1).
MAJOR DISCUSSION POINT
Emphasis on bilateral ties and AI as a strategic collaboration area
AGREED WITH
Sanjay Mehrotra, Dr. Randhir Thakur
DISAGREED WITH
Secretary S. Krishnan
Argument 2
AI revolution is inevitable; nations must adapt and partner on shared democratic values – Ambassador Sergio Gor
EXPLANATION
Gor asserts that the AI revolution is already underway and cannot be ignored, comparing resistance to AI to historical opposition to the Model T. He calls for countries to embrace AI, partner with like‑minded democracies, and use the technology for good.
EVIDENCE
He states that “the AI revolution is here” and that societies that resist will be left behind, using the Model T analogy to illustrate inevitable technological change, and urges India and the United States to lead together as the world’s oldest and largest democracies. [92-104]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He frames AI as an inevitable revolution and calls for democratic partners, echoing the democratic‑values emphasis in the AI‑for‑Democracy discussion (S27) and his remarks on the AI revolution at the summit (S1).
MAJOR DISCUSSION POINT
Inevitability of AI and need for democratic partnership
Sanjay Mehrotra
2 arguments · 132 words per minute · 667 words · 302 seconds
Argument 1
Micron’s R&D, manufacturing investments in India, memory as AI fuel, and Pax Silica’s role in securing supply chains – Sanjay Mehrotra
EXPLANATION
Mehrotra outlines Micron’s extensive R&D presence in India, its $2.75 bn investment in a Gujarat assembly‑test facility, and how memory acts as the fuel for AI. He links these efforts to the Pax Silica initiative, which he says ensures resilient and secure supply chains for AI infrastructure.
EVIDENCE
He congratulates the Pax Silica signing, describes Micron’s U.S. headquarters, patents, Indian R&D facilities, two-nanometer designs, and the Gujarat plant’s investment and production capacity, noting that this complements U.S. manufacturing and advances AI-driven memory design. He states that Pax Silica guarantees supply-chain resiliency. [27-46]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Mehrotra outlines Micron’s $2.75 bn Gujarat assembly‑test facility, the role of memory as AI fuel, and the Pax Silica initiative securing supply chains, as documented in the summit report (S1) and the keynote address (S4).
MAJOR DISCUSSION POINT
Micron’s India investments and Pax Silica’s supply‑chain role
AGREED WITH
Ambassador Sergio Gor, Dr. Randhir Thakur
Argument 2
Micron’s $2.75 bn investment in Gujarat assembly‑test, complementing U.S. manufacturing, and building resilient, secure supply chains – Sanjay Mehrotra
EXPLANATION
Mehrotra emphasizes that the Gujarat facility will assemble and test hundreds of millions of chips, reinforcing Micron’s U.S. manufacturing plan and enhancing AI‑related manufacturing efficiency. This investment is presented as a win‑win partnership that bolsters supply‑chain security.
EVIDENCE
He details the $2.75 bn Gujarat investment, its role in assembly and test operations, how it complements U.S. silicon and advanced packaging plants, and its contribution to AI-driven automation and workflow efficiency. [39-44]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasizes that the Gujarat plant complements Micron’s U.S. silicon and advanced‑packaging operations, reinforcing supply‑chain resilience (S1; S4).
MAJOR DISCUSSION POINT
Strategic Indian manufacturing complementing U.S. operations
Dr. Randhir Thakur
3 arguments · 144 words per minute · 609 words · 252 seconds
Argument 1
India’s semiconductor design capacity, AI‑enabled fab, indigenous packaging, mobile‑phone production, and Pax Silica accelerating momentum – Dr. Randhir Thakur
EXPLANATION
Thakur highlights India’s large engineering talent pool, its 20 % share of global semiconductor design, and new investments including an AI‑enabled fab and indigenous packaging in Assam. He connects these capabilities to the Pax Silica initiative, which he says will further accelerate India’s semiconductor momentum.
EVIDENCE
He cites 1.5 million engineers produced annually, 20 % of global semiconductor design done in India, $25 bn invested in ten factories, the upcoming AI-enabled fab, packaging technology in Northeast Assam for edge chips, partnerships with U.S. firms, and $70 bn mobile-phone production with $30 bn exported. [63-77]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Thakur cites India’s 1.5 m engineers, 20 % share of global chip design, $25 bn in ten factories, an upcoming AI‑enabled fab, indigenous packaging in Assam, and links these to the Pax Silica initiative (S1).
MAJOR DISCUSSION POINT
India’s growing semiconductor ecosystem and Pax Silica’s boost
AGREED WITH
Ambassador Sergio Gor, Sanjay Mehrotra
Argument 2
21st‑century reliance on compute and critical minerals; importance of secure material supply for AI hardware – Dr. Randhir Thakur
EXPLANATION
Thakur argues that while the 20th century was powered by oil and steel, the 21st century depends on compute and the minerals that enable AI hardware. Secure access to these minerals is therefore a strategic priority.
EVIDENCE
He references the undersecretary’s comment that the 20th century ran on oil and steel, then states that the 21st century runs on compute and the minerals that feed it, emphasizing the timeliness of silicon and other critical materials. [59-62]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He states ‘the 21st century runs on compute and the minerals that feed it,’ highlighting strategic mineral security (S1).
MAJOR DISCUSSION POINT
Strategic importance of compute and minerals for AI
Argument 3
Data‑center hardware drives AI; minerals and compute are the new strategic resources – Dr. Randhir Thakur
EXPLANATION
Thakur explains that AI’s growth is hardware‑driven, requiring data‑center infrastructure built on compute power and critical minerals. He positions these resources as the new strategic assets of the 21st century.
EVIDENCE
He notes that AI was known long ago but lacked hardware, that compute was insufficient, and that today minerals and compute are the strategic resources, echoing his earlier point about the 21st-century reliance on compute and minerals. [56-62]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He notes that AI’s growth is hardware‑driven, requiring data‑center infrastructure built on compute and critical minerals (S1).
MAJOR DISCUSSION POINT
Hardware and mineral foundations of AI
Secretary S. Krishnan
2 arguments · 139 words per minute · 145 words · 62 seconds
Argument 1
Need to align with shared values, diversify supply chains, and democratize technology through trusted partnerships – Secretary S. Krishnan
EXPLANATION
Krishnan calls for alignment among nations that share democratic values, stressing the importance of diversified, trusted supply chains to avoid dependence on a single source. He frames this as a way to democratize technology and ensure it serves all.
EVIDENCE
He urges alignment on shared values, warns against becoming enslaved to one dependence, emphasizes trusted partners and value chains, and says democratizing technology lets more people into the room. [83-89]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Krishnan calls for alignment on democratic values, diversified trusted supply chains, and democratizing technology, echoing his three‑pillar strategy (S29), statements on partnership and democratization (S14), and mechanisms for trusted supply chains discussed in the panel (S28).
MAJOR DISCUSSION POINT
Value‑based alignment and supply‑chain diversification
DISAGREED WITH
Ambassador Sergio Gor
Argument 2
Avoiding over‑dependence on a single source; fostering trusted partners for resilient value chains – Secretary S. Krishnan
EXPLANATION
Krishnan reiterates that nations must not rely on a single supplier and should build trusted partnerships to create resilient and secure value chains, especially in the context of pandemic and geopolitical upheavals.
EVIDENCE
He mentions lessons learned from the pandemic and geopolitical upheavals, stressing the need for trusted partners and trusted value chains so technology can work for all. [83-87]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He stresses avoiding reliance on a single source and building trusted partners, as outlined in his pandemic‑learned lessons and three‑pillar framework (S29) and reinforced in the data‑sovereignty panel (S28).
MAJOR DISCUSSION POINT
Resilience through diversified trusted partners
Jacob Helberg
2 arguments · 156 words per minute · 240 words · 92 seconds
Argument 1
Framing the macro‑level question: how AI and supply‑chain re‑organisation affect global security and business – Jacob Helberg
EXPLANATION
Helberg asks the panel to consider the broader implications of AI and the re‑organisation of supply chains on global security and business, inviting each panelist to share their perspective.
EVIDENCE
He poses the macro-level question to the audience, stating the context of the AI summit and the changing global economy driven by supply-chain re-organisation and AI. [78-81]
MAJOR DISCUSSION POINT
Macro perspective on AI and supply‑chain impacts
Argument 2
Seeking a macro‑level message on how AI reshapes economies and societies – Jacob Helberg
EXPLANATION
Helberg requests a top‑level message from the panel about the impact of AI on economies and societies, aiming to capture a unifying theme for the audience.
EVIDENCE
He asks for a macro-level message from the panelists, referencing the AI revolution and supply-chain changes. [78-81]
MAJOR DISCUSSION POINT
Request for overarching AI message
Michael Kratsios
3 arguments · 148 words per minute · 375 words · 151 seconds
Argument 1
AI export program’s financing tools (IDFC, EXIM, etc.) to support partner countries in building secure AI stacks – Michael Kratsios
EXPLANATION
Kratsios outlines a suite of financing mechanisms—including the U.S. International Development Finance Corporation, Export‑Import Bank, Trade and Development Agency, Millennium Challenge Corporation, and a new World Bank fund—to help partner nations acquire the American AI stack.
EVIDENCE
He lists the specific agencies and new AI-focused programs that will provide financing to overcome obstacles for importing the U.S. AI stack. [153-155]
MAJOR DISCUSSION POINT
Financial instruments for AI export support
DISAGREED WITH
William Kimmett
Argument 2
AI as a new frontier that can unlock prosperity, knowledge, and human progress when used responsibly – Michael Kratsios
EXPLANATION
Kratsios frames AI as a historic revolutionary technology that can unlock new knowledge, economic prosperity, and human potential if deployed responsibly, likening it to past transformative inventions.
EVIDENCE
He states that the American AI stack can lead to new economic and social benefits, unlock knowledge, and that responsible use will benefit all of mankind. [159-162]
MAJOR DISCUSSION POINT
Potential of AI as a transformative frontier
Argument 3
Creation of a national‑champions initiative, AI stack export, and Tech Corps to embed U.S. expertise abroad – Michael Kratsios
EXPLANATION
Kratsios describes the U.S. AI Export Program, which includes a national‑champions initiative, an executive order to export the full AI stack, and the launch of a Tech Corps that places U.S. technical volunteers with partner countries to support AI deployment.
EVIDENCE
He mentions the national champions initiative, the AI stack export, and the Tech Corps that embeds volunteer technical talent for last-mile AI application support. [150-162]
MAJOR DISCUSSION POINT
Program architecture for AI export and capacity building
William Kimmett
3 arguments · 173 words per minute · 779 words · 269 seconds
Argument 1
Building AI infrastructure (data‑centers, energy) as a priority for national security and economic stability – William Kimmett
EXPLANATION
Kimmett identifies AI infrastructure—specifically energy and data‑center capacity—as a top priority for ensuring national security and economic stability, and notes ongoing efforts to build this infrastructure.
EVIDENCE
He lists the three priorities: building infrastructure (energy, data-centers), fostering innovation, and promoting partnership, emphasizing the first priority of AI infrastructure. [214-217]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kimmett highlights AI infrastructure—energy and data‑centers—as critical for security and economic stability, aligning with the summit’s view that AI is hardware‑driven (S1) and UNCTAD’s analysis of AI’s impact on digital supply chains (S30).
MAJOR DISCUSSION POINT
Infrastructure as a security and economic priority
DISAGREED WITH
Michael Kratsios
Argument 2
Executive order mandating industry‑led consortia, request for information, and upcoming public call for proposals – William Kimmett
EXPLANATION
Kimmett explains that an executive order requires industry‑led consortia to propose full‑stack AI offerings, that the Commerce Department issued a request for information, and that a public call for proposals will soon follow.
EVIDENCE
He details the executive order, the request for information, the industry response, and the upcoming public call for proposals to shape the AI export program. [230-239]
MAJOR DISCUSSION POINT
Policy process for AI consortia formation
DISAGREED WITH
Mr. Sriram Krishnan
Argument 3
Health and education are priority sectors for AI deployment in emerging markets – William Kimmett
EXPLANATION
Kimmett highlights health and education as key sectors where AI can make a transformative impact in emerging economies, emphasizing the potential for AI‑driven solutions to improve public services.
EVIDENCE
He states that working with ministries of health and education to develop AI solutions that revolutionize those sectors is the most exciting part of the program. [300-301]
MAJOR DISCUSSION POINT
Sectoral focus on health and education
Mr. Sriram Krishnan
3 arguments · 117 words per minute · 1491 words · 762 seconds
Argument 1
Optimism about youth, education, and AI‑driven tutoring that can transform learning for all ages – Mr. Sriram Krishnan
EXPLANATION
Krishnan expresses enthusiasm for AI‑enabled education, describing AI tutors that can teach anyone, anytime, in any language, and sees this as a source of hope for the future.
EVIDENCE
He recounts meeting a student using AI for tutoring, imagines tireless multilingual teachers, and emphasizes the transformative potential for learners of all ages. [307-314]
MAJOR DISCUSSION POINT
AI as a catalyst for universal education
Argument 2
Sovereign AI kits allow countries to select stack components (chips, models, agents) that fit their context – Mr. Sriram Krishnan
EXPLANATION
Krishnan explains that sovereign AI kits let nations choose which parts of the AI stack—such as chips, GPUs, models, or agents—to adopt, enabling tailored AI capabilities aligned with local needs.
EVIDENCE
He outlines the stack layers (chips, GPUs/TPUs, model layer, agents/applications) and notes that countries can pick the components that suit them. [278-283]
MAJOR DISCUSSION POINT
Customizable AI stack for national sovereignty
DISAGREED WITH
William Kimmett
Argument 3
AI‑enabled education tools can act as tireless, multilingual teachers, reshaping learning worldwide – Mr. Sriram Krishnan
EXPLANATION
Krishnan reiterates that AI tutors can provide continuous, language‑adapted instruction to learners of any age, potentially revolutionizing global education.
EVIDENCE
He describes AI teachers that never tire, can speak local languages, and answer any question, highlighting the transformative impact on learning. [307-314]
MAJOR DISCUSSION POINT
AI tutors as universal educators
Brendan Remington
3 arguments · 195 words per minute · 603 words · 185 seconds
Argument 1
AI’s sweeping influence on personal and work life creates high demand; the challenge is delivering it effectively – Brendan Remington
EXPLANATION
Remington notes that AI is one of the few technologies that profoundly changes both personal and professional life, generating massive demand, and stresses the importance of efficiently delivering AI solutions to both buyers and providers.
EVIDENCE
He comments on the sweeping nature of AI, the high hunger for it, and the need to help both companies and buyers deliver AI effectively. [326-334]
MAJOR DISCUSSION POINT
High demand for AI and delivery challenges
Argument 2
Design of consortia to be simple (size categories) yet flexible, facilitating both buyers and sellers across verticals – Brendan Remington
EXPLANATION
Remington describes the consortia model as using simple size categories (small, medium, large) while also accommodating niche needs, aiming for simplicity and elegance for both governments and companies.
EVIDENCE
He explains the approach of offering t-shirt size categories, avoiding excessive permutations, and ensuring solutions are simple yet flexible for buyers and sellers. [251-259]
MAJOR DISCUSSION POINT
Simplified yet adaptable consortia structure
Argument 3
Vertical use cases (agriculture, manufacturing, maritime) require a one‑stop‑shop approach for AI solutions – Brendan Remington
EXPLANATION
Remington argues that AI applications span many sectors and that a centralized, one‑stop‑shop model would help buyers discover and acquire solutions across verticals efficiently.
EVIDENCE
He lists agriculture, manufacturing, maritime among many verticals and suggests organizing offerings by verticals as a useful one-stop-shop. [304-306]
MAJOR DISCUSSION POINT
One‑stop‑shop for sectoral AI solutions
Moderator
1 argument · 14 words per minute · 416 words · 1777 seconds
Argument 1
Moderator’s role in highlighting gratitude, setting the stage, and emphasizing the significance of the partnership – Moderator
EXPLANATION
The moderator repeatedly thanks participants, welcomes speakers, and frames the importance of the AI summit and the partnership between the United States and India.
EVIDENCE
He delivers multiple thank-you statements, introduces Michael Kratsios, and emphasizes the significance of the panel on AI exports and the broader partnership. [130-148]
MAJOR DISCUSSION POINT
Facilitating dialogue and acknowledging partnership
Agreements
Agreement Points
AI revolution is inevitable and must be embraced by democratic partners
Speakers: Ambassador Sergio Gor, Michael Kratsios
AI revolution is inevitable; nations must adapt and partner on shared democratic values — Ambassador Sergio Gor
AI as a new frontier that can unlock prosperity, knowledge, and human progress when used responsibly — Michael Kratsios
Both speakers assert that the AI revolution is already underway, cannot be ignored, and should be pursued by democracies that share values, positioning AI as a historic transformative technology. [92-104][159-162]
POLICY CONTEXT (KNOWLEDGE BASE)
The U.S. AI Action Plan frames AI as a strategic imperative for democratic nations to maintain leadership and openness, underscoring the inevitability of the AI revolution and the need for coordinated democratic engagement [S43].
US‑India partnership is a natural, strategic foundation for AI and semiconductor collaboration
Speakers: Ambassador Sergio Gor, Sanjay Mehrotra, Dr. Randhir Thakur
Natural partnership and special bilateral relationship emphasizing AI cooperation – Ambassador Sergio Gor
Micron’s R&D, manufacturing investments in India, memory as AI fuel, and Pax Silica’s role in securing supply chains – Sanjay Mehrotra
India’s semiconductor design capacity, AI‑enabled fab, indigenous packaging, mobile‑phone production, and Pax Silica accelerating momentum – Dr. Randhir Thakur
All three highlight a strong, natural US-India relationship that leverages India’s growing semiconductor ecosystem and AI potential, with concrete investments and initiatives (e.g., Pax Silica) reinforcing the partnership. [5-9][16-20][22-24][27-46][63-77]
Need for diversified, trusted supply chains to ensure resilience and security
Speakers: Secretary S. Krishnan, Sanjay Mehrotra
Need to align with shared values, diversify supply chains, and democratize technology through trusted partnerships — Secretary S. Krishnan
Micron’s India investments and Pax Silica’s role in securing supply chains — Sanjay Mehrotra
Both stress that resilient, secure supply chains built on trusted partners and diversified sources are essential for technology and AI infrastructure, with Pax Silica cited as a mechanism to embed such resilience. [83-89][45-46]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy shifts in the EU and the United States prioritize resilient ICT and critical-material supply chains, advocating diversification to reduce dependence on single sources and enhance security [S53][S54][S64].
Building AI infrastructure (energy, data centers) is a top priority for security and economic stability
Speakers: William Kimmett, Dr. Randhir Thakur
Building AI infrastructure (data‑centers, energy) as a priority for national security and economic stability — William Kimmett
Data‑center hardware drives AI; minerals and compute are the new strategic resources — Dr. Randhir Thakur
Both identify the physical infrastructure that powers AI (energy, data centers, and the underlying compute and mineral resources) as critical for national security and economic growth. [214-217][56-62]
POLICY CONTEXT (KNOWLEDGE BASE)
The Trump administration’s three-pillar AI strategy places data-center and grid capacity expansion at the forefront, and U.S. financing mechanisms support overseas AI infrastructure projects, linking infrastructure building to security and stability [S47][S52].
AI can transform education and health, especially in emerging markets
Speakers: William Kimmett, Mr. Sriram Krishnan
Health and education are priority sectors for AI deployment in emerging markets — William Kimmett
Optimism about youth, education, and AI‑driven tutoring that can transform learning for all ages — Mr. Sriram Krishnan
Both see AI’s greatest societal impact in improving health and education services, with emphasis on emerging economies and AI-enabled tutoring as a catalyst for universal learning. [300-301][307-314]
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of AI’s impact on the Global South note its potential to improve education and health outcomes, with initiatives such as AI assistants for citizens and private-sector programs targeting emerging markets [S58][S60][S45].
Similar Viewpoints
Both frame AI as an unavoidable, historic revolution that should be harnessed by democracies for the common good. [92-104][159-162]
Speakers: Ambassador Sergio Gor, Michael Kratsios
AI revolution is inevitable; nations must adapt and partner on shared democratic values — Ambassador Sergio Gor
AI as a new frontier that can unlock prosperity, knowledge, and human progress when used responsibly — Michael Kratsios
Both emphasize India’s rapidly expanding semiconductor ecosystem, the strategic role of AI‑focused fabs and packaging, and the reinforcing effect of the Pax Silica initiative. [27-46][63-77]
Speakers: Sanjay Mehrotra, Dr. Randhir Thakur
Micron’s R&D, manufacturing investments in India, memory as AI fuel, and Pax Silica’s role in securing supply chains — Sanjay Mehrotra
India’s semiconductor design capacity, AI‑enabled fab, indigenous packaging, mobile‑phone production, and Pax Silica accelerating momentum — Dr. Randhir Thakur
Both focus on creating streamlined, user‑friendly mechanisms (infrastructure or consortia) that enable broad adoption of AI across sectors. [214-217][251-259]
Speakers: William Kimmett, Brendan Remington
Building AI infrastructure (data‑centers, energy) as a priority for national security and economic stability — William Kimmett
Design of consortia to be simple (size categories) yet flexible, facilitating both buyers and sellers across verticals — Brendan Remington
Unexpected Consensus
Strategic importance of material and supply‑chain independence
Speakers: Dr. Randhir Thakur, Secretary S. Krishnan
21st‑century reliance on compute and critical minerals; importance of secure material supply for AI hardware — Dr. Randhir Thakur
Avoiding over‑dependence on a single source; fostering trusted partners for resilient value chains — Secretary S. Krishnan
A technical expert stressing minerals and compute as strategic resources and a policy official urging diversification of partners converge on the need to secure material and supply-chain independence, a link not obvious from their distinct domains. [59-62][83-87]
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on critical-mineral supply chains and the need for diversified sources to mitigate coercion risk reinforce the strategic value of material and supply-chain independence [S53][S63][S64].
Overall Assessment

The panel shows strong convergence on several fronts: the inevitability of the AI revolution, the centrality of the US‑India partnership, the necessity of resilient and diversified supply chains, the priority of AI‑related infrastructure, and the societal benefits of AI in health and education. These shared positions indicate a high level of consensus that can translate into coordinated policy actions, joint investments, and collaborative programs such as Pax Silica and the AI Export Initiative.

High consensus across technical, policy, and commercial perspectives, suggesting that future initiatives are likely to receive broad support and coordinated implementation.

Differences
Different Viewpoints
Basis of bilateral partnership – personal diplomatic rapport vs. shared democratic values and diversified supply chains
Speakers: Ambassador Sergio Gor, Secretary S. Krishnan
Natural partnership and special bilateral relationship emphasizing AI cooperation – Ambassador Sergio Gor
Need to align with shared values, diversify supply chains, and democratize technology through trusted partnerships – Secretary S. Krishnan
Ambassador Gor stresses that the U.S.-India partnership is driven by the personal liking of the President for the Prime Minister and a historic friendship, which he says will make a huge difference for the next three years [18-20]. Secretary Krishnan, by contrast, calls for alignment on shared democratic values, diversification of supply chains, and democratization of technology, warning against dependence on a single source [83-89]. The two speakers agree that cooperation is needed but disagree on whether personal diplomatic rapport or institutional value-based alignment should be the foundation of the partnership.
POLICY CONTEXT (KNOWLEDGE BASE)
U.S.-India dialogue stresses shared democratic values and supply-chain diversification as structural foundations of the partnership, reducing reliance on personal diplomatic rapport [S61][S65].
Priority focus for U.S. AI strategy – exporting the AI stack and financing partners vs. building domestic AI infrastructure (energy, data centers) for security and economic stability
Speakers: Michael Kratsios, William Kimmett
AI export program’s financing tools (IDFC, EXIM, etc.) to support partner countries in building secure AI stacks – Michael Kratsios
Building AI infrastructure (data‑centers, energy) as a priority for national security and economic stability – William Kimmett
Kratsios outlines a suite of financing mechanisms and a Tech Corps to help partner nations acquire the American AI stack, emphasizing export and capacity-building abroad [150-162]. Kimmett, however, lists building AI infrastructure (energy and data-center capacity) as the top priority for national security and economic stability, focusing on domestic investment rather than export [214-217]. Both aim to strengthen AI, but they diverge on whether the primary effort should be outward-focused export support or inward-focused infrastructure development.
POLICY CONTEXT (KNOWLEDGE BASE)
U.S. policy debates juxtapose the AI export program highlighted in the AI Action Plan with domestic infrastructure investments emphasized in the three-pillar strategy and AI financing initiatives abroad [S41][S42][S43][S47][S52].
Degree of flexibility in the AI export program – a U.S.–centric foundation versus sovereign AI kits that let countries pick stack components
Speakers: William Kimmett, Mr. Sriram Krishnan
Executive order mandating industry‑led consortia, request for information, and upcoming public call for proposals – William Kimmett
Sovereign AI kits allow countries to select stack components (chips, models, agents) that fit their context – Mr. Sriram Krishnan
Kimmett describes a program built on the American AI tech stack that will provide a foundation for partners, emphasizing a standardized, U.S.-centric approach [272-276]. Krishnan later explains that sovereign AI kits let nations choose which parts of the stack (chips, GPUs, models, agents) to adopt, suggesting a more modular, country-driven model [270-283]. The disagreement lies in how much autonomy partner countries should have versus how much the U.S. framework should dictate.
POLICY CONTEXT (KNOWLEDGE BASE)
The India AI Impact Summit 2026 discussion of “AI sovereignty” and reports on sovereign AI advocate a flexible, modular export approach rather than a monolithic U.S.-centric stack [S41][S42].
Unexpected Differences
Emphasis on personal presidential preference as a decisive factor in bilateral cooperation
Speakers: Ambassador Sergio Gor, Secretary S. Krishnan
Natural partnership and special bilateral relationship emphasizing AI cooperation – Ambassador Sergio Gor
Need to align with shared values, diversify supply chains, and democratize technology through trusted partnerships – Secretary S. Krishnan
Gor’s claim that “the President really, really, really likes the Prime Minister” will make a huge difference for the next three years [18-20] is an unusual focus on personal diplomatic sentiment. Krishnan’s call for alignment on shared democratic values and diversified, trusted supply chains [83-89] reflects a more institutional, policy-driven approach. The contrast between personal rapport and systemic value-based partnership was not anticipated given the high-level diplomatic context.
Overall Assessment

The discussion revealed three main axes of disagreement: (1) the underlying rationale for the U.S.–India partnership (personal diplomatic ties vs. shared democratic values and supply‑chain diversification); (2) the strategic priority for U.S. AI policy (export‑oriented financing and technology transfer versus domestic infrastructure building for security); and (3) the level of autonomy afforded to partner countries in the AI stack (U.S.–centric foundation versus sovereign, modular kits). While all speakers concur on the importance of AI cooperation, they diverge on the mechanisms and philosophical basis for that cooperation.

Moderate to high – the disagreements are substantive, touching on policy orientation, partnership philosophy, and program design. They suggest that achieving consensus on a joint AI agenda will require reconciling personal diplomatic narratives with institutional frameworks, balancing export ambitions with domestic security needs, and agreeing on the degree of partner autonomy. Without such alignment, implementation of initiatives like Pax Silica or the AI Export Program may face friction between political, economic, and technical priorities.

Partial Agreements
Both speakers agree that deeper U.S.–India AI collaboration is essential for mutual benefit. However, Gor emphasizes the personal rapport between leaders as the engine of cooperation, while Krishnan stresses institutional alignment on democratic values and supply‑chain diversification as the path forward [5-9][83-89].
Speakers: Ambassador Sergio Gor, Secretary S. Krishnan
Natural partnership and special bilateral relationship emphasizing AI cooperation – Ambassador Sergio Gor
Need to align with shared values, diversify supply chains, and democratize technology through trusted partnerships – Secretary S. Krishnan
Both speakers view the Pax Silica initiative as a catalyst for a resilient semiconductor ecosystem. Mehrotra focuses on Micron’s corporate investment and memory supply, while Thakur highlights national design talent and new fab capacity. They share the goal of strengthening the supply chain but differ on whether corporate‑level investment or national‑level design capacity is the primary lever [27-46][63-77].
Speakers: Sanjay Mehrotra, Dr. Randhir Thakur
Micron’s R&D, manufacturing investments in India, memory as AI fuel, and Pax Silica’s role in securing supply chains – Sanjay Mehrotra
India’s semiconductor design capacity, AI‑enabled fab, indigenous packaging, and Pax Silica accelerating momentum – Dr. Randhir Thakur
Takeaways
Key takeaways
The United States and India are deepening AI and semiconductor collaboration, highlighted by the Pax Silica initiative and a shared vision of a natural, values‑based partnership.
Micron’s $2.75 bn investment in a Gujarat assembly‑test facility will complement U.S. manufacturing and strengthen a resilient, secure semiconductor supply chain.
India’s growing design talent, AI‑enabled fab, indigenous packaging, and mobile‑phone production are key assets that accelerate the partnership.
Both governments stress the need to diversify supply chains, avoid over‑reliance on a single source, and build trusted, democratic partnerships.
AI is portrayed as an inevitable, transformative revolution; embracing it responsibly is framed as a strategic priority for both nations.
The U.S. AI Export Program will create a “national‑champions” framework, financing mechanisms (IDFC, EX‑IM, etc.), and a Tech Corps to help partner countries adopt the American AI stack.
Sovereign AI kits will let countries select stack components (chips, models, agents) that fit their policy and security needs.
Priority use‑case sectors identified include health, education, agriculture, manufacturing, and maritime, with a particular emphasis on AI‑driven education tools for all ages.
Optimism is expressed about youth engagement, the energy of the Indian ecosystem, and the broader societal benefits of AI when aligned with shared democratic values.
Resolutions and action items
Signing of the Pax Silica agreement between the U.S. and India to formalize AI and semiconductor cooperation.
Micron to proceed with its $2.75 bn investment in the Sanand, Gujarat assembly‑test facility, assembling and testing hundreds of millions of chips annually.
U.S. Department of Commerce to finalize the AI Export Program, issue a public call for industry‑led consortia proposals, and launch the AI Agent Standards Initiative.
Launch of the U.S. Tech Corps to embed technical volunteers with partner countries for last‑mile AI deployment.
Commitment from Ambassador Sergio Gor to focus on AI collaboration over the next three years.
Invitation for startups and established firms to submit proposals via the designated website for participation in the export consortia.
U.S. officials (Kimmett, Remington) to continue outreach with emerging‑market ministries (health, education) to pilot AI solutions.
Unresolved issues
Detailed operational framework for the Pax Silica supply‑chain security mechanisms remains unspecified.
Exact timelines for the public call for AI export consortia proposals and subsequent award processes were not provided.
How data sovereignty, model ownership, and compliance with local regulations will be managed within sovereign AI kits was not fully addressed.
Specific coordination mechanisms between U.S. and Indian R&D teams for next‑generation memory designs and AI‑specific chips were mentioned but not detailed.
Funding allocation criteria and eligibility for partner‑country financing through IDFC, EX‑IM, etc., were not clarified.
Mechanisms for monitoring and enforcing the “trusted partnership” principle to avoid over‑dependence on any single source were not defined.
Suggested compromises
Balancing U.S. leadership in the AI stack with partner countries’ desire for sovereignty by offering modular, selectable components rather than a monolithic solution.
Providing both simple “t‑shirt‑size” consortia options for ease of adoption and more customized niche solutions for specialized needs.
Encouraging diversified supply chains by jointly investing in Indian manufacturing while maintaining complementary U.S. production capacity.
Aligning AI deployment goals with shared democratic values, allowing partners to adopt technology while respecting local policy preferences.
Thought Provoking Comments
The magic touch is that special relationship between our two leaders. Our president really, really, really likes the prime minister, and that makes a huge difference for the next three years.
Highlights how personal diplomatic rapport—not just policy—can accelerate technology collaboration, framing the U.S.-India partnership as a function of leadership chemistry.
Shifted the conversation from technical details to geopolitical dynamics, prompting other speakers (e.g., Secretary Krishnan and Dr. Thakur) to reference the importance of trusted partnerships and value‑chain independence.
Speaker: Ambassador Sergio Gor
If AI is the growth engine of the digital economy, then memory is the fuel.
Uses a vivid metaphor to explain the foundational role of semiconductor memory in AI, making a complex technical dependency accessible to a broader audience.
Steered the discussion toward concrete supply‑chain implications, leading Jacob Helberg to ask about security of Micron’s supply chain and prompting further elaboration on manufacturing investments in India.
Speaker: Sanjay Mehrotra
The 20th century ran on oil and steel. The 21st century runs on compute and the minerals that feed it.
Frames the current era as a materials‑driven compute economy, linking geopolitics, resource security, and semiconductor design capacity in India.
Introduced a macro‑level perspective that broadened the dialogue beyond bilateral projects to global strategic competition, influencing later remarks about AI sovereignty and the need for diversified, secure supply chains.
Speaker: Dr. Randhir Thakur
We need to align and ally on lines which really work for people who share values… and ensure we do not become enslaved to just one dependence.
Calls for a values‑based, multi‑partner approach to supply‑chain resilience, directly addressing concerns about over‑reliance on any single nation.
Reinforced the earlier point about the special U.S.–India relationship while expanding the narrative to include broader coalition‑building, setting the stage for the AI Export Program discussion.
Speaker: Secretary S. Krishnan
The AI revolution is here. People can pretend it’s not… When Ford introduced the Model T, the first protesters were horse‑and‑buggy drivers. You can’t go back to a horse and buggy.
Uses historical analogy to underscore inevitability of AI adoption and to pre‑empt resistance, framing AI as a transformative force akin to past industrial revolutions.
Energized the panel’s tone, prompting participants to speak about proactive partnership and to stress urgency in building AI infrastructure, culminating in the closing optimism about democratic values.
Speaker: Ambassador Sergio Gor
The American AI Export Program exists to share the AI stack so that sovereign infrastructure, data, models, and policies can be built under each country’s control, turning AI into a tool of diplomacy and development.
Articulates a comprehensive policy vision that blends technology transfer, economic development, and diplomatic strategy, introducing the concept of “AI sovereignty” as a cornerstone of U.S. foreign policy.
Created a turning point from bilateral anecdotes to a structured programmatic framework, leading the subsequent panel (Krishnan, Kimmett, Remington) to discuss consortia, use‑cases, and how countries can engage with the export initiative.
Speaker: Michael Kratsios
We want to offer choices—t‑shirt sizes of small, medium, large—so that both large buyers and small startups can easily navigate the AI stack, while still accommodating niche, highly customized solutions.
Translates the abstract notion of “AI sovereignty” into a practical, user‑friendly model, addressing concerns about complexity and accessibility for diverse stakeholders.
Guided the conversation toward implementation details of the export program, prompting questions from Sriram Krishnan about how founders should engage and reinforcing the theme of simplicity and inclusivity.
Speaker: Brendan Remington
Overall Assessment

The discussion was driven forward by a handful of high‑impact remarks that repeatedly shifted the focus from surface‑level announcements to deeper strategic themes. Ambassador Gor’s emphasis on personal diplomatic chemistry and the inevitability of AI set a political and historical context, while Sanjay Mehrotra’s fuel analogy and Dr. Thakur’s compute‑economy framing grounded the conversation in technical realities. Secretary Krishnan’s call for value‑based, multi‑partner supply‑chain resilience broadened the scope to global coalition‑building. Michael Kratsios then crystallized these ideas into a concrete policy – the American AI Export Program – introducing “AI sovereignty” as a diplomatic tool. Subsequent comments from the trade officials translated this vision into actionable mechanisms, ensuring the dialogue moved from vision to implementation. Collectively, these pivotal comments redirected the panel’s tone, deepened the analytical layer, and shaped a narrative that linked geopolitical relationships, technological dependencies, and policy frameworks into a cohesive roadmap for U.S.–India AI collaboration.

Follow-up Questions
How will the AI export program consortia operate, what are the eligibility criteria and steps for countries and companies to participate?
Clarifying the mechanics of the consortia is essential for stakeholders to engage effectively with the program.
Speaker: Sriram Krishnan, Brendan Remington
What are the specific components of a sovereign AI kit that countries can adopt, and how can national champions build on the American AI stack?
Understanding sovereign AI capabilities will help nations design policies and investments that align with the export program.
Speaker: Sriram Krishnan, William Kimmett
What metrics and frameworks will be used to assess supply‑chain resiliency and security for Micron’s operations in India?
Defining measurable resiliency criteria is crucial to ensure the long‑term security of critical memory and storage supply chains.
Speaker: Sanjay Mehrotra
How does heavy data‑center investment translate to edge‑technology performance (smartphones, connected vehicles) in emerging markets, and what data supports this connection?
Empirical evidence is needed to validate the claimed synergy between data‑center capacity and edge device innovation.
Speaker: Dr. Randhir Thakur
What are the most promising AI use cases in health and education sectors in emerging markets, and how can they be scaled effectively?
Identifying high‑impact applications will guide resource allocation and partnership development for the export program.
Speaker: William Kimmett
How can AI technologies be integrated into U.S. government processes to improve efficiency and supply‑chain analysis?
Exploring internal government adoption will ensure the U.S. benefits from the same AI advances it exports.
Speaker: William Kimmett
What are the details and expected impact of the AI standards initiative announced by the White House?
Understanding the standards framework is key for industry alignment and for building trust in AI deployments abroad.
Speaker: Michael Kratsios
What has been the effect of rescinding the Biden diffusion rule on semiconductor access for countries like India?
Assessing policy outcomes will inform future export‑control decisions and partnership strategies.
Speaker: Sriram Krishnan
What is the timeline, capacity, and strategic role of the AI‑enabled fab and indigenous packaging technology in Assam for automotive edge chips?
Specific production details are needed to gauge India’s ability to meet domestic and export demand for edge‑focused semiconductors.
Speaker: Dr. Randhir Thakur
How will AI‑driven automation be implemented in Micron’s Indian assembly and test operations, and what efficiency gains are expected?
Operational details will help evaluate the real‑world impact of AI on manufacturing productivity and cost.
Speaker: Sanjay Mehrotra
What mechanisms will ensure AI ethics, data privacy, and transparency in the AI export program for partner countries?
Safeguards are essential to maintain trust and prevent misuse of exported AI technologies.
Speaker: Brendan Remington
How can early‑stage startups (Series A/B) effectively engage with the AI export program and connect with potential buyers?
Providing clear guidance will enable smaller innovators to benefit from the program and broaden the ecosystem.
Speaker: Brendan Remington

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

The Foundation of AI Democratizing Compute Data Infrastructure

The Foundation of AI Democratizing Compute Data Infrastructure

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel opened by highlighting that AI democratization is hampered by limited access to compute and skewed data, with over 80 % of global datasets concentrated in high-income countries and less than 2 % in sub-Saharan Africa, creating a stark gap that must be addressed now [5-11].


Panelists identified different primary obstacles: the sheer breadth of undocumented African languages makes data collection a massive task [32-33]; lack of open, usable models and AI literacy are seen as more critical than raw infrastructure, since hardware can improve over time while model access remains essential [34-37][38-40]; and the concentration of digitized data in the developed world further entrenches inequities, a point reinforced by calls for open-weight, open-source models and federated learning to let regions contribute without relinquishing data ownership [41-44].


Several solutions were proposed. Digital public infrastructure (DPI) must be trusted, interoperable, reusable and give people agency, with a federated rather than centralized design to preserve data sovereignty while enabling shared AI development [101-108][117-122]. Community-driven initiatives such as Masakhane demonstrate that participatory data collection and gender-responsive projects build trust and ownership, while talent development and open-model ecosystems are deemed vital for sustainable innovation [158-166][173-180][300-304].


Regarding investment, Sanjay suggested directing funds toward building DPI systems that give citizens control of their data, and Sangbu emphasized creating concrete use cases in agriculture, health, education and government services to inspire low-income users and change mindsets [289-298]. Saurabh added that strengthening AI capability and developing domain-specific niche models can reduce compute demands [300-303]. Yann warned that today’s compute-heavy LLM training is a temporary phase and that the next AI revolution will focus on world-models that understand real-world sensory data, a shift that will require academic research support and new funding mechanisms [222-236][267-274].


Overall, the discussion concluded that democratizing AI will require coordinated investment in data sovereignty, open models, community participation, talent, and targeted use cases, with a clear signal that progress depends on both technical breakthroughs and inclusive governance structures.


Keypoints


Major discussion points


Data and compute inequities hinder AI democratization.


The panel highlighted that most global datasets are concentrated in high-income countries, with Africa receiving less than 2 % of the data, and that access to computing power and large-scale data remains a major bottleneck for low-income regions [5][7-10][38].


Open-source models, federated learning, and new architectures can lower barriers.


Yann LeCun argued that releasing top-performing open-weight models and using federated learning to keep data local are essential steps, while also noting that the current compute-intensive LLM era is temporary and that research on smaller, smarter models is already underway [41-44][65-71][117-119].


Digital public infrastructure (DPI) is key to trustworthy, sovereign AI ecosystems.


Saurabh Garg described DPI as needing trust, interoperability, and agency for users; Sanjay Jain explained how consent-based data layers and open-source ID platforms (e.g., MOSIP) enable countries to build their own AI-ready systems without creating new dependencies [101-108][128-138][205-214].


Community-driven language initiatives illustrate a participatory path forward.


Chenai Chair emphasized the sheer number of African languages and the need to document them, citing Masakhane’s grassroots, multilingual data collection, gender-responsive projects, and local ownership models as examples of building trusted data infrastructure [32-33][158-169][174-179].


A shift from “knowledge-storage” LLMs to world-model, intelligence-focused AI will change compute demands.


Yann LeCun explained that today’s massive LLMs are a temporary solution for storing facts, whereas future AI will learn from multimodal, real-world data (world models) and become more intelligent with potentially lower training compute, though inference may remain costly [65-69][222-236][244-252].


Overall purpose / goal


The discussion aimed to diagnose the structural barriers that prevent low- and middle-income countries from both consuming and building AI, and to explore concrete strategies-ranging from open models and federated learning to DPI, community-led data collection, and talent development-that could democratize AI compute, data, and expertise worldwide.


Overall tone


The conversation began with a concerned and problem-focused tone, emphasizing data skew and resource gaps. As participants offered solutions, the tone shifted to optimistic and collaborative, highlighting ongoing initiatives, open-source collaborations, and future technological breakthroughs. Toward the end, the tone became pragmatic and forward-looking, balancing enthusiasm for new paradigms with realistic acknowledgment of funding, policy, and implementation challenges.


Speakers

Sanjay Jain – Leads the Digital Public Infrastructure team at the Gates Foundation; focuses on DPI, data empowerment, and digital identity systems.


Arun Sharma – Works with the World Bank; asked the panel about the lag between the physical and virtual worlds. [S3]


Sangbu Kim – World Bank representative discussing democratizing AI, indicators of moving from AI consumption to building.


Chenai Chair – Director of the Masakhane African Languages Hub, a grassroots community for African language NLP. [S6]


Saurabh Garg – Secretary in the Ministry of Statistics and Programme Implementation, Government of India. [S8]


Faith Waidaka – Panel moderator; builds electrical and mechanical infrastructure in African data centers and serves as Board Chair of the Africa Data Center Association. [S10]


Yann LeCun – Executive Chairman of AMI Labs; former Chief AI Scientist at Meta; professor at New York University. [S12]


Audience – General audience members; includes participants such as Daniel Dobos (particle physicist, CERN; research director, Swisscom), Yuv (individual from Senegal), Professor Charu (Indian Institute of Public Administration), and Dr. Nazar. [S15][S16][S17]


Additional speakers:


Daniel Dobos – Particle physicist from CERN and research director for Swisscom; asked about federated learning coordination. [S15]


Yuv – Audience member from Senegal (role not specified). [S15]


Professor Charu – Audience member, professor at the Indian Institute of Public Administration. [S16]


Dr. Nazar – Audience member, participant in collaborative session on cyber threats. [S17]


Jan – Transcription of “Yann” (LeCun) as referenced by other panelists (e.g., “Yann mentioned about training data sets”); not a separate speaker.


Full session reportComprehensive analysis and detailed insights

1. Opening & framing (Sangbu Kim) – Sangbu Kim opened the session by outlining five pillars for responsible AI – access to energy, compute power, data, talent, and a credible policy framework – and highlighted the most acute short-term constraints: limited compute capacity and a severe skew of data sets toward high-income nations, where over 80 % of global data resides while sub-Saharan Africa holds less than 2 % [5-11][38-40]. He framed the discussion as a timely effort to “democratize computing power access” [11-13].


2. Panel introductions – Faith Waidaka introduced the panel: herself (infrastructure specialist), Yann LeCun, executive chairman of AMI Labs [14-27]; Sanjay Jain, lead for digital public infrastructure at the Gates Foundation; Saurabh Garg, secretary in the Ministry of Statistics and Programme Implementation, Government of India [34-37]; and Chenai Chair, director of the Masakhane African Languages Hub [32-33].


3. Identifying the biggest barrier to AI-compute democratisation


Chenai Chair emphasized the breadth of African linguistic diversity (over 2,000 documented languages) and the massive effort required to document them [32-33].


Saurabh Garg argued that open-access models and AI literacy are more critical than raw hardware, because infrastructure can be acquired over time but model availability is a prerequisite for impact [34-37].


Sangbu Kim pointed to the concentration of digitised data in the developed world as a structural inequity [38-40].


Sanjay Jain added that AI will only scale when “data for everyone is available” and personal data can be accessed securely for personalised services [39-40].


Yann LeCun echoed these points, insisting that top-performing open-weight, open-source models are a necessary condition for equity and proposing federated learning as a way for regions to contribute data without surrendering ownership [41-48].
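To make the federated approach LeCun describes more concrete, the following is a minimal illustrative sketch (not presented in the session) of federated averaging: each region trains on data it keeps locally, and only parameter vectors are exchanged and averaged into a shared model. The region names, toy linear model, and hyperparameters are assumptions for illustration only.

```python
# Minimal federated-averaging sketch: regions keep their raw data and share only
# parameter vectors, which a coordinator averages into a global model.
import numpy as np

def local_update(params, X, y, lr=0.1, epochs=5):
    """Run a few full-batch gradient steps on a local linear model; data never leaves the region."""
    w = params.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
        w -= lr * grad
    return w

def federated_average(updates, sizes):
    """Combine regional parameter vectors, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Hypothetical regional datasets that stay where they were collected.
features = {name: rng.normal(size=(n, 2)) for name, n in
            [("region_a", 200), ("region_b", 50), ("region_c", 80)]}
labels = {name: X @ true_w + rng.normal(scale=0.1, size=len(X))
          for name, X in features.items()}

global_w = np.zeros(2)
for _ in range(20):                                  # communication rounds
    updates = [local_update(global_w, X, labels[name]) for name, X in features.items()]
    global_w = federated_average(updates, [len(X) for X in features.values()])

print("recovered parameters:", global_w.round(2))    # approaches [2.0, -1.0]
```

The point of the sketch is the communication pattern: only the parameter updates cross regional boundaries, which is why LeCun frames it as contributing to a global model without surrendering ownership of the underlying data.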


4. World Bank indicator of AI-building capacity – When asked how the World Bank measures a country’s shift from AI consumer to AI builder, Sangbu responded that the key indicator is the ability of a nation to “fully manage and harness the data set locally” – i.e., local data ownership and control – because demand for compute only materialises when clear, locally relevant applications exist [45-60][55-59].


5. Compute intensity: temporary vs. structural – Yann LeCun clarified that the current compute-intensive era of large language model (LLM) training is temporary. He described LLMs as “knowledge-storage systems” that require massive memory, but argued that the next AI revolution will involve smaller, smarter models that reason at inference time, shifting the compute burden from training to inference [65-71][72-78][80-88][89-92]. He noted industry efforts in model distillation, mixture-of-experts, and other efficiency techniques, while stressing that breakthroughs in hardware beyond incremental CMOS improvements remain years away [85-92]. He later introduced the concept of “world-model AI” – systems that learn from multimodal sensory data and perform reasoning rather than rote memorisation [220-225], and noted that the ≈10¹⁴ bytes of text used to pre-train today’s LLMs is roughly the same volume of data as reaches a child’s visual cortex in its first four years [230-235].
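As a quick, illustrative sanity check (not part of the session) on the figures quoted above, the child’s-visual-input estimate follows directly from the numbers LeCun gives in the transcript: roughly 16,000 waking hours by age four at about 2 MB per second.

```python
# Back-of-the-envelope check of the data-volume comparison LeCun quotes:
# ~1e14 bytes of public text for LLM pre-training versus the visual input a
# child has received through the optic nerve by roughly age four.
text_corpus_bytes = 1e14        # rough size of publicly available text (as quoted)

awake_hours = 16_000            # waking hours of a four-year-old (as quoted)
optic_rate = 2e6                # ~2 MB per second reaching the visual cortex (as quoted)
child_visual_bytes = awake_hours * 3600 * optic_rate

print(f"text corpus:          ~{text_corpus_bytes:.1e} bytes")
print(f"child's visual input: ~{child_visual_bytes:.1e} bytes")   # ~1.2e14, i.e. the same order
```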


6. Small-AI playbook (Sangbu Kim) – Sangbu outlined a user-centred approach for scaling small AI: develop concrete, high-impact use cases that inspire low-income users and change mind-sets, rather than merely supplying raw compute [190-197]. (Sector examples such as agriculture, health, education, and government services are discussed later in the funding allocation section.)


7. Digital public infrastructure (DPI) proposals


Saurabh Garg described DPI as needing trust, interoperability, reusability, and citizen agency, and presented the MAITRI “Friendship” platform – a modular, multi-stakeholder architecture that can plug in compute, data, models and talent while preserving local governance [101-108][111-115][117-122].


Sanjay Jain illustrated how consent-based DPI layers (e.g., MOSIP for digital ID, OpenG2P for payments) enable countries to build AI-ready systems without creating new dependencies, citing India’s Aadhaar and Ethiopia’s Fayda as examples [128-138][205-214].


8. Community-driven data infrastructure – Chenai Chair detailed the grassroots Masakhane network, which has documented African languages through participatory workshops, won a Wikimedia award [166-169], and is now launching “Project Echo”, a gender-responsive initiative that couples language data with AI tools for women’s economic empowerment and health [174-179][180-189]. She argued that trust is earned when communities own the data lifecycle and when local content creation is supported, echoing the broader call for federated, non-extractive architectures [160-169][173-176][190-189].


9. Funding allocation of a hypothetical $500 M – Panelists offered divergent priorities:


Sanjay Jain advocated directing the money to global DPI deployment, giving citizens control over their digital records and thereby “empowering them” to participate in the AI revolution [289-292].


Sangbu Kim suggested investing in sectoral pilots (agriculture, health, education, government services) that demonstrate value and inspire users [291-298].


Saurabh Garg urged a focus on capability development and domain-specific niche models that reduce infrastructure demands [300-303].


Chenai Chair called for funding open-model ecosystems and talent pipelines, citing the “Crane AI” offline-first stack that emerged from Masakhane [304-307][304-306].


Yann LeCun emphasized the need to support academic research on non-LLM paradigms (e.g., world-model approaches) because industry is currently locked into a monoculture of LLM development [267-274], and highlighted practical examples such as smart-glasses for Indian farmers that use multilingual assistants [279-283].


10. Future outlook & AGI discussion – Yann LeCun later addressed audience questions about AGI, noting that the notion of a single “AGI event” is misleading and calling for incremental progress toward more capable, multimodal systems [345-352]. He reiterated that hardware breakthroughs (e.g., carbon-nanotube or photonic computing) are necessary but lack a clear horizon [85-92][89-92].


11. Audience questions & unanswered gaps – Arun Sharma asked about the lag between virtual AI recommendations and physical delivery of inputs (seeds, fertilizer); the panel did not provide a concrete answer. Additional gaps included: (a) lack of defined governance and technical standards for federated-learning collaborations across jurisdictions; (b) absence of metrics beyond “local data ownership” to signal a country’s transition to AI building; (c) no clear timeline for the required hardware breakthroughs.


12. Key take-aways & action items


– Open-weight, open-source models combined with federated learning provide a technical pathway to democratise AI without compromising data sovereignty.


– Trusted, interoperable, agency-granting DPI is a prerequisite for local AI ecosystems.


– The present compute-heavy LLM era is expected to give way to smaller, reasoning-centric models and world-model AI, shifting compute burden toward inference [65-71][220-225][230-235].


– A holistic investment strategy should simultaneously fund high-impact use cases, domain-specific niche models, DPI deployment, open-model development, and talent pipelines [291-298][300-307][111-115].


– Community-led, gender-responsive projects such as Masakhane’s initiatives are essential for building trust and avoiding extractive dynamics [166-169][174-179].


Proposed action items


1. Develop the MAITRI “Friendship” platform as a modular global AI infrastructure [101-108][111-115].


2. Scale open-source ID platforms (e.g., MOSIP) and other DPI tools worldwide [128-138][205-214].


3. Allocate funds to both sectoral pilots and open-model/talent ecosystems [291-298][304-307].


4. Establish international coordination bodies (UNESCO, AI Alliance, SEM) to manage federated-learning collaborations [117-122][345-352].


5. Adopt participatory, gender-responsive design principles for community data infrastructures [160-169][173-176].


In conclusion, the panel agreed that democratising AI will require coordinated investment in open models, federated DPI, community-owned data, and talent development, while recognising divergent views on compute priorities and funding allocations. The discussion moved from diagnosing entrenched inequities to proposing concrete, multi-layered solutions that blend technical innovation, policy frameworks and participatory governance, outlining a roadmap for inclusive AI advancement over the next one, five and ten years [308-315].


Session transcriptComplete transcript of the session
Sangbu Kim

access and energy. Number two, computing power. Number three, data access. Number four, talent building. And number five, credible, responsible AI framework and policy. Among those five, everything is very important, but we are currently struggling with some lack of access to computing power and data sets. So that’s why today’s discussion is very important. Unfortunately, more than 80 % of our data sets in the world are very heavily skewed to the developed world, high-income countries. Less than 2 % in Africa, sub-Saharan Africa. If we just carve out South Africa, less than zero-something percent, only for the other sub-Saharan Africa. So we see the big gap in this space. So this is a pretty important time to talk about how we can really democratize computing power access in this space.

So thank you for joining us, and then I look forward to really good discussion with all of our panels. Thank you.

Faith Waidaka

Thank you, Sangbu, for that opening. So I will start by asking the panelists to introduce themselves in a very short way, and I’ll start with myself. I’m Faith Waidaka. I build the infrastructure that makes AI possible. So I build the electrical, mechanical infrastructure in data centers in Africa, and I’m also the board chair of the Africa Data Center Association. So we’ll go this way. Yann, please tell us who you are.

Yann LeCun

So I’m Yann LeCun. I’m the executive chairman of AMI Labs, Advanced Machine Intelligence Labs, which is a new company I’m building to build a next generation AI system. I’m also a professor at New York University still. And just a month ago, I left my position as chief AI scientist of Meta after 12 years at Meta.

Sanjay Jain

I’m Sanjay Jain. I lead the digital public infrastructure team at the Gates Foundation.

Saurabh Garg

I’m Saurabh Garg. I’m secretary in the Ministry of Statistics and Program Implementation in the Government of India.

Chenai Chair

And I am Chenai Chair, the director of the Masakhane African Languages Hub, which emerged from a grassroots community called Masakhane, focusing on African language NLP.

Faith Waidaka

Good. So, Chenai, and coming back this way to all my panelists, what is the single biggest barrier? And I can imagine that we’re all coming from different segments from the introductions we just did. But what do we feel is the single biggest barrier today to democratizing AI compute? Chenai?

Chenai Chair

Thanks, Faith. So there are over 2,000 documented languages on the African continent. So our single biggest barrier is the breadth of work we actually have to do to document these languages to ensure they’re well represented and also focus on the communities that actually speak them.

Saurabh Garg

I would say access to models, open models, and AI literacy to be able to utilize those models. And the reason I say that is perhaps infrastructure is something which might get acquired over time. And hopefully the… the requirement of the size of that infrastructure may also change. And the focus, we probably need to focus much more on the models.

Sangbu Kim

I would say too much concentration of digitized data only for developed world.

Sanjay Jain

I should also go on the data point because we believe that AI will scale effectively only when data for everyone is available. So when I can get a personalized service because my personal data is accessible through some protected means to a model, so then that will allow AI to reach everyone.

Yann LeCun

I’ll just echo some of the things that were said earlier. Certainly, the availability of top-performing open models, open-weight but also open-source, would be a way to remove the barrier, or at least if not a sufficient condition at least a necessary condition. And the problem is that today there is no such thing; the open models are behind. But there is a way to get them to surpass the proprietary systems, and it’s through data. So the access to data was mentioned. If various regions of the world collect or digitize their cultural data, whatever it is, and then contribute to training a global model that would constitute eventually a repository of all human knowledge, then those models would be much better quality than all the proprietary systems, because the proprietary systems would not have access to that data. And this can be done technically in a way in which regions don’t need to actually communicate that data; they can keep ownership of that data and then contribute to training a global model by exchanging parameter vectors. I don’t want to get into the weeds of technicalities there, but it’s a form of federated learning, and I think this is a way to open up access to AI. And it’s absolutely crucial for the future, because we’re going to need a wide diversity of AI assistants, for the reason that there’s a wide diversity of linguistic, cultural differences, value systems, political opinions and philosophies. And if our AI assistants come from a handful of companies on the west coast of the US or China, we’re in big trouble, so we absolutely need this.

Faith Waidaka

Okay, so we’ve had the challenges, and there are a wide range of them, from inclusion to compute to data sets. What we’re going to discuss today is how do we overcome those barriers from the different perspectives and the different angles that we have on this team. So coming to you, Sangbu, from a World Bank perspective, what does it mean to democratize AI? And would you please give us one indicator that signals that a country is moving from consuming AI to actually building it?

Sangbu Kim

From the World Bank point of view, democratizing data computing is very important. But let’s think about this. So many people very easily talk about building data centers physically and securing more GPUs and servers from the beginning. I agree that the fundamental infrastructure is very crucial and very important. But the more important thing is how can we use that computing power for what? So we need to really think about… what would be the best way which can create demand for computing power. That is more crucial part. So without having very clear application and some solutions, nobody can really run their own computing data center business in Africa. So it is very crucial part. So I would like to say we need to think differently from even though computing power is very important, how can we really create the data demand.

So in this regard, so the clear indicator is that how can we really fully manage the data in the local. So one good thing, one good news is that anyhow local data, local context can be fully owned, controlled, and managed by local country and local people. That is a very good news. Even though we see a lot of inequality in the computing infrastructure and resources, but what cannot change, even in this AI era, is that people and the local country and local community can strongly hold their context and then hold their data set. So it is a really important signal and opportunity. So I would say measuring the fully utilizing and harnessing the data set in the local will be the key indicator for this.

Faith Waidaka

Okay. Yann, you spoke about compute a few minutes ago, open compute. And I would really like, I would like to know, is the concentration of frontier compute a temporary scaling phase or a structural feature of AI? And where do you see the biggest technical opportunity to reduce compute intensity? It’s something that Sangbu as well touched on.

Yann LeCun

Okay, so first of all, I think the computing requirements for training modern AI systems is temporary. It’s temporary because the type of AI systems that we build at the moment, LLMs, essentially are knowledge storage systems, right? They accumulate factual knowledge, and therefore they need enormous amounts of memories. The reason why the models are so big in terms of number of parameters, we’re talking hundreds of billions of parameters, which make them really expensive to train and to run, is the fact that they just accumulate knowledge so that it can be easily retrieved. But there’s another way to be useful in terms of AI: it’s not accumulating knowledge but actually being smart, and you can replace knowledge by intelligence. So current systems are not particularly intelligent, but they store knowledge. There is another revolution of AI coming, which actually my new company is built around, which intends to build systems that are smarter even if they don’t necessarily accumulate as much knowledge. So those models will be smaller. Now the bad news with this is that perhaps at inference time they will be more expensive, because they’ll reason more than current systems. So we’re going to see maybe a shift in the requirements for training, but the requirements for inference, which is really where most of the computation goes, is still going to be quite significant. Now to answer your second question: the incentives are there for the industry to reduce the power consumption of AI systems.

A lot of engineers working on AI in industry these days, even in academia, are actually focusing on how can I make this model smaller? How can I distill it in a smaller model? How can I use a mixture of experts so I have sort of a ladder of models that are more and more complex? So that to answer simple questions, I can use a simple model, et cetera. All of it is to optimize power consumption. Why? Because that’s where the money goes. That’s where you spend all the money when you operate an AI system. It goes into power and maintaining your hardware. So the incentives are there. So that’s the good news. You don’t need to have laws or regulations or anything.

They are working on it because they need to. The bad news is that it’s progressing. It’s progressing as fast as it can, and it’s not fast enough. But we’re not going to be able to make it faster unless we find some technological breakthrough at the fabrication level or the architecture or technology. There’s a lot of mileage to be had in those things still. The power efficiency is actually making progress really quickly, much faster than Moore’s Law, but it’s still too slow. So I’m not expecting some big revolution in hardware design until we start building something else than CMOS transistors and silicon. That’s not happening for another 10 or 20 years. 10 or 20 years? Well, I mean, there’s going to be progress in the meantime.

It’s not what I mean. But if you want a real breakthrough, like some completely new way of building computing systems, there’s nothing on the visible horizon. There’s no horizon that really will allow this, whether it’s carbon nanotubes, spintronics, or whatever it is.
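The “ladder of models” idea mentioned above can be illustrated with a simple cascade: route each query to the cheapest model first and escalate only when its confidence is low, so that most traffic never reaches the expensive model. This is an illustrative sketch with hypothetical model tiers, costs, and thresholds, not a description of any specific production system.

```python
# Illustrative model-cascade sketch: answer with the cheapest tier whose
# confidence clears a threshold, escalating to larger models only when needed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tier:
    name: str
    cost_per_call: float                           # relative cost, hypothetical units
    answer: Callable                               # returns (answer, confidence in [0, 1])

def cascade(query, tiers, threshold=0.8):
    """Try tiers from cheapest to most expensive; stop at the first confident answer."""
    spent = 0.0
    for tier in tiers:
        spent += tier.cost_per_call
        answer, confidence = tier.answer(query)
        if confidence >= threshold:
            return tier.name, answer, spent
    return tier.name, answer, spent                # fall back to the largest tier's answer

# Hypothetical stand-ins for a small, a medium, and a large model.
tiers = [
    Tier("small",  1.0,  lambda q: ("short answer", 0.6 if "why" in q else 0.9)),
    Tier("medium", 5.0,  lambda q: ("longer answer", 0.85)),
    Tier("large",  25.0, lambda q: ("detailed answer", 0.99)),
]

print(cascade("what is the capital of Kenya", tiers))                    # answered cheaply by "small"
print(cascade("why does federated learning preserve ownership", tiers))  # escalates once to "medium"
```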

Faith Waidaka

Okay, that’s very interesting to think that the training models will become smaller, yet the inference might be the one that will take up the compute. Yet we’re also looking at bringing inference to devices as close as possible to the people using it. So there’s a bit of a balance to be done in that 10-year period. I think 10 years is a lot of time, considering what AI has shown us over the past decade. And I think in terms of research, we might see it sooner. Yeah, so Saurabh, you led Aadhaar, the digital ID, and now you’re in statistics. How do you see digital public infrastructure enabling AI innovation? And how can countries expand access to shared AI infrastructure without creating new dependencies or compromising data sovereignty?

Saurabh Garg

Thank you. So I think two characteristics of digital public infrastructure, which are key, are to ensure that not only there is access, but also agency of the people. So most people would not like to be just consumers, but also be co-creators. And I think that’s the real issue going forward. For any system to be a DPI, I think there are a few essential characteristics. It needs to be trusted. It needs to be interoperable and shareable. And obviously, reusable is part of it, because that’s what is able to bring these characteristics onto this. And this is what will also ensure that innovators focus on solutions rather than trying to get together the infrastructure together.

And in the democratizing AI working group, which was one of the seven working groups of this AI summit setup, which I had the privilege of chairing along with representatives from Kenya and Egypt, one of the outcomes of this, of course, there was a charter on AI diffusion. But one of the outcomes of that is what we are suggesting building initially, which might be a digital public good, but modularly it will become an infrastructure as we move ahead, is the MAITRI platform, which we’ve called Friendship. MAITRI standing for multi-stakeholder AI for a trusted and resilient infrastructure. And how we can, in a modular manner, add on the four, which I think my fellow panelists have also mentioned, components of AI: compute, data, models, and talent.

These are the four aspects, and, of course, governance mechanisms would, of course, be there. So how we can ensure that different countries are able to contribute in whatever manner to build this, if I can call it a global platform, which is, in a way, owned by all and yet looks at what are the issues of real criticality. And I’m sure there’s a major role for not only countries, for private sector and philanthropies to be able to build. So how we can build this structure together, which will meet the requirements of countries, private sector and the philanthropies, because each of them have different motivations to it, and the private sector would have a profit motive and that has to be kept in view.

As far as the dependencies, that’s the second part of the question that you asked me. I think one of the areas is that we need to ensure that we follow a federated structure rather than a centralized structure. I think that would be key and that would also ensure that the variety of languages and cultural contexts that the data sets carry and which will also ensure that ownership remains wherever is contributed with the data. And yet technology and open systems exist now to be able to ensure that sharing can be done in a safe and trusted manner. So how we are able to ensure that this collaboration and cooperation is done based on trust. and what kind of mechanisms we can develop.

And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensure that we don’t generate new dependencies. Thank you.

Faith Waidaka

Sanjay, when I said DPI, you nodded your head. So in terms of digital public infrastructure, we’ve seen it scale because it was interoperable. How can we ensure that data and AI systems that we build now are interoperable and open by design so that even small startups or governments, like we’ve just spoken about, can plug in and benefit?

Sanjay Jain

I actually want to go off what Dr. Garg said. Broadly, DPI provides a way for data of all individuals, so their records, their ID, their transactions, are sort of a system of record on top of which DPI sits. So DPI provides a management layer on that and provides consented access. And so that’s something which we have seen around the world, particularly, for example, in India we see this a lot, is that now that you have access to all of this data, you can actually build on top of that through consented access lots of applications. And that’s really where a lot of the value comes in. And I think Yann mentioned about training data sets. That’s, again, the same model can be applied to allow either consented access or anonymized access so that you can do a federated learning so that the data never goes to the model, but the model comes to the data.

And so with, and India has been looking at this data empowerment and protection architecture, which is on that lines. And that, I think we are now starting to see the structural building blocks come together, which would allow for this underlying data layer to be built, but that requires strong DPI. And so we do think that there’s a lot of reason for countries around the world to adopt DPI systems so that citizens’ data can be managed in a very trusted way, access with consent. And then we have things like MCP coming up, which then allow users’ context to be taken, which then allows AI to be safe. Of course, as long as the data is, the rights on the data are quite clear that they’re not going to be stored.

So overall, I think we are moving towards this world where we are seeing the underlying pieces come together. They have to come together at a global scale. I think that’s the point that Dr. Garg was making. And so from that perspective, I think we are in a fairly good place. But then to make sure this happens, we have to, I think, act in a unified manner. I mean, for example, we have to work together to fund efforts at the grassroots. So, for example, what you’re seeing with Masakhane, where you’re working with countries, with communities, so that their languages can be represented, so that that context becomes very important, because finally we are going to have to serve users in their languages.

So I do think, you know, I’m very positive that we’re moving in the right direction. I just think that there’s still some ways to go. I think there are other barriers as well. But on this aspect, I think DPI provides a way for us to get past the data hurdle as long as, of course, DPI is implemented in a responsible manner in the countries and in the right way.
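To illustrate the “model comes to the data” pattern Jain describes, the following is a hypothetical sketch of consent-gated, in-place computation: a data custodian checks a consent record before running a visiting model function inside its own boundary and releases only an aggregate result. The field names, consent format, and example values are assumptions, not part of any DPI or DEPA specification.

```python
# Hypothetical sketch of consent-gated, in-place computation: the custodian
# verifies consent, runs the visiting model on local records, and returns only
# an aggregate; raw records never leave the data holder.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Consent:
    subject_id: str
    purpose: str
    expires: datetime

class DataCustodian:
    def __init__(self, records, consents):
        self._records = records                 # stays inside the custodian's boundary
        self._consents = consents

    def _allowed(self, subject_id, purpose):
        now = datetime.now(timezone.utc)
        return any(c.subject_id == subject_id and c.purpose == purpose and c.expires > now
                   for c in self._consents)

    def run_model(self, purpose, model):
        """Apply a visiting model only to records with valid consent; return an aggregate."""
        permitted = [rec for sid, rec in self._records.items() if self._allowed(sid, purpose)]
        return model(permitted)

# Hypothetical example: an external health model computes an average risk score in place.
custodian = DataCustodian(
    records={"u1": {"age": 34, "risk": 0.2}, "u2": {"age": 61, "risk": 0.7}},
    consents=[Consent("u1", "health_screening", datetime(2030, 1, 1, tzinfo=timezone.utc))],
)
avg_risk = custodian.run_model(
    "health_screening",
    model=lambda recs: sum(r["risk"] for r in recs) / max(len(recs), 1),
)
print(avg_risk)   # 0.2: only the consented record contributes, and only the aggregate leaves
```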

Faith Waidaka

Thank you. Chenai, you’ve cautioned against technology becoming extractive. How should we build data infrastructure that is trusted by communities? And would you please give us an example of what principles would make an AI project in a village or in a community, in some rural place in Africa, for example, feel empowering rather than extractive?

Chenai Chair

Thank you so much, Faith, for that question. And I think I have the pleasure of sitting here as a representation of what it means when community is involved in building something. Masakhane basically means we build together, loosely translated from isiZulu. And that was then a creation of a participatory approach in knowledge building as a result of being excluded in spaces. So if we’re going to build data infrastructure that community trusts is to respond to the realities that they live in and to be participatory. So that’s the first example. And just to prove how important for something to be participatory is that 2019-2020 there were not as many data sets around African languages. I think a source of data was the Jehovah’s Witness 300 Bible.

And they had translated the languages for their own purpose. And then so the community came together, the Masakhane community came together and brought in everyone, linguists, NLP people, machine learning people, anyone who spoke the language to actually develop the scripts and do the machine translation work on top of that. And this community that was unfunded, doing everything by the bootstraps, actually won a Wikimedia Award in 2021 for their participatory action work. And I think that is then crucial to actually show that if you’re going to build trust, people have to see what the end value is and also be recognized. So this paper actually has, I think, about 20 people on it, a lot of people on it, which some people could never have been authors, but they contributed to it and they’ve got a paper published and that’s significant.

And then secondly, it’s really thinking about meeting communities where they are, regardless of what their location is. It’s realizing the inequity that we exist with. So one of the projects that we will be doing at Masakhane is called Project Echo. It’s designed to be a gender-responsive project because gender transformative is also the North Star that we’re hoping to get to one day. And in that instance, it understands the realities of gendered inequality on the African continent, regardless of any technological innovation. And what we’re doing in partnership with Gates Foundation and also working with IDRC, who are working on this as well as a gendered intervention, is to actually then create, work with tech entrepreneurs developing gender-responsive use cases that focus on women’s economic empowerment as well as health to then think about how we’re creating an impactful tool when you add African languages on top that will result in better economic outputs for them or better information when it comes to health.

So again, it is thinking about designing with the communities and meeting the needs of the communities and where they are. And then lastly… And this is to say that this is, we love to say this on our team, that what we’re not doing is new. The technology may be new, but there are practices that we can borrow from other spaces to actually then ensure this is done. So I would like to reference the community network models. Last mile connectivity is a significant issue across the continent. We’ve had universal service access funds as an incentive for mobile network operators to do this. But sometimes some communities are not served well enough. And so then there have been interventions to actually result in internet connectivity that’s localized, being developed by the communities.

They’re in charge of building the masts for their community networks. They’re in charge of creating the content that people are going to need, figuring out what the necessary power is. Do you then, you know, create and have a transformative booster in one person’s home? And then people go and charge their phones there because it’s the whole life cycle of this. So if we’re going to build infrastructure that people trust, we have to borrow from what’s already been done and then ensure that people are part of the whole life cycle so that they see ownership and also it allows for sustainability because they are like, that’s my resource and I’m not going to wait for anyone else to support it but I’m going to be in charge of making sure that it continues to exist.

Interesting.

Faith Waidaka

I like that. Community ownership. And I don’t think we can do that if we don’t build small AI. So Sangbu, you’ve written a lot on small AI. What would be your playbook for scaling small AI responsibly?

Sangbu Kim

user can, you know, can, restrictions, so user cannot fully utilize some technology without get trained and learn. So, 20, 30 years ago, we talked a lot about digital literacy and some basic digital skills and how to use Windows and Explorer, et cetera. That mean, that meant it is not very user-centric because user had to do a lot of things. But now, AI is going towards very user-centric services. So, users doesn’t need to do that much. They can only control and ask verbally about what they are curious, what they need. And then it can be automatically provided to the users. That is the philosophical concept of AI in my mind. So, in that sense, our focus is how to more bring more user-centric mindset to this field along with our client because you know we have compared to develop the world we have pretty much big you know context base ground and local data and so many user interest so that’s our approach how that’s how we but are fully harness and utilize for this area

Faith Waidaka

Thank you for that. Now that we’re speaking about communities and users, Sanjay, you’ve spoken about moving from digital age to digital empowerment in the context of AI. What would digital empowerment look like, and what should development partners like the Gates Foundation and the World Bank sitting in this forum prioritize so that countries are not just consumers of AI but co-creators?

Sanjay Jain

So the thread I’m going to pick up back is the DPI thread. And broadly what we have done in that space is to look at how instead of building systems for countries, we sort of have open source systems which countries can then adopt to build systems which are adapted to their needs. So when we look at Aadhaar in India, that’s one thing, but then for the rest of the world we’re looking at MOSIP. And MOSIP is a modular open source ID platform that we have supported, which countries are taking and building with their own policy layers, building their own application versions of it. And so in Ethiopia you have Fayda, which is based on MOSIP, and it’s actually very much customized to what they need.

So the idea is you build these pieces of technology which then countries can adopt and build in a way that suits their needs, is governed by them, is local laws work on that, so all of that institutional infrastructure, legal infrastructure, then sits on top of the technology layer to do that. Similarly we have supported other open source efforts like OpenG2P for government payments, we have supported Digit for Healthcare campaigns and so the whole idea is you build open source, let countries and communities take that and adopt it. Similarly with Masakhane again the same idea is that if you have a way by which local communities can come together and collect data but then make that available for global needs.

So we have funded those kinds of efforts in India and in Africa as well so that these efforts are now there where local communities are empowered to make sure that AI systems can understand and speak their language and that is again a form of empowerment. So broadly that’s sort of the way we think about it is how do we build open standards, open source products that countries and communities can use and contribute back to and co -create essentially their versions of their systems. that then work in a unified way across the world. And so that is really empowering them to be a part of the community, and that is what we would love to see more happen.

Faith Waidaka

Thank you for that. Now, Yann, I can’t help but come back to these world models. That in my mind, I was thinking they would increase the compute power necessary so the infrastructure would be bigger. But from your explanation, it looks like being more intelligent means less compute, and we now move the power not on the grid side for the training models, but on the inference side, on the devices. So what does that actually mean for the government people, the AI ecosystem, the startups that are in this room? What should be their focus over the next 1, 5, 10 years, if these changes are to happen, and I do believe they will happen?

Yann LeCun

Wonderful question. Thank you. So there’s going to be another AI revolution, right? We’ve seen in recent years the deep learning revolution and the LLM revolution. And unfortunately, the type of AI systems we have access to at the moment manipulate language very well, and it fools a lot of people into thinking that we have it made, that we have systems that are as intelligent as humans because we think of language abilities as properly human. But it’s a mistake that generations after generations of computer scientists and people around them have made in AI for the last 70 years: discovering a new paradigm for AI and assuming that this paradigm will lead us to systems that have human-level intelligence.

And it’s just false, and it’s false today as well. Our current technology is limited. It’s useful. There’s no question it’s useful. It should be deployed, developed. It’s going to help people use it all the time. But it’s limited, like previous generations of computer technologies and AI systems. So what is the next revolution? It’s the revolution of AI systems that understand the real world. And I think there is a lot of applications of that throughout the world for all kinds of domains, of market segments, if we’re talking about commercial systems, or just helping people in their daily lives. Now, it turns out that, and we’ve known this for a long time, that understanding the real world is much, much more complicated than understanding language and manipulating language.

It’s because language is a sequence of discrete symbols and it turns out that makes it easy for computers to handle. But the real world is messy, it’s high dimensional, it’s continuous, it’s noisy, and it’s just much more complicated. So I’ve been making that joke for many years to kind of try to explain this to everyone that your house cat is smarter than the biggest LLMs. And in many ways that’s true, certainly in the understanding of the physical world, your cat is way smarter than the biggest LLMs. It doesn’t mean the LLMs cannot accumulate knowledge about the real world, but they don’t really understand the underlying nature of it. So the next revolution are systems that really understand how the world works and sort of learn how the world works, a little bit like children who open their eyes.

And let me give you a… Interesting number. LLMs are pre-trained today on basically all the text available on the internet publicly, which mostly is English or languages spoken in developed countries, which of course, as this panel has pointed out, is an issue. But it represents roughly 10 to the 14 bytes. Okay, a one with 14 zeros. That seems like a lot of data, and it is, because it would take us, any of us, about half a million years to read through it. But then compare this with the amount of data that gets to the visual cortex of a young child. In four years, a young child has been awake a total of 16,000 hours. And if we put a number on how much data gets to the visual cortex, it’s about 2 megabytes per second.

Do the arithmetics, that’s about 10 to the 14 bytes in four years, instead of half a million years. And so it tells you we’re never going to get to human-level intelligence or anything like that by just training on text, which is human-produced. We’re going to have to have systems that understand the real world and are trained to understand the real world through sensory input; it can be video, it can be all kinds of stuff. And by the way, 16,000 hours of video is not a lot of video, it’s about 30 minutes of YouTube uploads. If you get a day of YouTube uploads, it’s about a million hours, and that’s about 100 years of video. And we have video systems that we’ve trained that have been trained with that kind of data. They understand a lot more about the real world than any LLM; they can tell you if something impossible happens in the video that they watch. So they’ve acquired a little bit of common sense. So my guess is that this is going to make a lot of progress in the future, and from those kind of techniques, we can build world models. What is a world model? Given there’s an idea or representation of the state of the world at time t and an action or intervention that you imagine taking, a world model would predict the state of the world at time t plus one resulting from this action or intervention.

And this is how you can build an intelligent system because they would be able to predict the consequences of their actions before taking the action. And they would be able to plan and reason because reasoning is like planning. So everybody is talking about agentic systems in the industry. The way agentic systems are built today is not this way. Anyway, agentic systems today are not able to predict the consequences of their actions. And this is a terrible way of planning actions. So I think, you know, again, we’re going to see a revolution over the next few years based on world models, based on systems that can learn from the real world, messy data. And I’m not very popular in Silicon Valley when I say this, but those are not generative models.

They’re kind of a different type. And so, yeah, my colleagues who work on LLMs and generative AI… don’t like me very much. As for me, I’m really liking this.
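To make the world-model idea concrete, here is a minimal sketch, not drawn from the talk, of the interface described above: a model maps a state at time t and a candidate action to a predicted state at time t+1, and a planner scores imagined action sequences before any action is taken. The toy dynamics, cost function, and random-search planner are placeholder assumptions.

```python
import random

# Hypothetical toy world model (illustrative assumption, not from the talk):
# "state" is a single number and the transition rule is a stand-in; a real
# world model would be learned from sensory data such as video.
def world_model(state: float, action: float) -> float:
    return state + action              # predicted s_{t+1} = f(s_t, a_t)

def cost(state: float, goal: float) -> float:
    return abs(state - goal)           # distance of the predicted outcome from the goal

def plan(state: float, goal: float, horizon: int = 3, candidates: int = 200):
    """Score imagined action sequences by rolling the world model forward,
    i.e. predict consequences before acting."""
    best_seq, best_cost = None, float("inf")
    for _ in range(candidates):
        seq = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        s = state
        for a in seq:                  # imagine the future under this action sequence
            s = world_model(s, a)
        c = cost(s, goal)
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq, best_cost

actions, predicted_cost = plan(state=0.0, goal=2.5)
print("chosen actions:", actions, "predicted cost:", predicted_cost)
```

The contrast with today’s agentic systems, as described in the talk, is that the planner here evaluates imagined futures produced by the model rather than committing to actions whose consequences it cannot predict.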

Faith Waidaka

So I’m going to ask you a numbers question. What would it take? What kind of money would it take to make this happen faster?

Yann LeCun

Okay, so there’s a number of different things that need to happen. The first thing is there’s a lot of research to be done, like academic research, right? And in fact, what’s interesting as a phenomenon is that this idea of world models and this non-generative architecture, which I call JEPA, though there are various incarnations of it, are mostly worked on by academic groups who are interested in applying AI to science, and mostly ignored by industry. Industry, particularly Silicon Valley with its dominant players, is entirely focused on LLMs, and everybody is working on this. It’s the same thing: everybody is stealing each other’s engineers and working on the same thing, because nobody can afford to do something slightly different and then run the risk of falling behind.

And so that creates kind of a monoculture that makes the industry a little blind. And so right now it’s in the hands of academia. So basically propping up this kind of research in academia, and preventing LLMs from sucking the oxygen out of every room you get into, I think is the first step. The second step is, of course, there is a role for governments and industry to play in pushing those models once they work. And that’s what I’m working on. That’s why I left Meta and created this company, because I think the time is right to try to make it real. And then, you know, obviously there are going to be a lot of applications of this everywhere in the world.

There was an experiment that was run a couple of years ago by some of my colleagues at Meta, where they gave smart glasses to farmers in rural India. And you could talk to the assistant in, you know, Indic languages, asking it: what’s this disease on my crop? Or, you know, should I harvest now or wait a little bit? What’s the weather tomorrow? So there are a lot of things like this that could be useful if the price, you know, could be brought down, with systems that really understand the world better than current ones do. And in the future, all of us will be walking around with an AI assistant that will, you know, essentially amplify our own intelligence.

It’s like, you know, all of us will be, sort of, the leader, the manager of a staff of virtual people who are smarter than us. Which is a great thing, by the way. I’m very familiar with the concept of working with people who are smarter than you; it’s the greatest thing that can happen to you, so we shouldn’t feel threatened by that. So it’s going to allow people to get more knowledgeable, more educated, make more rational choices. But we need systems that basically approach or surpass human intelligence in certain domains and understand the real world.

Faith Waidaka

Thank you, Yann. So we know where Yann is putting his money. Coming back to all my panelists, and not just your own money: if I had 500 million dollars to give, and I’m not asking you for a P&L, I’m not asking you to give me a profit, I’m just asking you to help me democratize AI and make it accessible for everyone, where would you each put your money? Let’s start with Sanjay.

Sanjay Jain

Incidentally, 500 million is the amount that we’re looking at raising as capital to get DPI everywhere in the world, because we think that, you know, getting those underlying systems of record, getting people access to their data in a digital form, can actually empower them so much that they can then participate in the AI revolution in the right way, with the right controls and structures in place. So, you know, you’ve kind of just made my case: we would want to think about how we can take that money, deploy it, and bring everyone up to the same level in terms of digital infrastructure, getting the data, getting their ledgers, getting the health records, all of those digitized, so that they can then take the benefit of AI for their needs. So that’s actually what we would want to do.

Sangbu Kim

Okay. Again, I would say I’ll spend that big money to develop some more use cases, again and again. So we are identifying agriculture, education, healthcare, and some more; government services can be a really promising use-case field. So developing more practical and profitable use cases that add real value will be the critical thing. On top of that, while we are developing the use cases, the more important thing is to change user mindsets and inspire users, because one typical problem we are facing is that our low-income users and clients do not really know what they don’t know. Even though they can do something with this type of technology, they don’t clearly understand what they can do. So inspiring them, showing them that they can really do this with higher productivity and at low cost, would be a very important thing to remind them of. Thank you.

Saurabh Garg

Given the volume of funds available, I would focus a lot more on capability development of people, on their ability to use AI to improve productivity. And maybe if I can add to it, just to again stress the need for small, domain-specific, niche models. Small may not be the right word to use. But domain-specific and niche models, which use a lot less power, a lot less infrastructure, and don’t have the problems of large language models.

Chenai Chair

So I’m assuming each one of us is getting 500 million, yes? So I co-sign on everything. In addition, I would say that for us, what is critical, given the point I mentioned about the breadth of work that needs to be done, is actually having open models and also investing in talent. The open models do allow people to innovate on top of them, and an example of this is Crane AI, which actually developed an offline-first AI stack focusing on health, education and agricultural services, and which emerged from the Masakhane community. So what happens when we can actually fund a lot of people to think about this and build on top of open models? And then lastly, talent. Talent is very important across the whole value chain: talent that actually looks at the building of the models, the uptake, the business cases that motivate people and allow for sustainability, but also the talent to build the capacity of the end users to understand, so that we create an ecosystem where people are excited about these new technological innovations instead of afraid.

And that’s sort of been the biggest narrative of you’re either very excited or you’re very afraid. And coming from a South African context, everyone is afraid to lose their job to AI. So how do we ensure that we’re creating that ecosystem that’s favorable for innovation?

Faith Waidaka

So as we come to the end of our panel, with everything that’s been said, even with all the money on the table, free money, we see that it’s not a one-size-fits-all. We simply can’t just focus on one area and leave the rest. We need the talent. We need the compute. We need the data centers. We need the regulatory framework. We need the reforms. We need everything to come together to make this possible. And with that, I’m done with my questions, with five minutes to spare. So would someone help me with a mic? What I’ll do, I’ll take three questions, hopefully from three different people among you.

And then since I see no one, I’m quite good. Thank you. Let’s start here.

Arun Sharma

Thanks, Faith. Thank you all for such a brilliant session. My name is Arun Sharma. I work with the World Bank. My question is to anyone, Yann specifically: what is the lag that we have between the physical and the virtual world? It’s dominated a lot by the machinery. I mean, you gave the example of a farmer wearing glasses, but then the seeds or the fertilizer, anything that he orders, still run on archaic systems. So obviously there is a lag between the hardware and the software; the software is evolving much faster. Where do you see that happening and going? And I ask this specifically because in the Indian system, where we have not been able to deploy our resources is in the education space or in the healthcare space, and we still lag in those areas. So thanks.

Faith Waidaka

Let me take the three questions. I would prefer that you throw the next question to someone else. I’ll take a question from the back there.

Audience

Thanks a lot. Daniel Dobos, particle physicist from CERN originally, and then a research director for Swisscom. You mentioned federated learning. Technologically this is easy; the architecture of collaboration might be difficult. So do you have some ideas about which kind of organization could coordinate this kind of collaboration? Thank you.

Faith Waidaka

Okay, and one last question, let me get from him. The guy with the red flag.

Audience

Hi, thank you. Thank you, sir. My question is to you. You have said that we have the data, like 10 to the power of 14 bytes, and that this is the same amount of data that a child consumes by four to five years of age. So do you think that data is the only bottleneck, apart from compute and the architecture, to get to AGI, or maybe to superintelligence, artificial superintelligence? And the next question is: when we achieve AGI, what will the benchmark be? Like, how do we benchmark AGI so that it is definitely smarter than humans? How will humans evaluate that? So yeah, that’s it.

Yann LeCun

Quick answers; I’ll go in reverse order. So there’s no such thing as AGI. There is human-level AI, perhaps, but human intelligence is extremely specialized, and so calling this general intelligence is complete nonsense. But we will build systems that are as intelligent as humans in all domains where humans are intelligent. It’s just not going to be next year, unlike what, you know, some colleagues in the industry are claiming; this is going to take a lot longer. It’s not going to be an event. It’s not like we’re going to discover one secret that’s going to just, you know, unlock intelligence. It’s going to be, you know, progress. It’s going to be much more difficult than we think; it’s always been more difficult than we thought in the past, and it’s still the case. So no event for AGI, and no AGI or human-level AI yet. Superintelligent AI, yes, we should call it ASI, artificial superintelligence. Yeah, well, it depends. So that’s the first thing, and you had a second part to your question that I can’t remember, so I’m going to answer the other one.

There are a number of organizations that could. So first of all, the thing that’s needed for this federated-learning idea for an open-source model is that it should be bottom-up. It should be people actually putting up a GitHub and then collaborating on building the infrastructure for this. Of course, we can get help from governments and organizations, and that’s required too, but I think ultimately people need to build code, write code. So there are a number of groups that have already built their own LLMs of pretty good quality: there’s a group in Switzerland centered at EPFL and ETH, so you probably know it; there is a group in the UAE centered on MBZUAI; there are similar models in Korea and in various other countries. They should all get together and basically join forces, and then bring in other countries as well. I think SEM can play a role, I think UNESCO can play a role, I think Switzerland should play a role; they have all those organizations in Geneva, and the next summit is going to be there, so maybe that’s the right place, and have it bottom-up and top-down. One big organization that can play a role is the AI Alliance, which is a group that promotes open-source AI.
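On the part the questioner calls technologically easy: the core of federated learning is that each participant trains on its own data and shares only model parameters, which a coordinator then averages. The sketch below is a minimal illustration of that averaging step, with a toy linear model and synthetic data standing in for a real foundation model; the model, data, learning rate, and round count are all assumptions, not any particular group’s implementation.

```python
import numpy as np

# Minimal federated-averaging sketch: raw data never leaves each region,
# only parameter vectors are exchanged and averaged.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])                       # ground truth for the toy problem

def make_region_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

regions = [make_region_data(n) for n in (50, 80, 30)]  # three regions, data stays local

def local_update(w, X, y, lr=0.1, steps=20):
    """One region's local training: plain gradient descent on squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(5):                                    # federated rounds
    # Each region trains locally; only the resulting parameter vectors are shared.
    local_ws = [local_update(w_global.copy(), X, y) for X, y in regions]
    sizes = np.array([len(y) for _, y in regions])
    # The coordinator averages parameters, weighted by local dataset size (FedAvg).
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("global parameters after federated averaging:", w_global)
```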

Faith Waidaka

Yann, let me cut you short; we’ve run out of time, and we would like to thank you all for coming. Yes, thank you so much to all the speakers. We just have a small memento from the government side to make this a memorable event. Thank you. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (21)
Factual Notes
Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Yann LeCun is the executive chairman of AMI Labs.”

The transcript of the session identifies Yann LeCun as the executive chairman of AMI Labs, which is confirmed by the speaker introduction in the knowledge base [S3].

Confirmed (high)

“Sanjay Jain leads the digital public infrastructure team at the Gates Foundation.”

Sanjay Jain’s role as the lead for digital-public-infrastructure is corroborated by the knowledge-base entry that states he heads the digital public infrastructure team at the Gates Foundation [S3].

Confirmed (medium)

“Dr. Saurabh Garg outlined India’s approach to equitable compute access as part of a collaborative framework.”

The knowledge base describes Dr. Saurabh Garg presenting India’s “Maitri” platform and its six foundational pillars for shared compute, data and AI models, confirming his role and focus [S33].

Confirmed (medium)

“Over 2,000 languages have been documented on the African continent.”

The statement matches the figure given in the knowledge base, which notes that more than 2,000 African languages have been documented [S6].

Confirmed (high)

“Digitised data is heavily concentrated in the developed world, creating a structural inequity.”

The knowledge base highlights a global data divide, where a few entities control most data and developing countries act mainly as data providers, confirming the reported inequity [S101].

Confirmed (high)

“AI will only scale when “data for everyone is available” and secure personal data flows enable personalised services.”

The need for universal data flow to support services for all is explicitly stated in the knowledge base discussion on operationalising data free-flow with trust [S104].

Confirmed (medium)

“Top‑performing open‑weight, open‑source models are essential for equity, and open‑weight models differ from merely open‑source models.”

The distinction between open-source and open-weight models, and the importance of open-weight models for reproducibility and equity, is detailed in the knowledge base [S65].

External Sources (106)
S1
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — And so with, and India has been looking at this data empowerment and protection architecture, which is on that lines. An…
S2
Rights and Permission — 35 Based on SME discussion with Sanjay Jain, former Chief Product Manager of UIDAI.
S3
The Foundation of AI Democratizing Compute Data Infrastructure — Thanks, Faith. Thank you all for such a brilliant session. My name is Arun Sharma. I work with the World Bank. My questi…
S5
S6
Towards a Safer South Launching the Global South AI Safety Research Network — -Ms. Chenai Chair- Director of the Masakane African Language Hub
S7
Responsible AI for Shared Prosperity — – Philip Thigo- Chenai Chair – Shekar Sivasubramanian- Chenai Chair
S8
The Foundation of AI Democratizing Compute Data Infrastructure — -Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India
S9
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S10
The Foundation of AI Democratizing Compute Data Infrastructure — -Faith Waidaka: Panel moderator, builds electrical and mechanical infrastructure in data centers in Africa, Board Chair …
S11
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — Good. So, Chennai, and coming back this way to all my panelists, what is the single biggest barrier? And I can imagine t…
S12
Steering the future of AI — # Discussion Report: Yann LeCun on the Future of Artificial Intelligence ## LeCun’s Position on Large Language Models …
S13
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — She mentions advice from Yann LeCun, a professor at NYU and advisor at Meta, who advocates for this approach.
S14
Meta’s chief AI scientist Yann LeCun departs to launch world-model AI startup — Yann LeCun, one of the pioneers of deep learning and Meta’s chief AI scientist, is leaving the company to establish a new …
S15
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S16
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S17
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S18
The Inclusion of African Women and Ecommerce (Ecommerce Forum Africa) — Policies are essential to regulate and improve internet access, which serves as the backbone of the digital economy. Cur…
S19
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-morning-session-part-1 — The incorporation of AI in education will help narrow the learning divide, while advances in telemedicine, in predictive…
S20
ISBN: — There are several barriers to greater ICT uptake and use in vulnerable countries. These include; inadequate infrastructu…
S21
AI for Social Good Using Technology to Create Real-World Impact — The World Bank’s Sangbu Kim presented concrete examples of how locally successful solutions can achieve global scale. He…
S22
How African knowledge and wisdom can inspire the development and governance of AI — H.E Muhammadou M.O. Kah:Thank you so much, and good afternoon. And apologies, I was somewhere else, being pulled in anot…
S23
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Yeah, thanks Fadi. So with regards to opportunities, there are a lot of AI pilot projects that are coming…
S24
AI that serves communities, not the other way round — At the WSIS+20 High-Level Event in Geneva, a vivid discussion unfolded around how countries in the Global South can build …
S25
NRIs MAIN SESSION: DATA GOVERNANCE — It is important to ensure that data governance frameworks uphold individual rights and freedoms while addressing global …
S26
IGF 2019 – Best practice forum on gender and access — Changes in gender policies are needed. ‘We need gender-responsive not gender-sensitive policies, and e-skills training f…
S27
Organizing African talent to move humanity forward: Language technology for Africa — Open source models alone are not enough; support for communities and infrastructure is necessary
S28
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — And here, India is not waiting for permission. India is not waiting for permission. India is showing that it can be done…
S29
WS #208 Democratising Access to AI with Open Source LLMs — Developing countries face challenges in implementing open source AI due to limited infrastructure and technical expertis…
S30
Multistakeholder Dialogue on National Digital Health Transformation — Importance of architecture – DPI enables interoperability, reusability, and trust
S31
Empowering People with Digital Public Infrastructure — DPI infrastructure should be developed with interoperability in mind, allowing for sharing of resources and best practic…
S32
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – Thomas Davin- Henna Virkkune- Nandan Nilekani- Amandeep Singh Gill He warns that focusing too much on open source ele…
S33
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Great. Dr. Garg, any final insights? Thanks, Dr. Garg. Martin, I’ll go over to you. Through current… AI and the Paris…
S34
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S35
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — Gilwald advocates for regulatory mechanisms that would govern access to both data and computational resources. This regu…
S36
Skilling and Education in AI — The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, des…
S37
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Lee Tiedrich raised another challenge: the lack of data standardisation and voluntary sharing frameworks necessary for A…
S38
Driving Social Good with AI_ Evaluation and Open Source at Scale — This could lower barriers for new contributors and help with onboarding in both open source and industry contexts
S39
The Expanding Universe of Generative Models — Open-source models can accommodate new ideas and data modalities The importance of open-source models is emphasized, wi…
S40
The strategic shift toward open-source AI — The release of DeepSeek’s open-source reasoning model in January 2025, followed by the Trump administration’s July endor…
S41
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S42
WS #257 Emerging Norms for Digital Public Infrastructure — 4. Transparency and accountability: Ensuring these principles in DPI development was seen as crucial for building trust …
S43
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — Governance plays a key role in the success of DPI. It is highlighted that there are different layers of governance, incl…
S44
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — ## Conclusions and Path Forward ### Government Incentives and Regulatory Frameworks Christian Daswon: Thanks Ram. I’m …
S45
Closing the Governance Gaps: New Paradigms for a Safer DNS — This voluntary action demonstrates their commitment to addressing the issue and ensuring a safer online environment. Reg…
S46
How Small AI Solutions Are Creating Big Social Change — So now what’s next? Next steps, actually we are trying to expand this to more languages. We have some collaboration, for…
S47
Digital divides &amp; Inclusion — Indigenous peoples, who are often located in remote areas, are particularly affected by this disparity, exacerbating the…
S48
OpenAI leads shift in model development — Leading AI companies are rethinking their approach to large language models as scaling existing methods faces diminishing …
S49
Focus shifts to improving AI models in 2024: size, data, and applications. — Interest in artificial intelligence (AI) surged in 2023 after the launch of Open AI’s Chat GPT, the internet’s most reno…
S50
The Foundation of AI Democratizing Compute Data Infrastructure — Federated learning approach that allows data contribution to global models while maintaining local ownership and control
S51
AI for Good Technology That Empowers People — “So to make it even faster and achieve the sub 10 milliseconds, you actually have to bring in inference and training to …
S52
Transforming Health Systems with AI From Lab to Last Mile — Implement federated learning approaches that allow local data privacy while contributing to model improvement
S53
Steering the future of AI — LeCun envisions international partnerships where future foundation models are trained in a distributed fashion, with eac…
S54
Safe and Responsible AI at Scale Practical Pathways — Both speakers advocate for federated models where data remains with local organizations while enabling interoperability,…
S55
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — She mentions advice from Yann LeCun, a professor at NYU and advisor at Meta, who advocates for this approach.
S56
AI as critical infrastructure for continuity in public services — Data sovereignty requires control over jurisdiction, keys, and infrastructure beyond just local data storage Inclusive …
S57
7th edition — It is a view commonly held within the Internet community that certain social values, such as free communication, are fac…
S58
What is it about AI that we need to regulate? — Ensuring Better Representation of Developing and Least-Developed Countries in Global Digital GovernanceThe question of h…
S59
Research Publication No. 2014-6 March 17, 2014 — – (1) Policy objectives : Our cases studies illustrate that the public sector can develop and implement cloud-relevant …
S60
Artificial intelligence (AI) – UN Security Council — Furthermore, there was a consensus on the necessity for enhanced data literacy and data management skills. As AI systems…
S61
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — “At the same time, Estonia is investing in the next generation through the AI Leap initiative, a public -private partner…
S62
How AI Is Transforming Diplomacy and Conflict Management — Adoption barriers & capacity building Capacity development | Artificial intelligence
S63
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S64
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — – Desire Kachenje- Rahul Matthan Based on poll results from the session, open source first principles and local talent …
S65
Democratizing AI: Open foundations and shared resources for global impact — Academic research can achieve measurable impact and scale when properly funded and supported, moving beyond traditional …
S66
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S67
Why science metters in global AI governance — Similarly, I hope that this scientific body that’s been set up by the UN would also establish systems that would, would …
S68
Policy Network on Meaningful Access: Meaningful access to include and connect | IGF 2023 — Martin Schaaper:Yes, thank you. Short as possible. I’ll try to be short. I mentioned the good news. We have a lot of dat…
S69
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 21 Stephen Marshall, ‘The E-Learning Maturity Model’, Victoria University of Wellington. Available at http://elearning….
S70
Regionalism versus Multilateralism — similar conclusion in a somewhat similar fashion, although only in the context of a temporary transition phase.
S71
How AI Drives Innovation and Economic Growth — Ufuk Akcigit introduced a crucial analytical framework distinguishing between AI’s foundational layer and application la…
S72
WS #462 Bridging the Compute Divide a Global Alliance for AI — Alisson explains that the cost of creating compute capacity varies by region due to infrastructure and latency issues, w…
S73
Building Public Interest AI Catalytic Funding for Equitable Compute Access — India is proving that you can design AI ecosystems that are both globally competitive and globally competitive. And loca…
S74
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Lee Tiedrich raised another challenge: the lack of data standardisation and voluntary sharing frameworks necessary for A…
S75
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S76
Democratising AI: the promise and pitfalls of open-source LLMs — At the Internet Governance Forum 2024 in Riyadh, the session Democratising Access to AI with Open-Source LLMs explored a tr…
S77
Driving Social Good with AI_ Evaluation and Open Source at Scale — This could lower barriers for new contributors and help with onboarding in both open source and industry contexts
S78
The Expanding Universe of Generative Models — Regarding power dynamics, Gomez supports the devolution of power from large tech companies. However, he acknowledges the…
S79
WS #208 Democratising Access to AI with Open Source LLMs — Bianca Kremer: Hi, everybody hears me? First of all, I’d like to apologize for the delay and other procedures, we’re i…
S80
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Ryan Budish :I’m coming from Boston, Massachusetts, where it is quite late at night. So I’m going to try not to speak to…
S81
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S82
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — I believe so can governments and the sovereign use it or should they? Definitely but we need to be conscious of those 4 …
S83
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — France has been at the forefront of developing digital public infrastructure (DPI), even before the term was officially …
S84
Digital divides &amp; Inclusion — Indigenous peoples, who are often located in remote areas, are particularly affected by this disparity, exacerbating the…
S85
Nepal Engagement Session — This fireside chat demonstrated how AI can serve as a democratising force when designed with inclusion and accessibility…
S86
Digital Inclusion Through a Multilingual Internet | IGF 2023 WS #297 — In conclusion, the internet can serve as a powerful tool in supporting local languages, helping to overcome barriers and…
S87
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Community-led initiatives are most impactful when culturally grounded and supported by long-term partnerships Speakers …
S88
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — So now what’s next? Next steps, actually we are trying to expand this to more languages. We have some collaboration, for…
S89
OpenAI leads shift in model development — Leading AI companies are rethinking their approach to large language models as scaling existing methods faces diminishing …
S90
The Foundation of AI Democratizing Compute Data Infrastructure — The Q&A session revealed ongoing challenges around coordination mechanisms for global-scale federated learning, particul…
S91
Steering the future of AI — Yann LeCun: and not only that, you think they will never get there. Well, something will get there, and at this point, I…
S92
Focus shifts to improving AI models in 2024: size, data, and applications. — Interest in artificial intelligence (AI) surged in 2023 after the launch of Open AI’s Chat GPT, the internet’s most reno…
S93
How Small AI Solutions Are Creating Big Social Change — But certainly we are working across different states in India like we’re doing elsewhere in the world. And we do priorit…
S94
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — – Gong Ke, Executive Director of the Chinese Institute for the New Generation Artificial Intelligence Development Strate…
S95
MahaAI Building Safe Secure &amp; Smart Governance — His solution advocated for “intelligent governance” built upon five core principles: human-centred design, transparency …
S96
We are the AI Generation — In her conclusion, Martin articulated that the fundamental question should not be “who can build the most powerful model…
S97
Smart Regulation Rightsizing Governance for the AI Revolution — Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that g…
S98
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — – Engineer Chidi Gwebulam – Panelist: Legal expert Adamma Isamade: Good afternoon, everyone. The question is very inte…
S99
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S100
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — The extended analysis highlights several important points related to the impact of technology and AI on the global south…
S101
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Furthermore, the concentration of data collection and usage among a few global entities has led to a data divide. Many d…
S102
Setting the Rules_ Global AI Standards for Growth and Governance — So we’re talking more about safety standards, and those typically tend to trail the products. The products are out there…
S103
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — Again, I’m sure you’ll find, I’d be happy to talk about any of these for much longer, but we only have a short time. The…
S104
Operationalizing data free flow with trust | IGF 2023 WS #197 — Data flow is required for services to be available for everyone
S105
AI for agriculture Scaling Intelegence for food and climate resiliance — This comment is profoundly insightful because it cuts through the AI hype and addresses the fundamental challenge of res…
S106
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Audience: Yeah, hello. Mr. Knut Vatne here from the Norwegian Tax Administration, so I’m representing a large public sec…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sangbu Kim
2 arguments, 113 words per minute, 793 words, 419 seconds
Argument 1
Concentration of digitized data and compute in high‑income countries limits access (Sangbu Kim)
EXPLANATION
Sangbu points out that the majority of digital data and computing resources are held by high‑income nations, creating a structural barrier for low‑income regions to participate in AI development. This concentration hampers efforts to democratize AI across the globe.
EVIDENCE
He notes that more than 80 % of global datasets are heavily skewed toward developed, high-income countries, while less than 2 % reside in sub-Saharan Africa, with even smaller shares for individual countries like South Africa [7-9]. He also mentions the current struggle with lack of access to computing power and data sets [5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Barriers such as inadequate infrastructure, high costs, and limited skills are documented in [S20]; the importance of democratizing data computing and concerns about concentration are discussed in [S1]; India’s public compute plan highlights disparities between regions in [S28].
MAJOR DISCUSSION POINT
Core barriers to AI democratization
AGREED WITH
Yann LeCun, Saurabh Garg
Argument 2
Direct funds toward high‑impact use cases (agriculture, health, education) and user inspiration to drive adoption (Sangbu Kim)
EXPLANATION
Sangbu argues that investment should prioritize concrete, high‑impact applications such as agriculture, education, and healthcare, and also focus on inspiring users to adopt AI. Demonstrating clear value will generate demand for computing resources and foster sustainable AI ecosystems.
EVIDENCE
He proposes spending money on use cases in agriculture, education, and healthcare, and emphasizes the need to change user mind-sets and inspire low-income users who may not yet understand AI’s potential [291-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI applications in education and health are highlighted in [S19]; concrete World Bank examples in Nigeria illustrate high-impact use cases in [S21]; the need to develop use cases and inspire users is emphasized in the discussion notes [S3]; additional pilot projects in health, education and cultural preservation are described in [S23].
MAJOR DISCUSSION POINT
Funding allocation and priority setting
AGREED WITH
Chenai Chair, Saurabh Garg, Sanjay Jain
DISAGREED WITH
Chenai Chair, Saurabh Garg, Sanjay Jain, Yann LeCun
Chenai Chair
5 arguments, 169 words per minute, 1023 words, 361 seconds
Argument 1
Vast number of undocumented African languages hampers inclusive AI development (Chenai Chair)
EXPLANATION
The Chair highlights that over two thousand languages exist on the African continent, many of which lack documentation, making it difficult to build inclusive AI systems. The breadth of work required to document and represent these languages is the biggest barrier.
EVIDENCE
She states that there are over 2,000 documented African languages and that the primary barrier is the extensive work needed to document them for proper representation [32-33].
MAJOR DISCUSSION POINT
Core barriers to AI democratization
Argument 2
Community‑driven open models such as Crane AI demonstrate how local talent can build useful applications (Chenai Chair)
EXPLANATION
Chenai cites the example of Crane AI, an offline‑first AI stack that emerged from the Masakhane community, showing how locally developed open models can address health, education, and agriculture needs. This illustrates the power of community‑led innovation.
EVIDENCE
She mentions Crane AI as an offline-first AI stack focusing on health, education, and agricultural services, developed by the Masakhane community [304-305].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Masakhane community’s development of the offline-first Crane AI stack is reported in the session summary [S3]; community-driven AI initiatives are further described in [S24].
MAJOR DISCUSSION POINT
Open models, federated learning, and collaborative platforms
AGREED WITH
Yann LeCun, Saurabh Garg
Argument 3
Participatory, community‑owned data initiatives create trust and ensure relevance (Chenai Chair)
EXPLANATION
The Chair argues that data infrastructure must be built through participatory approaches that involve the community, ensuring trust and relevance to local realities. Successful community projects, such as Masakhane, demonstrate this principle.
EVIDENCE
She describes Masakhane’s participatory knowledge-building process, noting that the community brought together linguists, NLP experts, and speakers to develop datasets, earning a Wikimedia award in 2021, which built trust and recognition [160-169].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Participatory data-infrastructure approaches are outlined in the discussion notes [S3]; the importance of community-owned data for trust is reinforced in [S24]; broader data-governance frameworks emphasizing stakeholder engagement are presented in [S25].
MAJOR DISCUSSION POINT
Building trust and community empowerment
AGREED WITH
Saurabh Garg, Sanjay Jain, Sangbu Kim
Argument 4
Gender‑responsive, locally managed infrastructure promotes equitable benefits and sustainability (Chenai Chair)
EXPLANATION
Chenai emphasizes that projects must be gender‑responsive and designed with local contexts in mind, ensuring that women benefit economically and health‑wise. Partnerships with foundations and local entrepreneurs help achieve this.
EVIDENCE
She outlines Project Echo, a gender-responsive initiative co-led with the Gates Foundation and IDRC, aiming to develop tech solutions for women’s economic empowerment and health, integrating African languages to increase impact [173-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policies for African women’s inclusion and affordable internet access are discussed in [S18]; gender-responsive project design is highlighted in [S24]; calls for gender-responsive policies and e-skills training appear in [S26].
MAJOR DISCUSSION POINT
Building trust and community empowerment
Argument 5
Allocate resources to open‑model development, talent pipelines, and community‑led projects (Chenai Chair)
EXPLANATION
The Chair calls for funding open models and talent development, arguing that open‑source models enable local innovators to build applications, while talent pipelines ensure sustainable ecosystems. Community‑led projects are essential for adoption.
EVIDENCE
She stresses the need for open models and talent, citing the success of Crane AI and the importance of building capacity among end-users to create an ecosystem that embraces new technologies rather than fearing them [304-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for support to African talent and open-source models is emphasized in [S27]; calls for open-model funding and talent development are recorded in the session notes [S3]; the four-resource AI DPI framework that includes talent is described in [S34].
MAJOR DISCUSSION POINT
Funding allocation and priority setting
AGREED WITH
Saurabh Garg, Faith Waidaka
DISAGREED WITH
Sangbu Kim, Saurabh Garg, Sanjay Jain, Yann LeCun
Saurabh Garg
5 arguments, 130 words per minute, 700 words, 321 seconds
Argument 1
Lack of open models and limited AI literacy impede effective use of AI (Saurabh Garg)
EXPLANATION
Saurabh identifies two intertwined obstacles: insufficient access to open AI models and a deficit in AI literacy that prevents users from leveraging those models. He suggests that infrastructure alone will not solve the problem without model access and education.
EVIDENCE
He states that access to open models and AI literacy are essential, noting that infrastructure may be acquired over time but the focus should shift to models [34-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Challenges of deploying open-source LLMs in low-resource settings are documented in [S29]; skill gaps and the need for AI literacy are noted in [S20]; the discussion stresses AI literacy as a barrier in [S3].
MAJOR DISCUSSION POINT
Core barriers to AI democratization
AGREED WITH
Chenai Chair, Faith Waidaka
DISAGREED WITH
Sangbu Kim, Yann LeCun
Argument 2
DPI must be trusted, interoperable, and reusable to empower users and innovators (Saurabh Garg)
EXPLANATION
Saurabh outlines the essential qualities of digital public infrastructure: trust, interoperability, and reusability. These attributes enable citizens to co‑create solutions rather than merely consume services.
EVIDENCE
He lists the required characteristics (trusted, interoperable, reusable) and links them to empowering innovators to focus on solutions instead of building infrastructure themselves [105-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital public infrastructure that provides trust, interoperability and reusability is described in [S30]; cross-country interoperability and reuse are further discussed in [S31]; data-governance principles supporting trusted DPI appear in [S25].
MAJOR DISCUSSION POINT
Digital public infrastructure (DPI) and data sovereignty
Argument 3
The METRI “Friendship” platform proposes a modular, multi‑stakeholder global AI infrastructure (Saurabh Garg)
EXPLANATION
Saurabh presents the METRI (Multi‑stakeholder AI for a Trusted and Resilient Infrastructure) platform, a modular, open‑source initiative that aggregates compute, data, models, and talent components under shared governance. It aims to become a global AI infrastructure owned collectively.
EVIDENCE
He describes METRI as a digital public good that can be built modularly, incorporating the four AI components (compute, data, models, talent) and governance mechanisms, resulting from the AI democratization working group’s charter [110-113].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI-as-DPI model that integrates compute, data, models and talent is outlined in [S34]; the public-interest AI charter calling for co-creation of infrastructure is presented in [S33].
MAJOR DISCUSSION POINT
Open models, federated learning, and collaborative platforms
Argument 4
Federated structures keep data ownership with contributors, preventing new dependencies (Saurabh Garg)
EXPLANATION
Saurabh argues that a federated architecture ensures that data contributors retain ownership, avoiding the creation of new dependencies on external actors. This structure supports diverse languages and cultural contexts.
EVIDENCE
He notes that a federated structure would keep data ownership with contributors and preserve variety of languages and cultural contexts, while enabling safe, trusted sharing via technology and policy mechanisms [117-119].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-governance frameworks that preserve contributor ownership are discussed in [S25]; federated approaches enabling trusted sharing are highlighted in the DPI overview [S30].
MAJOR DISCUSSION POINT
Building trust and community empowerment
AGREED WITH
Chenai Chair, Sanjay Jain, Sangbu Kim
Argument 5
Prioritize capability development and domain‑specific niche models to reduce infrastructure demands (Saurabh Garg)
EXPLANATION
Saurabh recommends focusing on building people’s AI capabilities and developing smaller, domain‑specific models, which consume less compute and avoid the heavy resource needs of large language models. This approach enhances productivity while lowering infrastructure pressure.
EVIDENCE
He emphasizes capability development and the need for small, domain-specific niche models that require less power and infrastructure compared to large language models [300-303].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Skill deficits and the need for capacity building are identified in [S20]; infrastructure constraints that favor smaller, domain-specific models are noted in [S29].
MAJOR DISCUSSION POINT
Funding allocation and priority setting
AGREED WITH
Sangbu Kim, Chenai Chair, Sanjay Jain
DISAGREED WITH
Sangbu Kim, Chenai Chair, Sanjay Jain, Yann LeCun
Yann LeCun
9 arguments, 153 words per minute, 2772 words, 1083 seconds
Argument 1
Dominance of proprietary large‑scale models creates a bottleneck for open innovation (Yann LeCun)
EXPLANATION
Yann explains that the current AI landscape is dominated by proprietary, large‑scale models, which restricts open innovation because these models are not openly accessible or shareable. Open models are needed to break this bottleneck.
EVIDENCE
He states that the lack of open-weight, open-source models is a barrier, and that proprietary systems cannot access globally contributed data, limiting model quality; he proposes federated learning as a technical solution [41-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A warning that focusing only on open-source elements without broader trust and governance can reproduce silos is made in [S32]; the need for more than just open-source models is emphasized in [S27].
MAJOR DISCUSSION POINT
Core barriers to AI democratization
AGREED WITH
Sangbu Kim, Saurabh Garg
DISAGREED WITH
Sangbu Kim, Saurabh Garg
Argument 2
Open‑weight, open‑source models are a necessary condition for equitable AI (Yann LeCun)
EXPLANATION
Yann reiterates that making model weights and source code openly available is essential for equitable AI access worldwide. Without such openness, only a few corporations can develop and deploy powerful AI systems.
EVIDENCE
He echoes earlier points that top-performing open models are a necessary condition for removing barriers, noting that today no such open models exist and that data access is crucial for building better global models [41-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of open-source models for inclusive AI is argued in the discussion of African talent and model access in [S27]; concerns about proprietary dominance are echoed in [S32].
MAJOR DISCUSSION POINT
Open models, federated learning, and collaborative platforms
AGREED WITH
Saurabh Garg, Chenai Chair
Argument 3
Federated learning enables data contribution while preserving local data privacy (Yann LeCun)
EXPLANATION
Yann describes federated learning as a method where regions can contribute to model training without sharing raw data, thereby maintaining ownership and privacy. Parameter vectors are exchanged instead of the data itself.
EVIDENCE
He explains that regions can keep ownership of their data and contribute to training a global model by exchanging parameter vectors, a form of federated learning that protects privacy [41-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-governance and privacy-preserving federated approaches are described in [S25]; DPI frameworks that support federated data sharing are outlined in [S30].
MAJOR DISCUSSION POINT
Open models, federated learning, and collaborative platforms
AGREED WITH
Saurabh Garg, Sanjay Jain, Chenai Chair
Argument 4
Federated learning allows regions to contribute data for model training without relinquishing ownership (Yann LeCun)
EXPLANATION
Yann emphasizes that federated learning lets different regions add their cultural and linguistic data to global models while retaining control over the raw datasets. This approach mitigates data‑sovereignty concerns.
EVIDENCE
He notes that regions can contribute data without communicating it directly, preserving ownership while still improving global model quality through parameter exchange [117-119].
MAJOR DISCUSSION POINT
Digital public infrastructure (DPI) and data sovereignty
Argument 5
Current high training compute is a temporary phase; future models will be smarter and smaller (Yann LeCun)
EXPLANATION
Yann argues that the massive compute required for training today’s large language models is a transient situation. Future AI systems will be more intelligent, requiring fewer parameters and less training compute.
EVIDENCE
He states that training requirements are temporary because current LLMs are knowledge-storage systems; future models will replace knowledge with intelligence, becoming smaller though possibly more expensive at inference time [65-69].
MAJOR DISCUSSION POINT
Future AI compute needs and paradigm shift
DISAGREED WITH
Sangbu Kim, Saurabh Garg
Argument 6
Inference workloads are likely to become the dominant compute cost as models become more reasoning‑intensive (Yann LeCun)
EXPLANATION
Yann predicts that as models shift from pure knowledge storage to reasoning, the bulk of compute will move from training to inference, making inference the primary cost driver.
EVIDENCE
He notes that while training may become cheaper, inference could be more expensive because smarter models will need to reason more, keeping overall compute demand significant [69-71].
MAJOR DISCUSSION POINT
Future AI compute needs and paradigm shift
Argument 7
The next AI revolution will focus on world models that learn from sensory data and understand the real world, moving beyond text‑only knowledge storage (Yann LeCun)
EXPLANATION
Yann envisions a new AI paradigm where systems learn from multimodal sensory inputs (vision, video) to build world models that can predict and reason about real‑world dynamics, surpassing the limitations of text‑only LLMs.
EVIDENCE
He describes world models that ingest sensory data, compare the amount of data a child experiences versus text data, and argue that future AI must understand the physical world to achieve true intelligence [234-260].
MAJOR DISCUSSION POINT
Future AI compute needs and paradigm shift
Argument 8
Support academic research on non‑LLM AI paradigms and world‑model approaches (Yann LeCun)
EXPLANATION
Yann calls for increased funding and support for academic groups working on alternative AI architectures, such as world models (JEPA), which are currently under‑explored by industry. Academic research is crucial to break the LLM monoculture.
EVIDENCE
He notes that most work on world models is happening in academia, with industry focused on LLMs, and suggests propping up academic research to prevent LLMs from monopolizing resources [267-274].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for supporting African research communities and non-LLM work are documented in [S27].
MAJOR DISCUSSION POINT
Funding allocation and priority setting
DISAGREED WITH
Sangbu Kim, Chenai Chair, Saurabh Garg, Sanjay Jain
Argument 9
International bodies (UNESCO, AI Alliance, etc.) should coordinate federated‑learning collaborations (Audience / Yann LeCun)
EXPLANATION
Yann proposes that multilateral organizations like UNESCO and the AI Alliance can play a coordinating role in federated‑learning efforts, bringing together diverse groups to develop open‑source AI responsibly.
EVIDENCE
In response to an audience question, he suggests UNESCO, AI Alliance, and other bodies could help organize bottom-up and top-down collaborations for federated learning and open-source models [267-274].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of multilateral organizations in coordinating digital public goods and AI cooperation is discussed in [S32]; DPI frameworks that enable international collaboration are outlined in [S30].
MAJOR DISCUSSION POINT
Open models, federated learning, and collaborative platforms
Sanjay Jain
3 arguments, 182 words per minute, 1081 words, 355 seconds
Argument 1
DPI provides consent‑based data access that enables scalable AI services (Sanjay Jain)
EXPLANATION
Sanjay explains that digital public infrastructure creates a layer of consent‑based data access, allowing individuals to control their data while enabling AI services to scale securely and efficiently.
EVIDENCE
He describes DPI as a management layer that provides consented access to personal records, enabling applications to be built on top of this trusted data layer, and cites examples from India where such access fuels AI services [128-135].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of DPI in offering consent-based, trusted data access for AI services is explained in [S30]; interoperability and reuse that enable scalable services are further discussed in [S31].
MAJOR DISCUSSION POINT
Digital public infrastructure (DPI) and data sovereignty
AGREED WITH
Chenai Chair, Saurabh Garg, Sangbu Kim
Argument 2
Open‑source ID platforms (e.g., MOSIP) let countries customize identity systems while retaining control (Sanjay Jain)
EXPLANATION
Sanjay highlights that open‑source identity platforms like MOSIP allow nations to build tailored digital ID systems, preserving sovereignty while benefiting from shared technology.
EVIDENCE
He references MOSIP as a modular open-source ID platform adopted in Ethiopia (FIDA) and elsewhere, enabling countries to add policy layers and customize applications while maintaining local legal control [207-210].
MAJOR DISCUSSION POINT
Digital public infrastructure (DPI) and data sovereignty
Argument 3
Invest in building DPI globally to give countries control over their data and enable AI participation (Sanjay Jain)
EXPLANATION
Sanjay argues that allocating substantial funding to expand DPI worldwide will empower nations to own their data, fostering equitable AI participation and reducing dependency on external providers.
EVIDENCE
He notes that a $500 million fund could be used to deploy DPI systems globally, giving people digital records (e.g., health, financial) that can be leveraged by AI while maintaining control [289-292].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Global DPI investment proposals and the four-resource AI infrastructure model are presented in [S34]; the importance of interoperable, trusted DPI for worldwide AI participation is highlighted in [S30] and [S31].
MAJOR DISCUSSION POINT
Funding allocation and priority setting
AGREED WITH
Sangbu Kim, Chenai Chair, Saurabh Garg
DISAGREED WITH
Sangbu Kim, Chenai Chair, Saurabh Garg, Yann LeCun
Arun Sharma
1 argument, 157 words per minute, 140 words, 53 seconds
Argument 1
Mismatch between rapid software advances and slower hardware/physical infrastructure slows deployment (Arun Sharma)
EXPLANATION
Arun points out that software innovations, such as AI‑enabled smart glasses for farmers, are outpacing the physical supply chain for inputs like seeds and fertilizer, creating a lag that hampers real‑world impact.
EVIDENCE
He asks why hardware and physical resources (seeds, fertilizer) remain archaic while software evolves quickly, highlighting the gap between software and hardware progress [326-330].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Infrastructure gaps such as inadequate hardware and high costs are identified in [S20]; the contrast between software innovation and hardware capacity is illustrated by India’s compute plan in [S28].
MAJOR DISCUSSION POINT
Core barriers to AI democratization
Faith Waidaka
1 argument, 94 words per minute, 1085 words, 691 seconds
Argument 1
Adopt a holistic approach that simultaneously advances compute, talent, regulation, and reforms (Faith Waidaka)
EXPLANATION
Faith stresses that democratizing AI requires coordinated progress across multiple fronts—computing infrastructure, talent development, regulatory frameworks, and systemic reforms—rather than isolated interventions.
EVIDENCE
She summarizes the need for talent, compute, data centers, regulatory frameworks, and reforms to work together to make AI democratization possible [308-315].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple barriers across compute, skills, and regulation are listed in [S20]; a holistic DPI strategy that integrates these dimensions is described in [S30]; the need for integrated approaches is warned about in [S32].
MAJOR DISCUSSION POINT
Funding allocation and priority setting
AGREED WITH
Saurabh Garg, Chenai Chair
A
Audience
1 argument, 146 words per minute, 166 words, 67 seconds
Argument 1
International bodies (UNESCO, AI Alliance, etc.) should coordinate federated‑learning collaborations (Audience / Yann LeCun)
EXPLANATION
An audience member asks which organizations could coordinate federated‑learning collaborations, prompting a response that multilateral bodies such as UNESCO and the AI Alliance are well‑placed to facilitate global cooperation.
EVIDENCE
The audience question requests ideas on coordinating federated learning, and Yann suggests UNESCO, AI Alliance, and other international groups as potential coordinators [333-334] and [267-274].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of multilateral organizations in coordinating digital public goods and AI cooperation is discussed in [S32]; DPI frameworks that enable international collaboration are outlined in [S30].
MAJOR DISCUSSION POINT
Open models, federated learning, and collaborative platforms
Agreements
Agreement Points
Concentration of digitized data and compute in high‑income countries limits access to AI for low‑income regions
Speakers: Sangbu Kim, Yann LeCun, Saurabh Garg
Concentration of digitized data and compute in high‑income countries limits access (Sangbu Kim) Dominance of proprietary large‑scale models creates a bottleneck for open innovation (Yann LeCun) Lack of open models and limited AI literacy impede effective use of AI (Saurabh Garg)
All three speakers point out that the current AI ecosystem is dominated by data and compute resources held in wealthy countries, which creates a structural barrier for broader participation. Sangbu quantifies the skewed data distribution and the shortage of compute [7-9][5]; Yann stresses that proprietary models lock out many users and that open-weight models are missing [41-48]; Saurabh notes that without open models and AI literacy the gap cannot be closed [34-37].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of the global compute divide highlight that most high-performance hardware is concentrated in North America and Western Europe, creating self-reinforcing barriers for low-income regions [S72] and reflecting broader concerns about restricted access to computing resources [S66].
Open‑weight, open‑source models are essential for equitable AI democratization
Speakers: Yann LeCun, Saurabh Garg, Chenai Chair
Open‑weight, open‑source models are a necessary condition for equitable AI (Yann LeCun) Lack of open models and limited AI literacy impede effective use of AI (Saurabh Garg) Community‑driven open models such as Crane AI demonstrate how local talent can build useful applications (Chenai Chair)
The panel concurs that making model weights and source code openly available is a prerequisite for inclusive AI. Yann calls for top-performing open models [41-48]; Saurabh highlights the current scarcity of such models as a barrier [34-37]; Chenai provides a concrete example of an open model (Crane AI) emerging from a community effort [304-305].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder recommendations stress open-source first principles and local talent development as essential for autonomy and equitable AI, as noted in discussions on open-model priorities [S64] and calls for shared AI foundations to achieve measurable global impact [S65].
Federated learning / federated structures preserve data sovereignty while enabling global model improvement
Speakers: Yann LeCun, Saurabh Garg, Sanjay Jain, Chenai Chair
Federated learning enables data contribution while preserving local data privacy (Yann LeCun) Federated structures keep data ownership with contributors, preventing new dependencies (Saurabh Garg) DPI provides consent‑based data access that enables scalable AI services (Sanjay Jain) Participatory, community‑owned data initiatives create trust and ensure relevance (Chenai Chair)
All four speakers advocate for a federated approach that lets regions contribute to AI models without relinquishing control over raw data. Yann describes parameter-exchange federated learning [41-48]; Saurabh stresses a federated architecture to keep ownership [117-119]; Sanjay explains DPI-based consented access as a practical implementation [128-135]; Chenai underlines community ownership as essential for trust [160-169][173-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Federated learning is promoted as a privacy-preserving, distributed approach that keeps data local while improving global models, consistent with technical frameworks and policy endorsements from multiple sources [S50][S51][S52][S53][S54][S55][S56].
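To illustrate the parameter-exchange approach the panel converges on, here is a minimal federated-averaging sketch: each region trains a simple model on data it never shares, and a coordinator averages only the resulting parameters. The linear model, learning rate, and synthetic regional datasets are assumptions for illustration, not any panelist's actual system.

```python
# Minimal federated-averaging sketch (illustrative only).
# Each region keeps its raw data local and shares only model parameters,
# which a coordinator averages into a global model.
import numpy as np

def train_locally(global_weights, local_X, local_y, lr=0.1, epochs=5):
    """One region updates the shared linear model on its own data."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = local_X @ w
        grad = local_X.T @ (preds - local_y) / len(local_y)
        w -= lr * grad
    return w  # only parameters leave the region, never the data

def federated_round(global_weights, regional_data):
    """Coordinator averages the parameter updates from all regions."""
    updates = [train_locally(global_weights, X, y) for X, y in regional_data]
    return np.mean(updates, axis=0)

# Hypothetical regional datasets (each stays with its owner).
rng = np.random.default_rng(0)
regional_data = [(rng.normal(size=(100, 3)), rng.normal(size=100)) for _ in range(4)]

weights = np.zeros(3)
for _ in range(10):
    weights = federated_round(weights, regional_data)
print("Global model after federated rounds:", weights)
```

The design point the sketch captures is the one the speakers stress: the global model improves through shared parameters, while ownership of the underlying records never leaves the contributing region.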
Community participation and local ownership are key to building trusted AI data infrastructure
Speakers: Chenai Chair, Saurabh Garg, Sanjay Jain, Sangbu Kim
Participatory, community‑owned data initiatives create trust and ensure relevance (Chenai Chair) Federated structures keep data ownership with contributors, preventing new dependencies (Saurabh Garg) DPI provides consent‑based data access that enables scalable AI services (Sanjay Jain) Local data can be fully owned, controlled, and managed by local country and people (Sangbu Kim)
The panel repeatedly emphasizes that data systems must be built with and owned by the communities they serve. Chenai cites Masakhane’s participatory model [160-169]; Saurabh and Sanjay describe DPI mechanisms that retain local control [117-119][128-135]; Sangbu reiterates that local data ownership is a positive signal of democratization [55-59].
POLICY CONTEXT (KNOWLEDGE BASE)
Governance frameworks emphasize inclusive, community-level participation and local trust as essential for legitimate AI infrastructure, echoing calls for representation of developing countries in digital policy and local ownership [S56][S58][S64].
Funding should prioritize high‑impact use cases, talent development and community‑led projects
Speakers: Sangbu Kim, Chenai Chair, Saurabh Garg, Sanjay Jain
Direct funds toward high‑impact use cases (agriculture, health, education) and user inspiration to drive adoption (Sangbu Kim) Allocate resources to open‑model development, talent pipelines, and community‑led projects (Chenai Chair) Prioritize capability development and domain‑specific niche models to reduce infrastructure demands (Saurabh Garg) Invest in building DPI globally to give countries control over their data and enable AI participation (Sanjay Jain)
All speakers agree that limited resources should be channeled toward concrete applications that generate demand, build local talent, and support community-driven platforms. Sangbu stresses agriculture, health, education and user inspiration [291-298]; Chenai calls for funding open models and talent pipelines [304-307]; Saurabh highlights capability building and niche models [300-303]; Sanjay proposes a $500 million DPI rollout to empower countries [289-292].
POLICY CONTEXT (KNOWLEDGE BASE)
Funding models that combine high-impact use cases with talent development are reflected in national AI leap initiatives and capacity-building programmes, such as Estonia’s public-private AI Leap and UN-linked capacity building recommendations [S61][S62][S64].
Capacity development and AI literacy are essential for effective AI adoption
Speakers: Saurabh Garg, Chenai Chair, Faith Waidaka
Lack of open models and limited AI literacy impede effective use of AI (Saurabh Garg) Allocate resources to open‑model development, talent pipelines, and community‑led projects (Chenai Chair) Adopt a holistic approach that simultaneously advances compute, talent, regulation, and reforms (Faith Waidaka)
The need to build skills and literacy is a shared view. Saurabh explicitly links AI literacy to model access [34-37]; Chenai stresses talent pipelines as part of resource allocation [304-307]; Faith calls for a holistic strategy that includes talent development [310-311].
POLICY CONTEXT (KNOWLEDGE BASE)
UN and multistakeholder reports stress the necessity of data and AI literacy for effective adoption, linking skill development to AI governance strategies [S60][S62][S63].
Similar Viewpoints
Both see local data ownership and consent‑based access as a concrete indicator that a country is moving from merely consuming AI to building its own AI ecosystem. Sangbu highlights that locally owned data is a positive signal of democratization [55-59]; Sanjay describes DPI as the layer that makes such ownership operational for AI services [128-135].
Speakers: Sangbu Kim, Sanjay Jain
Concentration of digitized data and compute in high‑income countries limits access (Sangbu Kim) DPI provides consent‑based data access that enables scalable AI services (Sanjay Jain)
Both argue that a federated, community‑driven approach is essential to preserve data sovereignty and build trust. Chenai stresses participatory data creation [160-169]; Saurabh adds that a federated architecture safeguards ownership while enabling sharing [117-119].
Speakers: Chenai Chair, Saurabh Garg
Participatory, community‑owned data initiatives create trust and ensure relevance (Chenai Chair) Federated structures keep data ownership with contributors, preventing new dependencies (Saurabh Garg)
Both identify the scarcity of open models as a core barrier and call for more open‑source AI to enable broader participation. Yann frames open models as a prerequisite for equity [41-48]; Saurabh notes the current lack of such models as a blocker [34-37].
Speakers: Yann LeCun, Saurabh Garg
Open‑weight, open‑source models are a necessary condition for equitable AI (Yann LeCun) Lack of open models and limited AI literacy impede effective use of AI (Saurabh Garg)
Both present federated or consent‑based mechanisms as practical ways to let regions contribute data to AI systems without losing control. Yann describes federated learning with parameter exchange [41-48]; Sanjay explains DPI‑based consented access as a real‑world implementation [128-135].
Speakers: Yann LeCun, Sanjay Jain
Federated learning enables data contribution while preserving local data privacy (Yann LeCun) DPI provides consent‑based data access that enables scalable AI services (Sanjay Jain)
Unexpected Consensus
A leading AI researcher (Yann LeCun) and a development practitioner (Sanjay Jain) both endorse federated learning/DPI as the primary path to preserve data sovereignty while scaling AI services
Speakers: Yann LeCun, Sanjay Jain
Federated learning enables data contribution while preserving local data privacy (Yann LeCun) DPI provides consent‑based data access that enables scalable AI services (Sanjay Jain)
It is surprising that an academic focused on cutting-edge AI architectures and a policy-oriented DPI expert converge on the same technical-policy solution, federated learning/DPI, as the cornerstone for democratizing AI, indicating cross-disciplinary alignment on data sovereignty. Yann’s technical description of federated learning [41-48] and Sanjay’s policy-level DPI consent model [128-135] reinforce each other.
POLICY CONTEXT (KNOWLEDGE BASE)
LeCun has publicly advocated distributed training that preserves sovereignty [S53][S55], and development practitioners similarly promote federated DPI models, as documented in joint statements on interoperable federated models [S54].
Agreement that gender‑responsive, community‑owned projects are as important as high‑impact use cases for AI democratization
Speakers: Chenai Chair, Sangbu Kim
Gender‑responsive, locally managed infrastructure promotes equitable benefits and sustainability (Chenai Chair) Direct funds toward high‑impact use cases (agriculture, health, education) and user inspiration to drive adoption (Sangbu Kim)
While Sangbu emphasizes sectoral use cases, Chenai adds a gender‑responsive lens, and both concur that funding must address concrete community needs to achieve adoption. This blend of sectoral and gender‑focused priorities was not explicitly anticipated at the start of the discussion.
Overall Assessment

The panel shows strong convergence on three core themes: (1) the need to break the concentration of data and compute by promoting open‑source models; (2) the importance of federated, community‑owned data infrastructures (DPI) to preserve sovereignty and build trust; (3) the allocation of funds toward high‑impact, locally relevant use cases together with talent and capacity development.

High consensus across technical, policy and development perspectives, suggesting that future initiatives can be jointly designed around open models, federated DPI and targeted use‑case funding, thereby increasing the likelihood of coordinated action on AI democratization.

Differences
Different Viewpoints
Nature of compute barrier – structural concentration vs temporary phase
Speakers: Sangbu Kim, Yann LeCun, Saurabh Garg
Concentration of digitized data and compute in the developed world limits access (Sangbu Kim) Current high training compute is a temporary phase; future models will be smarter and smaller (Yann LeCun) Lack of open models and limited AI literacy impede effective use of AI (Saurabh Garg)
Sangbu Kim argues that the concentration of data and compute in high-income countries is a core, structural barrier to AI democratization [38]. Yann LeCun counters that the massive compute needed today is only a temporary phase, expecting future models to require far less training compute [65-69]. Saurabh Garg adds that the real obstacle is the lack of open models and AI literacy rather than compute infrastructure itself [34-37].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors observations that compute concentration is a structural issue rooted in market dynamics [S72], while some analyses describe the current shortage as a transitional phase in AI infrastructure evolution [S70].
Funding priorities – high‑impact use cases vs open‑model/talent development vs DPI deployment vs academic research
Speakers: Sangbu Kim, Chenai Chair, Saurabh Garg, Sanjay Jain, Yann LeCun
Direct funds toward high‑impact use cases (agriculture, health, education) and user inspiration to drive adoption (Sangbu Kim) Allocate resources to open‑model development, talent pipelines, and community‑led projects (Chenai Chair) Prioritize capability development and domain‑specific niche models to reduce infrastructure demands (Saurabh Garg) Invest in building DPI globally to give countries control over their data and enable AI participation (Sanjay Jain) Support academic research on non‑LLM AI paradigms and world‑model approaches (Yann LeCun)
Sangbu Kim proposes spending the $500 million on concrete use-case pilots in agriculture, education and health and on inspiring users [291-298]. Chenai Chair argues the money should fund open-source models and talent pipelines to enable community-led innovation [304-307]. Saurabh Garg stresses capability building and domain-specific niche models to lower infrastructure needs [300-303]. Sanjay Jain sees the fund as a way to deploy digital public infrastructure worldwide, giving nations data sovereignty [289-292]. Yann LeCun calls for channeling resources into academic research on alternative AI architectures, which he says are currently dominated by industry [267-274].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy forums have highlighted divergent funding priorities, with some emphasizing open-source and talent development [S64], others focusing on demonstrable research outcomes and scaling [S65], and additional calls for DPI deployment in development contexts [S54].
Priority of compute versus models and AI literacy
Speakers: Sangbu Kim, Saurabh Garg, Yann LeCun
Concentration of digitized data and compute in the developed world limits access (Sangbu Kim) Lack of open models and limited AI literacy impede effective use of AI (Saurabh Garg) Dominance of proprietary large‑scale models creates a bottleneck for open innovation (Yann LeCun)
Sangbu Kim focuses on the need to create demand for compute through clear applications [49-55]. Saurabh Garg argues that without open models and AI literacy, additional compute will not translate into impact [34-37]. Yann LeCun points to the dominance of proprietary models as the main barrier, suggesting that open-weight models are a necessary condition for equitable AI [41-48]. The three speakers agree something is missing but disagree on whether the priority is more compute, more open models, or more literacy/talent.
POLICY CONTEXT (KNOWLEDGE BASE)
Analytical frameworks distinguish foundational compute resources from application-layer models and underscore the complementary need for AI literacy, reflecting discussions on the compute divide and capacity building [S71][S66][S60].
Unexpected Differences
Compute as a structural barrier versus a temporary technical phase
Speakers: Sangbu Kim, Yann LeCun
Concentration of digitized data and compute in the developed world limits access (Sangbu Kim) Current high training compute is a temporary phase; future models will be smarter and smaller (Yann LeCun)
It is surprising that participants view compute needs so differently: Sangbu treats the concentration of compute resources as a long-term structural obstacle, while Yann sees the current compute intensity as a fleeting phase that will diminish with smarter models. This divergence affects how each proposes to allocate resources [38][65-69].
POLICY CONTEXT (KNOWLEDGE BASE)
Literature notes both a structural compute gap and the possibility of a temporary transition as infrastructure catches up, echoing observations on the compute divide and transitional phases [S72][S70].
Community‑driven open‑model development versus top‑down use‑case funding
Speakers: Chenai Chair, Sangbu Kim
Allocate resources to open‑model development, talent pipelines, and community‑led projects (Chenai Chair) Direct funds toward high‑impact use cases (agriculture, health, education) and user inspiration to drive adoption (Sangbu Kim)
While both aim to democratize AI, Chenai emphasizes grassroots, community-owned model development, whereas Sangbu pushes for funding of specific sectoral pilots. The contrast between bottom-up model creation and top-down application funding was not anticipated given their shared focus on impact [304-307][291-298].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholder consultations prioritize open-source, community-led model creation over top-down, use-case driven funding, as captured in multistakeholder recommendations for open-first principles and local talent empowerment [S64][S65].
Overall Assessment

The panel shows considerable convergence on the need for open models, digital public infrastructure, and community participation, but diverges sharply on where limited resources should be directed—whether toward building compute demand via sectoral pilots, investing in open‑source model and talent ecosystems, scaling DPI worldwide, or supporting academic research on new AI paradigms. The most pronounced disagreements revolve around the nature of the compute barrier and the optimal funding strategy.

Moderate to high disagreement; while participants share common goals, the lack of consensus on priority actions could impede coordinated policy and investment decisions, leading to fragmented efforts in AI democratization.

Partial Agreements
All three emphasize the importance of open models for democratizing AI, but Yann stresses open‑weight models as a necessary condition, Saurabh highlights the need for open models together with AI literacy, and Chenai focuses on funding open‑model development and talent pipelines. They share the goal of open‑model availability but differ on the mechanisms—policy, literacy, or talent investment [41-48][34-37][304-307].
Speakers: Yann LeCun, Saurabh Garg, Chenai Chair
Dominance of proprietary large‑scale models creates a bottleneck for open innovation (Yann LeCun) Lack of open models and limited AI literacy impede effective use of AI (Saurabh Garg) Allocate resources to open‑model development, talent pipelines, and community‑led projects (Chenai Chair)
All agree that digital public infrastructure is central to AI democratization. Saurabh outlines the required qualities of DPI (trust, interoperability, reuse) [105-108]. Sanjay proposes scaling DPI worldwide as a funding priority [289-292]. Faith calls for a holistic, multi‑dimensional approach that includes DPI among other pillars [308-315]. They differ on emphasis—technical attributes versus scaling versus integration with broader reforms.
Speakers: Saurabh Garg, Sanjay Jain, Faith Waidaka
DPI must be trusted, interoperable, and reusable to empower users and innovators (Saurabh Garg) Invest in building DPI globally to give countries control over their data and enable AI participation (Sanjay Jain) Adopt a holistic approach that simultaneously advances compute, talent, regulation, and reforms (Faith Waidaka)
Takeaways
Key takeaways
AI democratization is blocked by concentration of digitized data and compute in high‑income countries, and by the limited documentation and digitization of African languages. Open‑weight, open‑source models and federated learning are seen as essential technical pathways to give low‑resource regions access to AI without surrendering data ownership. Digital public infrastructure (DPI) that is trusted, interoperable, and reusable can provide consent‑based data access, enabling local innovation and preserving data sovereignty. Current high compute requirements for training large language models are a temporary phase; future AI will shift toward smaller, smarter models that are inference‑intensive and will rely on world‑model approaches that learn from sensory data. Funding must be allocated holistically: support high‑impact use cases (agriculture, health, education), develop domain‑specific niche models, build DPI globally, invest in talent pipelines, and back academic research on non‑LLM paradigms. Community‑led, participatory data initiatives (e.g., Masakhane, Project Echo) build trust, ensure relevance, and reduce extractive dynamics.
Resolutions and action items
Proposal to develop the METRI “Friendship” platform as a modular, multi‑stakeholder global AI infrastructure that integrates compute, data, models, and talent components. Commitment to expand open‑source ID platforms (e.g., MOSIP) and other DPI tools (OpenG2P, Digit) to more countries, allowing local customization and data control. Suggested allocation of a hypothetical $500 million fund: (a) build DPI and data‑record systems worldwide; (b) create and scale high‑value use cases in agriculture, health, education, and government services; (c) fund domain‑specific small models and AI literacy programs; (d) invest in open‑model research and talent development, especially in African language NLP. Call for international coordination bodies (UNESCO, AI Alliance, SEM) to facilitate federated‑learning collaborations and open‑model repositories. Encouragement for governments and development partners to adopt a participatory, gender‑responsive approach when designing community data infrastructures.
Unresolved issues
Concrete governance and technical standards for federated‑learning collaborations across countries remain undefined. Metrics for measuring when a country moves from AI consumer to AI builder (beyond “local data utilization”) were discussed but not finalized. Timeline and concrete pathway for breakthrough hardware improvements (beyond incremental CMOS gains) are still uncertain. How to effectively bridge the lag between rapid software advances (e.g., AI assistants) and slower physical infrastructure (e.g., seeds, fertilizer distribution) was raised but not answered. Benchmarks and evaluation criteria for achieving human‑level or super‑intelligent AI were questioned without a clear consensus. Specific mechanisms to ensure that open‑model development does not create new dependencies or power imbalances were not fully detailed.
Suggested compromises
Balancing the push for more compute with the need to generate clear, locally relevant AI applications that drive demand for infrastructure. Adopting a federated rather than centralized data/model architecture to preserve local ownership while enabling global model improvement. Combining technical solutions (open models, federated learning) with policy and protocol frameworks to protect data sovereignty and prevent extractive practices. Integrating talent development, community participation, and open‑source tooling so that both large‑scale providers and small startups can benefit.
Thought Provoking Comments
The computing requirements for training modern AI systems are temporary. Current LLMs are knowledge‑storage systems that need huge memory, but the next revolution will be smarter systems that don’t have to accumulate as much knowledge; they will reason more at inference time.
This reframes the dominant narrative that AI progress is limited by a permanent shortage of compute. It suggests that the real breakthrough will come from algorithmic advances that reduce training compute, shifting focus to model efficiency and intelligence rather than sheer scale.
It prompted the moderator to ask about the balance between training and inference compute and led other panelists (e.g., Saurabh Garg, Sangbu Kim) to discuss model accessibility, open‑weight models, and the need for new research directions rather than just building more data centers.
Speaker: Yann LeCun
Digital public infrastructure must be trusted, interoperable, shareable and give agency to people. We are building the METRI platform – a modular, multi‑stakeholder AI infrastructure that can add compute, data, models and talent as plug‑ins while keeping governance mechanisms local.
Introduces a concrete, governance‑focused framework (METRI) that moves the conversation from abstract barriers to a practical architecture for democratizing AI, emphasizing federation over centralisation.
Shifted the discussion toward concrete implementation strategies. Sanjay Jain echoed the DPI concept with examples (MOSIP, consented data access), and Chenai Chair later linked community‑driven data collection to this federated vision.
Speaker: Saurabh Garg
If we want data infrastructure that communities trust, we must be participatory: build together, let communities own the data lifecycle, and design gender‑responsive projects like Project Echo that empower rather than extract.
Highlights the social‑technical dimension of AI democratization, stressing community ownership, participatory design, and gender considerations—points often missing in technical debates.
Prompted a deeper look at how trust is earned, leading Faith to connect community ownership with the need for “small AI”. It also reinforced the earlier call for federated, locally‑controlled data models.
Speaker: Chenai Chair
The next AI revolution will be systems that understand the real world through sensory data, not just text. A child’s visual cortex sees ~10^14 bytes in four years—far more efficient than reading all internet text. World models that predict the consequences of actions are the path to true intelligence.
Introduces a paradigm shift from language‑only models to multimodal, world‑model AI, grounding the debate in cognitive science and providing a concrete metric (data volume) to illustrate the limitation of current LLMs.
Steered the conversation toward future research priorities and funding needs. Later, when asked about money, Yann emphasized supporting academic research on world models, influencing the panel’s view on where investment should go.
Speaker: Yann LeCun
The key indicator that a country is moving from consuming AI to building it is the ability to fully manage and own its local data sets.
Provides a measurable signal of AI sovereignty, linking data ownership directly to the transition from user to creator, and tying it back to the earlier point about demand for compute.
Guided the moderator’s follow‑up on small AI and user‑centric services, and reinforced the theme that data, not just hardware, is the catalyst for local AI ecosystems.
Speaker: Sangbu Kim
Open, open‑weight models are a necessary condition for democratizing AI. If regions can contribute data without giving up ownership—using federated learning and parameter exchange—we can build a global model that is better than proprietary systems.
Combines technical feasibility (federated learning) with a policy stance (data ownership), offering a concrete pathway to reduce the data‑centric power imbalance.
Inspired Sanjay Jain’s discussion of consented, federated access to personal records and reinforced the panel’s consensus on the importance of open models and federated architectures.
Speaker: Yann LeCun
We need to focus on AI literacy and open models rather than just more GPUs. Infrastructure can be acquired over time, but without people who can use the models, the barrier remains.
Challenges the assumption that compute scarcity is the primary obstacle, shifting attention to human capital and model accessibility.
Prompted other speakers (e.g., Sangbu Kim, Faith) to discuss user‑centric AI, talent development, and the role of education in creating demand for compute.
Speaker: Saurabh Garg
There is no such thing as a General AI (GAI). Human‑level AI may eventually appear, but it will not be a single breakthrough event; progress will be incremental and domain‑specific.
Counters hype around imminent AGI, grounding expectations and redirecting focus toward realistic, incremental advances.
Closed the session with a sobering note that tempered earlier optimism, influencing the audience’s final questions about benchmarks and timelines for AGI.
Speaker: Yann LeCun
Overall Assessment

The discussion began with a broad framing of compute and data scarcity, but pivotal comments—especially from Yann LeCun, Saurabh Garg, and Chenai Chair—reoriented the conversation toward governance, federated architectures, community ownership, and a shift from brute‑force compute to smarter, more efficient models. These insights introduced new frameworks (METRI, federated learning), highlighted the importance of trust and participation, and challenged the prevailing narrative that hardware alone will democratize AI. As a result, the panel moved from identifying problems to proposing concrete, multi‑layered solutions that blend technical, policy, and social dimensions, ultimately shaping a more nuanced and actionable roadmap for AI democratization.

Follow-up Questions
What is the lag between physical hardware (e.g., seeds, fertilizer distribution) and virtual AI software, and how can it be addressed?
Understanding this lag is crucial to ensure that AI-driven recommendations can be acted upon in real time, especially in agriculture, education, and healthcare in low‑income settings.
Speaker: Arun Sharma
Which organizations could coordinate federated learning collaborations for open AI models, and what governance structure should they adopt?
Effective coordination is needed to overcome technical and policy challenges of federated learning, ensuring data sovereignty while enabling global model improvement.
Speaker: Audience member (particle physicist from CERN) and Yann LeCun
Is data the only bottleneck for achieving AGI, and what benchmarks should be used to evaluate AGI/ASI?
Clarifying the role of data versus compute and defining measurable benchmarks are essential for tracking progress toward human‑level or super‑intelligent AI.
Speaker: Audience member (particle physicist from CERN) and Yann LeCun
How can open‑weight, open‑source AI models be developed that can surpass proprietary systems?
Open models are a key lever for democratizing AI access; research is needed on model architectures, training pipelines, and community governance that keep them competitive.
Speaker: Yann LeCun, Saurabh Garg, Chenai Chair
What technical breakthroughs at the hardware/fabrication level are required to significantly reduce AI compute intensity?
Current reliance on CMOS limits long‑term compute efficiency; breakthroughs (e.g., carbon nanotubes, photonics) could lower energy costs and broaden access.
Speaker: Yann LeCun
How can federated learning be implemented to preserve data sovereignty while enabling global model training?
Designing protocols that keep raw data local yet allow model updates is vital for regions wary of data extraction, supporting trustworthy AI collaboration.
Speaker: Yann LeCun, Saurabh Garg
What effective methods can be used to build AI talent and capacity in low‑income regions?
Talent pipelines are repeatedly cited as a barrier; research into curricula, mentorship models, and community‑driven training is needed.
Speaker: Sangbu Kim, Saurabh Garg, Chenai Chair
How can small, domain‑specific niche models be created to reduce compute requirements and improve relevance?
Domain‑focused models can achieve high performance with less infrastructure, making AI feasible for resource‑constrained environments.
Speaker: Saurabh Garg
What metrics or indicators best signal a country’s transition from AI consumer to AI builder?
Identifying measurable signals (e.g., local data ownership, model development) helps track progress toward AI self‑sufficiency.
Speaker: Sangbu Kim
How can digital public infrastructure be designed to be interoperable and open by design for startups and governments?
Interoperability enables small actors to plug into shared AI services, accelerating ecosystem growth without creating new dependencies.
Speaker: Sanjay Jain
What governance mechanisms and modular platforms (e.g., METRI) are needed to coordinate multi‑stakeholder AI infrastructure?
A modular, federated platform could align governments, private sector, and philanthropies, ensuring trust, resilience, and shared ownership.
Speaker: Saurabh Garg
How can community‑driven data collection be structured to ensure trust and avoid extractive practices?
Participatory approaches and local ownership are essential for sustainable, trusted data ecosystems, especially for under‑represented languages.
Speaker: Chenai Chair
Which use cases (agriculture, education, healthcare, government services) should be prioritized for AI deployment in low‑income contexts?
Prioritizing high‑impact, user‑centric applications can demonstrate value, drive adoption, and justify further investment.
Speaker: Sangbu Kim, Sanjay Jain
What is the optimal allocation strategy for a $500 million fund to maximize AI democratization impact?
Strategic distribution across infrastructure, talent, open models, and sectoral pilots is needed but requires further analysis to avoid one‑size‑fits‑all approaches.
Speaker: All panelists (responses from Sanjay Jain, Sangbu Kim, Saurabh Garg, Chenai Chair)
What should governments, AI ecosystems, and startups focus on over the next 1, 5, and 10 years given the shift toward smarter inference on devices?
Long‑term strategic planning is required to align policy, investment, and research priorities with evolving compute patterns.
Speaker: Yann LeCun

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Setting the Rules: Global AI Standards for Growth and Governance

Setting the Rules: Global AI Standards for Growth and Governance

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel convened to discuss why AI standards are essential for aligning global AI development with safety, trust, and inclusive outcomes [4-5]. Participants defined standards variously as benchmarking methodologies that quantify risk uncertainty (ML Commons), safety guidelines that follow product release (Qualcomm), and normative governance frameworks that set global “good” baselines (Singapore government) [13-15][17-24][28-33]. Microsoft described its internal Responsible AI Standard as a tool to align product, engineering, and sales teams around common expectations, while urging external standards to create a shared language across the ecosystem [41-46]. OpenAI’s AI Standards Lead emphasized translating internal risk-management practices into a common language for customers and building interoperability to foster consumer trust [56-59]. The Bureau of Indian Standards highlighted standards as mechanisms for consumer confidence and quality assurance, linking national work to ISO’s SC42 efforts [61-64].


A recurring challenge identified was determining “what is good enough,” requiring consensus that includes industry, regulators, and broader stakeholders rather than a single perspective [96-103][108-114]. Panelists agreed that standards must be open and inclusive so smaller firms can adopt them without building proprietary processes, a point underscored by Qualcomm’s call for open governance models [169-185]. Measuring AI performance was described as developing taxonomies, datasets, and evaluators that estimate uncertainty under defined assumptions, recognizing that different sectors may accept different risk thresholds [251-259]. The group noted that standards should complement, not replace, regulation, providing technical expectations that regulators can reference even when formal rules are absent [77-90][214-223].


Looking ahead, participants expect a rise in certification schemes that signal consensus on “good enough” and modular, interoperable standards that can evolve with advancing models [336-340][388-392]. Future-proofing will rely on process-oriented standards that remain applicable as AI capabilities change, while specific evaluation methods will be updated over time [346-354]. Accelerated development of testing methodologies within ISO processes was cited as a priority to keep pace with rapid AI innovation [378-382]. The panel concluded that despite the nascent state of AI standardisation, collective action across industry, policy, and standards bodies is vital to build trust and enable responsible AI deployment [462-470].


Keypoints


Major discussion points


Standards are seen as essential for building trust and aligning “what good looks like” across the AI ecosystem.


The moderator frames the need to demystify standard-setting and stresses global cooperation and inclusion [4-5]. Panelists echo this: Rebecca describes benchmarking as a way to measure risk, a major adoption barrier [13-15]; Amanda explains Microsoft’s internal responsible-AI standard that aligns product, engineering and sales teams and calls for a common external language [41-46]; Chris notes that standards solve a collective-action problem and give legitimacy that pure industry or government actions lack [108-114]; Esther adds that standards translate risk-management practices into a language of consumer trust and interoperability [57-59].


Defining and measuring standards is technically difficult and requires consensus on “good enough.”


Rebecca points out the recurring question of what constitutes “good enough” and stresses the need for a broad, multi-stakeholder consensus [97-102]. Lee lists concrete focus areas (testing, transparency disclosures, and incident reporting) as early priorities for standardisation [80-90]. Rebecca further explains that benchmarking must provide a methodology, taxonomy and reference implementations, yet the core challenge is estimating uncertainty under defined assumptions [250-259]. Chris expands on this by distinguishing high-level process standards from technical benchmark standards that must evolve with model capabilities [291-298].


Inclusive, global cooperation among industry, policy makers, and standards bodies is crucial.


Bhushan highlights the mix of “standard setters and measurers” from industry and policy [34-36]. Kshitij describes India’s AI governance framework and the inter-connectedness of ISO, ML Commons, IEEE and other bodies, stressing the need to adapt global standards to local use-cases [207-212]. Etienne stresses that open, inclusive governance (e.g., ML Commons, ISO) lets smaller firms participate and rely on standards without building their own risk-management systems [176-185]. Lee notes that regulators can reference technical standards to define expectations, and even without regulation standards help differentiate trustworthy providers [214-218].


Future outlook: faster development, certification, interoperable modular standards, and addressing concrete challenges such as language bias.


Bhushan envisions certification that signals “good enough” and a move toward consensus-based benchmarks within two years [337-340]. Chris argues that process standards are relatively future-proof, while specific evaluations must be updated as models advance [347-354]. Lee reports ongoing work on testing methodologies that she hopes to push through ISO within a year [378-382]. Amanda calls for a modular, interoperable standards ecosystem that avoids reinventing the wheel for each new use-case [388-393]. Audience concerns about language bias are addressed by Esther (multilingual evaluation suites) and Etienne (need for reusable safety tests across languages) [441-447][452-460].


Overall purpose / goal of the discussion


The panel was convened to demystify AI standard-setting, explain why standards matter for safety, trust and market adoption, identify the technical and governance challenges in creating and measuring those standards, and outline a coordinated path forward that brings together industry, regulators, standards organisations and civil-society stakeholders.


Overall tone


The conversation maintains a collaborative and solution-focused tone throughout. Early remarks are introductory and aspirational, quickly moving to constructive exchanges about concrete challenges (testing, transparency, measurement). When discussing obstacles such as the “good enough” dilemma, the speed of standard development, and skill gaps, the tone becomes more urgent but remains collegial. The closing segment retains optimism, emphasizing shared commitment to faster, interoperable standards and collective action. No major shifts to conflict or negativity are observed; the tone stays professional, forward-looking, and inclusive.


Speakers

Speakers (from the provided list)


Bhushan Sethi – AI transformation consultant; moderator of the panel.


Rebecca Weiss – Executive Director of ML Commons, an AI benchmarking organization and engineering consortium. [S1]


Etienne Chaponniere – Vice President of Technical Standards at Qualcomm. [S4]


Lee Wan Sie – Singapore government official working on AI governance and policy; focuses on setting global AI norms. [S8]


Amanda Craig – Leader of Microsoft’s Public Policy team for AI and the Office of Responsible AI. [S2]


Joslyn Barnhart – Works at Google DeepMind on AI standards, governance, and policy. [S10]


Chris Meserole – Executive Director of the Frontier Model Forum, advancing Frontier AI safety and security. [S12]


Esther Tetruashvily – AI Standards Lead at OpenAI. [S6]


Kshitij Bathla – Representative of the Bureau of Indian Standards (BIS), National Standards Body of India; represents ISO/IEC JTC 1/SC 42. [S17]


Audience – Various audience members asking questions (e.g., on language bias, auditability, privacy governance). [S19][S20][S21]


Additional speakers (not in the provided list)


Juan C. – Unnamed panel participant referenced by Amanda Craig; contributed a comment on standards aligning around “what good looks like.”


Full session report: Comprehensive analysis and detailed insights

Opening & Goal – Bhushan Sethi, an AI-transformation consultant, opened the session by stating that the panel’s aim was to demystify AI standard-setting, explore global cooperation, and define “what good looks like” for AI development [2-3][4-5][8-10].


Speaker definitions (in speaking order)


Rebecca Weiss (ML Commons) – Standards are benchmarking methodologies that define how risk is measured and provide technical artefacts for integration into development pipelines [13-15][250-261].


Etienne Chaponnière (Qualcomm) – Unlike telecom standards, which are mandatory before a product can ship, AI standards typically trail product release and focus on safety [16-24].


Lee Wan Sie (Singapore) – Standards set global norms and common technical processes for AI governance, aligning “what good looks like” across jurisdictions [26-33][80-90].


Amanda Craig (Microsoft) – Microsoft’s internal Responsible AI Standard aligns product, engineering and sales functions; external standards are needed to create a shared market language [41-46].


Joslyn Barnhart (Google DeepMind) – Regulation is already referencing standards that have not yet been created, creating an urgent need for industry-driven standardisation [48-51].


Chris Meserole (Frontier Model Forum) – Standards solve a collective-action problem by providing an open, credible process that levels the playing field [52-55][108-110].


Esther Tetruashvily (OpenAI) – Standards translate internal risk-management practices into a common language for customers and enable ecosystem interoperability [56-59].


Kshitij Bathla (Bureau of Indian Standards) – Standards are tools that build consumer trust, assure quality, and must be adaptable to Indian-specific use-cases while aligning with ISO [61-64][207-212][206-210].


Core themes


Trust & “good enough” – The panel repeatedly stressed the need for credible, non-subjective reporting and a consensus on what constitutes “good enough” for different sectors [96-102][112-114][134-136][215-224].


Measurement & benchmarking – Rebecca detailed the benchmark components (taxonomy, dataset, evaluator) and highlighted uncertainty estimation as the main technical challenge [252-261][255-259]; Chris distinguished high-level process standards from the scientific benchmarks needed to operationalise them [291-298]; Amanda emphasized that shared metrics are essential to assess progress beyond the “nascent” stage [274-277].


Inclusivity & open governance – Etienne, Kshitij and Lee emphasized that open governance models (ML Commons, ISO, IEEE) enable smaller firms to adopt standards without building bespoke risk-management systems [176-185][207-212][215-224].


Regulation vs. market – Joslyn noted that regulators cite yet-to-exist standards; Chris explained that regulators often offload risk-management requirements to the standards process, making standards a de facto regulatory tool [48-51][194-196]; Lee argued that standards can serve as market differentiators even without legal mandates [215-224].


Regional perspectives


India (Manav mission & BIS) – Kshitij described the “Manav” human-centric vision, India’s AI Governance Guidelines, and BIS’s work to align national standards with ISO/IEC JTC1/SC42 outputs while incorporating India-specific risk considerations [207-212][206-210].


Singapore – Lee reported ongoing work on testing methodologies that she aims to submit to ISO within the next year, underscoring the panel’s consensus on the need for accelerated timelines [376-382].


Audience Q&A


Skill-gap & auditability – An audience member asked how governments can audit industry-driven assurance programmes given technical skill gaps [398-405]; Chris replied that the openness and legitimacy of formal standard-setting bodies mitigate this risk [112-114]; Lee added that certification can provide an independent assurance mechanism [215-224].


Language bias – A participant queried multilingual bias in India; Esther explained OpenAI’s use of multilingual evaluation suites (MMLU and Indian-dialect tests) and called for broader community participation [441-447]; Etienne noted that reusable safety-test frameworks are needed for many languages [452-460].


Minimum-consensus vs. absolute requirements – Joslyn answered that regulators will likely accept standards that provide a concrete “minimum bar” rather than overly abstract criteria [436-438].


Two-year outlook & action items


– Bhushan envisaged a rise in certification schemes that codify consensus on “good enough” within the next two years [336-342].


– Chris advocated for future-proof, process-oriented standards with evaluation methods that evolve as model capabilities advance [347-354].


– Lee aims to accelerate ISO-level testing work within a year [376-382].


– Amanda called for a modular, interoperable standards ecosystem to avoid reinventing the wheel for each new use case [388-393].


– Etienne reiterated the importance of open standards that keep costs manageable for smaller companies [176-185].


Conclusion – The panel concluded that AI standards are essential for translating high-level norms into verifiable practices, building trust across consumers, enterprises and regulators, and addressing the collective-action problem of rapid AI innovation [462-470]. Unresolved challenges include auditability, defining “good enough” for diverse risk tolerances, and developing comprehensive multilingual evaluation frameworks. The discussion underscored that coordinated, multistakeholder effort is vital for standards to become a durable foundation for responsible AI deployment.


Session transcript: Complete transcript of the session
Bhushan Sethi

I’m going to provide a brief introduction and then I’ll have my panelists introduce themselves and we’ll get into the discussion. So I’m a consultant around AI transformation. I help companies implement AI, drive the return on investment in a responsible way with AI. What’s really important about this discussion is we need to demystify what we mean by standard setting. There’s been a whole lot of discussion at this week’s summit around the importance of global cooperation, that the importance of inclusion around AI, driving solutions that meet everybody’s needs. The tech CEOs spoke about it yesterday. World leaders have spoken about it. We’re here in India where it’s about planet and people and prosperity. So that’s what the discussion is going to be about.

And we are going to have time for Q&A at the end. But I’m going to have my panelists introduce themselves first, in the order that they’re sitting, and also talk about what standards mean for them. What lens are they looking at from a standards perspective around AI?

Rebecca Weiss

Hello, my name is Rebecca Weiss. I’m the executive director of ML Commons. We are an AI benchmarking organization, an engineering consortium that focuses on that problem. And so for us, as a technical standards organization around benchmarking, what that means for us is two things: one, we want to define the methodology for measurement, and two, we want to create the technical artifacts that allow for engineers to integrate this methodology into their development life cycle. So for us, when we see what’s happening in the world today, the ability to measure risk is a big barrier to adoption, and that ability to understand and estimate the uncertainty around the behavior of an AI system is something where we think benchmarking can help.

So, actually, we have a large panel, so I’m going to let everyone else have a chance to talk, and I’m sure more will come out in our dialogue.

Etienne Chaponniere

My name is Etienne Chaponniere, and I work for Qualcomm. I’m a vice president of technical standards. And so what we do within that role is, effectively, we have a team going to technical standards for AI, and we actually try to coordinate where it is that we need to go, how it is that we need to make sure that we understand what it means to be compliant. I come from a world of telecom, as Qualcomm can evoke to some folks. And for us, it’s a very different thing, right? For the telecom world, you cannot ship a product unless you comply with a standard, because you need it for interoperability. In the world of AI standards, it’s a bit different.

So we’re talking more about safety standards, and those typically tend to trail the products. The products are out there, and then they’re going to comply to standards at some point when the standards are available. What matters, however, what is common in all of this is that the standards need to be available at scale for everyone and in a way that engineering teams can do it easily, at least from the product side. So I think I’ll leave it at that, and, yeah, that’s it.

Lee Wan Sie

I’m Wan Lee from Singapore government. I work in AI governance and policy. So many things, but specifically for standards, what it means to us is setting norms. That means alignment globally on what good looks like. And specifically in the area of AI governance, then a lot of it has to do at this stage in terms of common methodologies and processes that we have to follow. So, but it’s still technical. It’s not a checkbox, but hopefully that helps us all align to what good looks like. Thanks.

Bhushan Sethi

And maybe before the next introduction, just so you can get a flavor, we have standard setters and measurers. We have people in industry and we have people who play in the policy and the regulatory environment. And that’s the importance around this topic.

Amanda Craig

Thank you. Hi, everyone. I’m Amanda from Microsoft. I lead the public policy team for AI and the Office of Responsible AI at Microsoft. I think Juan C. said it well when she described standards as really, like, aligning around what good looks like. And I would offer, you know, we actually at Microsoft in our office, we define something called our responsible AI standard that applies to all of our internal kind of product groups, our engineering function, our sales function. And if you think about, like, the role of that internal standard is to align all of the internal stakeholders we have around what good looks like. Like, externally, we need the same sort of mechanism, right? And that’s the role that standards can play in the broader ecosystem.

So we want to partner with our industry colleagues, and we want to partner with governments and others around the world to be able to define what good looks like so we can all have that common language and set of expectations.

Joslyn Barnhart

Hello. Jocelyn, Google DeepMind, where I also work on issues of AI standards, governance, and policy. Building on what’s been said, I think that was an interesting point that often technical standards come first and process and safety standards often come later. In the space of AI at the moment, actually, regulation has gone ahead and jumped to, you know, we’ve regulated and essentially made reference to standards that do not yet exist. So for places like Google DeepMind who have not invested heavily in the standards space in the past, this is now of the utmost priority because we actually need this to assist with implementation and compliance. So that is a primary goal on our side.

Chris Meserole

I’m Chris Meserole. I’m the executive director of the Frontier Model Forum. Our mission is to advance Frontier AI safety and security, and we work with many of the leading Frontier AI developers and deployers, including several colleagues on the stage today, to advance, you know, best practices for risk management. For Frontier AI in particular, there’s a set of unique and novel risks, and over the last couple of years the community has really started to develop and converge around a set of best practices that now, I think, need to start to graduate into actual formal standards, and I think that’s kind of why we’re here. That’s why we’re very interested in the standard-setting space.

Esther Tetruashvily

Hi, everyone. My name is Esther Tetruashvily, and I’m the AI Standards Lead at OpenAI. Echoing many of the things that have already been said, I think standards for us, especially as a frontier AI lab, is about translating some of our practices for risk management into the language of risk management for customers across the supply chain, and it’s also about creating a language for consumer trust and assurance. It’s also about, in the age of agents, thinking about interoperability and helping everyone benefit from this ecosystem that we’re developing here. So I’m really excited to be here and to talk about these issues with you all. Thank you.

Kshitij Bathla

Hello, everyone. I’m Kshitij Bathla from the Bureau of Indian Standards (BIS), the National Standards Body of India, and I’m here representing ISO/IEC JTC 1/SC 42, because BIS is part of SC 42. For us, I would say standards are the tools that enable consumer trust in whatever ecosystem they are developed for, and that enable the industry to ensure quality and that consumer trust. That’s the main focus area for us. Thank you.

Bhushan Sethi

So let’s start with why we need standards. Why are we even here? Because there’s a lot of confusion between standards, regulation, and legislation. Are we going to get global cooperation around these things? Maybe start from a standard-setting perspective and then from a regulatory perspective. Why are we here? What’s the problem we’re solving, and for whom?

Kshitij Bathla

So I would say there are multiple problems in the standards domain. It always starts with what we are tackling: what is AI? That was the primary focus of JTC 1/SC 42 when it started, so it defined what AI is, then what generative AI is, and now it is discussing what agentic AI is. So the key point is anticipating what is coming next and keeping pace with it. And once we have said what it is, how do we verify and validate whatever is being claimed? For example, someone says a piece of equipment, call it a washing machine, is equipped with AI. Is it actually equipped with AI, or is it just a normal logic system? That is something we are trying to standardize.

Bhushan Sethi

So it’s about trust, and it’s about verification. The tech firms represented here are moving very fast with model development, so we need standards there. From a regulatory perspective, what would you add?

Lee Wan Sie

I wouldn’t say from a regulatory perspective; maybe in terms of why, from an AI policy perspective, we think standards are helpful. Like I said, it’s about defining alignment on what should be in, let’s say, transparency. If you ask what the top three areas are today where we want to set standards, one would be testing. How do you do testing for AI, whether AI models or AI applications? That defines what good testing can look like. Two, perhaps transparency: what should disclosure look like? Everyone has their own way of sharing the information that they want to share.

One way is to standardize it so it’s easier for the readers, the people consuming this information, to understand. And I’m saying this in very broad terms; it depends on which reader you’re talking about, who’s going to consume it, but in broad terms that’s one way of standardizing. Maybe the third area could be how you report or monitor incidents. It’s still very, very early days, but those are areas where standards, again in the sense of alignment, might be useful.
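To make the disclosure and incident-reporting idea concrete, the sketch below shows what a standardized, machine-readable disclosure record and incident report might look like. It is purely illustrative: the field names and structure are assumptions made for this example and do not correspond to any published standard or national framework.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Illustrative only: hypothetical fields for a standardized disclosure
# record and incident report; not any existing standard's schema.

@dataclass
class EvaluationResult:
    benchmark: str      # name of the test or benchmark that was run
    metric: str         # e.g. "accuracy" or "violation_rate"
    value: float
    test_date: date

@dataclass
class DisclosureRecord:
    provider: str
    system_name: str
    version: str
    intended_use: str
    known_limitations: List[str] = field(default_factory=list)
    evaluations: List[EvaluationResult] = field(default_factory=list)

@dataclass
class IncidentReport:
    system_name: str
    version: str
    occurred_on: date
    severity: str       # e.g. "low" / "medium" / "high"
    description: str
    mitigation: str
```

The value of agreeing on a common set of fields, whatever they turn out to be, is that every reader consumes disclosures and incident reports in the same shape, regardless of which provider produced them.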

Bhushan Sethi

So: how do we report? How do we disclose? How do we make it credible, so it’s not a subjective tick-the-box exercise? Chris and Rebecca, from a standard-setting perspective, what would you add to that before we get the industry view?

Rebecca Weiss

I’m happy to add to this. There’s a theme that has come up a couple of times on this panel, which is: what is good enough? A standard represents a consensus about what is good enough. The problem we have is who contributes to that consensus. It probably shouldn’t be exclusively an industry perspective; more stakeholders, more constituencies, need to be represented in that definition. And then, on top of what is good enough, there is, as I think Joslyn mentioned when we were talking before this panel, a scientific element: how do you define the characteristics of a system such that you can actually create the kind of uncertainty estimation that lives up to a statistical guarantee? But there’s also a political element, which represents a whole set of issues that I’m actually not qualified to talk about, so I will pass it to Chris.

Chris Meserole

One of the things we should maybe do is back up a little bit to the original question of what standards are for. I think a big part of what standards are for is to try and solve a collective action problem. There’s a unique set of risks that we are worried about, and we want to make sure everyone’s on the same page so that no one actor is disadvantaged or advantaged compared to others. Having standards for how we’re going to manage risks across an ecosystem is extremely useful for that, so there’s a policy dimension to it.

There’s also an adoption dimension to it, because people want to know that there’s a common way across industry of handling a certain class of risk. And being able to set standards through a formal standard-setting body matters: to one of the points made earlier, by definition a standard-setting body is open, so there’s a legitimacy and a credibility to standard-setting bodies that you don’t have if it’s just industry or just government, in many cases. All of those factors coming together are exactly why we’re so keen on pushing forward the standards discussions.

Bhushan Sethi

Yep. So maybe from a hyperscaler perspective, maybe Esther, then Joslyn, and we can make the differences clear: how is this showing up at your firms, and how are you thinking about it?

Esther Tetruashvily

Yeah, that’s a great question. From a market adoption perspective, a lot of our technology, like general-purpose AI models or foundation models, is being integrated into existing ecosystems or built on top of existing stacks. And there’s a lot of confusion in terms of risk controls and risk management about what that means. We have our own risk management processes; they have their own risk management processes. And one of the barriers to adoption is having a common language to talk about how you map those controls onto one another. There’s a separate challenge of who is best positioned to control a particular risk. What are the risks? What are the net new risks?

What are the risks that already exist, where we don’t need to create something net new? So for us it’s an imperative, in some ways, to translate what we’re doing to manage risks into the language of upstream and downstream customers, so that they can understand and map those practices onto their controls. Then we can create a universal language that eases trust and assurance in a workable way across the market. There’s also space for, as several people have mentioned, regulation moving ahead of the standards, where we are still developing methodologies, working out what is standardizable in what we’re doing, recognizing where the science has not caught up yet and where we are in a place of more maturity.

Bhushan Sethi

And maybe just to bring it to life for the audience, given the huge amount of subscribers you have in India, around the world, growing every day, what’s changed in the standard vernacular at OpenAI?

Esther Tetruashvily

In terms of our adoption, or in terms of how we’re distributing it?

Bhushan Sethi

Yeah, the prominence of it, how people are thinking about it, the importance of the topic.

Esther Tetruashvily

So I think there’s an aspect of: what already exists that we can use to reassure customers that we are following the best practices for the industry, say for privacy or cybersecurity? There’s an existing risk management standard, ISO 42001, that OpenAI just got certified against, and that definitely signals something to the market and to customers. Then there’s also a transparency element, right? We have our safety frameworks, we update them, and we disclose information in our model cards about performance on a variety of metrics. And then there are certain things we do to help stakeholders across the spectrum learn how to build evaluations. We have published a safety hub that gets updated regularly and shows how we’re performing on a variety of metrics, what the best methodologies are, and how to work with them.

Bhushan Sethi

Great. So Joslyn, can you bring to life how Google DeepMind is thinking about standard setting in that context?

Joslyn Barnhart

Yes. I’ll take it back to what Chris was talking about in terms of collective action problems. Some of the mitigations we’re talking about, associated with some of the more extreme risks that frontier AI poses, can be quite costly. So I do think there is a strong industry incentive to work together to resolve this collective action problem. Again, as Chris said, doing this through standards, through an open, legitimate process, seems to be incredibly impactful. The worst thing for adoption would be a safety incident, so we have a collective incentive as an industry to make sure we raise the floor to avoid that, on all of our behalves.

So I do think standards at this point are seen as a very clear and important strategic play, essentially clearing the path for rapid adoption.

Bhushan Sethi

Amanda, how do these standards show up at Microsoft right now?

Amanda Craig

Thank you. Yeah, I was going to start by noting that at Microsoft, at Google, and at other places, this isn’t a totally new kind of process we’re going through, in terms of thinking about standards and their importance for adoption of this technology, for sufficient trust to enable adoption, and for enabling compliance. Esther made a really good point: especially as we deploy this technology, we are working with customers that have their own sets of standards and regulation, and part of the challenge we find ourselves facing right now in AI governance is that we have a lot of high-level norms and expectations that, again, are not so different from the patterns we’ve seen before.

Basically, we want to know how AI providers are managing risk, but we are in the early days of defining really what that means in practice, in a detailed way, especially across the AI value chain. What are model developers really responsible for doing for risk management? What are application developers really responsible for doing? How does that dock in to what deployers of those applications, which are oftentimes implementing existing standards and meeting existing regulatory requirements, are responsible for? How does all that fit together? We’ve done this with other digital technologies as well, like software and cloud services, where we’re ultimately trying to define in practice what everyone is responsible for doing.

How do we have a common language to be able to talk to each other, among providers in the technology supply chain and those ultimately deploying it? We really do need the standards to support that. Otherwise we are stuck at the high-level conversation about norms: we want to evaluate risk, we want to figure out what the right transparency practices are. Or we find ourselves deep in the technical weeds. Having a place in between, at the level of technical standards, really helps drive that common set of expectations so that you can have trust.

Bhushan Sethi

So we need them, they’re important, and we’ve got to drive adoption. There’s collective-action agreement here. From a Qualcomm perspective, Etienne, bring to life the business model and how you use this in engineering your products.

Etienne Chaponniere

Yeah, so there’s one thing that I’d like to note here. As Qualcomm, we basically provide chipsets, right? We’re not building big models. What matters to us, and the reason we’re engaged in those standards, whether it’s ISO, CEN-CENELEC for Europe, ML Commons, or other types of standards, is that they provide scale, not only across the globe, but in a way that allows many different types of companies to benefit. Let’s be clear: the companies that have the resources to set up their own standards and risk management systems internally are typically pretty big companies.

Now, the thing with AI is that there is a huge number of companies being created every day, and they don’t have the resources to put this together. So there are two conditions for making the standards being put together inclusive. One is that they’re open, as Rebecca was alluding to before. Whether it’s ML Commons, which has a very open governance model, or ISO, or CEN-CENELEC in Europe, there needs to be an opportunity for everyone to participate. That’s the first step. However, the reality is that not everyone has the means to participate, because they’re super focused: they need to bring up their own LLM for a particular use case, or maybe a very general use case, and they just don’t have the resources to do this.

So from that standpoint, having the standard as a mechanism for them to go directly to product, knowing they will comply with what the world, or the community, has set up, is really important. From Qualcomm’s side, the reason we want to participate is to enable this kind of accessibility for companies that are not always the biggest ones.

Bhushan Sethi

Yep. So there’s agreement that we need them. Before we go into how we set standards and how we measure and benchmark them, and Rebecca will bring that to life, a wildcard question. A lot of people listening to this might say: the world is not connected and cooperating around this, and we don’t have global regulations on AI. And yet we have industry leaders and standard setters vehemently agreeing. How should the audience think about that? Is there a disconnect there? Would anyone like to comment?

Chris Meserole

Part of the reason we’re all so interested in standards is that you’re seeing multiple jurisdictions say some version of: we think there are new risks with frontier AI, and we as the government are concerned, on behalf of our citizens, that those risks are attended to across industry. Those risks, and how to manage them, are probably best developed and managed through the standard-setting process, but governments aren’t always setting the standards. In the United States, for example, a couple of different states have passed requirements for frontier AI developers to have a frontier AI framework, but they don’t specify what should actually be in the framework. They offload some of that to the standards process, which is why I think it’s so important to have these standards in place. There’s a clear policy and regulatory interest in having mechanisms by which some of the risks that may come with frontier AI are managed, but we need to color in the lines a little bit on exactly how we’re all going to do that.

Bhushan Sethi

And before we go to Rebecca, just from an India perspective: PM Modiji talked about Manav yesterday and the AI vision, and in that there was a lot of focus on validity and governance, so standards were implied. Do you want to bring to life how India thinks about this before we go to Rebecca and talk about measurement?

Kshitij Bathla

So I would say the Manav mission is about welfare, it’s human-centric, and all those aspects are there. From the governance perspective, the approach so far is not to mandate everything; as of now, the India AI governance guidelines are there, providing a framework for the things that you should look into, just as a reference. That is the direction the Indian government is moving in right now. Coming to standardization, at the national level as well as the ISO level, and adding to the question you asked previously: standards bodies are interconnected with each other.

In ISO there is a liaison mechanism: ML Commons is a liaison there, IEEE is there, all the bodies are there, so they are all interconnected, and whatever comes out of these bodies is an outcome based on studies done by various forums, not just one body such as ISO. The Indian standards that we are developing are moving in the same direction, because this is something global; we can’t have silos specifically for India. There could be risks and use cases that are India-specific, and for those we need some specific guidance, but more or less everything is a global thing that we are trying to look into and then adapt to the specific use cases that we need.

Bhushan Sethi

Right, so we need global standards and we need to adapt them to local conditions and use cases. Let’s get a bit more technical. Rebecca, why is this hard? How do we measure it? How does it compare to benchmarking? Maybe Rebecca first, and then from a regulatory perspective, did you want to...

Do you want to make a quick comment, a response to what we’ve been getting at? Sorry, Rebecca. Please.

Lee Wan Sie

I just want to respond to Chris’ comment and your question about, you know, if there are no regulations, then why do we care about standards. Sure, I think there will be regulators who will say, yes, turn to the technical standards to define the expectations, which is the fair point that Chris made. But even where there is no regulation, standards are still useful. Esther just mentioned that OpenAI is certified against ISO 42001. You didn’t need to do that, but why did you do it, right? And Anthropic has done that as well. I think the idea is that it’s also a way for organizations, for enterprises, to differentiate themselves. And it doesn’t have to be the frontier model labs only.

It could be app developers and so on. A way to differentiate themselves and say: look, I’m adhering to a global standard, I’m demonstrating that I have actually implemented something that’s good enough, I’ve addressed a risk in this way. I think that’s one good reason for standards, even if there’s no regulatory cover. So the certification and assurance part is helpful. I just wanted to add that as a little bit of colour, to give some benefit to the standards community, which is still in very early days.

Bhushan Sethi

Thank you for bringing the regulatory perspective and the Singapore experience. So let’s get into measurement. And fellow panellists, if you want to respond to anything, just give me the signal; we’re going to make this an interactive conversation. So, Rebecca, how do we measure this?

Rebecca Weiss

Well, solve all the problems in one definition. No, I’m kidding. But as I said earlier, benchmarking consists of two things: a measurement methodology, at least from our perspective, and reference builds, implementations of that methodology that engineers can use. And the definition of a benchmark, as we’ve been trying to operationalize it in places like ISO and elsewhere, is a taxonomy, a data set, and an evaluator system. The point of that construct is, as Etienne pointed out, that it allows you to scale this kind of approach to the types of deployments we’re expecting to see in these AI settings.

The challenge behind all of this is that what you’re really trying to do is estimate uncertainty. You’re trying to provide a sense of: I’m not going to tell you that your system is, quote-unquote, safe or not. What I’m going to tell you is that under these conditions, under these assumptions, the estimated likelihood of a particular risky behavior is X. And then it is up to you, as a risk management professional, a deployer, a developer, to decide: is that good enough for your needs? I don’t think the answer will be the same for different sectors. Some sectors will have a much higher bar for the amount of uncertainty

that needs to be estimated, and other sectors will probably say: that’s good enough for me, I don’t need to go much further than what you’re offering right off the bat. So we can go into all of the questions that remain open, but those areas, developing the taxonomy, developing the data sets, developing the evaluators, and the best practices and standards that make it clear this is the best in the industry, this is the way it’s done, that’s what we need to get better at.
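As a rough illustration of the taxonomy, data set, and evaluator construct Rebecca describes, and of reporting an estimated likelihood rather than a safe/unsafe verdict, here is a minimal sketch. Everything in it, the names, the dataset shape, and the simple normal-approximation interval, is an assumption made for the example, not how ML Commons or any specific benchmark actually implements this.

```python
import math
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Prompt:
    text: str
    hazard_category: str   # the taxonomy label, e.g. "privacy" or "fraud"

def risky_response_rate(
    prompts: List[Prompt],
    get_response: Callable[[str], str],     # the system under test
    is_risky: Callable[[str, str], bool],   # the evaluator: (prompt, response) -> risky?
    z: float = 1.96,                        # ~95% normal-approximation interval
) -> Dict[str, dict]:
    """Estimate how often the system produces a risky response, per taxonomy category."""
    by_category: Dict[str, List[Prompt]] = {}
    for p in prompts:
        by_category.setdefault(p.hazard_category, []).append(p)

    results: Dict[str, dict] = {}
    for category, items in by_category.items():
        n = len(items)
        k = sum(1 for p in items if is_risky(p.text, get_response(p.text)))
        rate = k / n
        # A real benchmark would use a more careful interval and state its
        # assumptions explicitly; this only shows the shape of the output.
        margin = z * math.sqrt(rate * (1 - rate) / n)
        results[category] = {"n": n, "estimated_rate": rate, "margin": margin}
    return results
```

The point is the contract, not the arithmetic: the benchmark reports an estimated rate with an uncertainty, under stated conditions, and the deployer decides whether that is good enough for their sector.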

Bhushan Sethi

Yeah, so what I’m hearing is we need clarity. Clarity of the taxonomy, clarity of what we’re measuring, and it needs to be verifiable and credible. From an industry perspective, would anyone like to pick up, like, how’s that going to work? What’s in place now? What some of the challenges might be? How do you get organizational buy -in? Anything to add from an industry? Amanda, do you want to start us off?

Amanda Craig

Sure. I mean, I think there’s work to do across all the elements Rebecca just laid out, and that’s really a reason why we are invested in working with ML Commons: we need places that bring industry, civil society, and other stakeholders together to actually work through these problems and resolve these hard questions in ways that are going to be valid and reliable broadly. So that’s the work still ahead, but we are also making good progress, and thanks to ML Commons for helping to facilitate that. My thought is that we’ve been talking for years now about how nascent this field is, and that even judging whether we are making progress could be standardized, right?

We don’t have common ways of assessing: are we still in a nascent stage? What levels of uncertainty do we have? So to Rebecca’s point, I think this is absolutely essential, so we can all align on whether we have made progress, whether we have made sufficient progress to start relying on these things, and to what degree we can rely on them for important decision-making around deployments.

Esther Tetruashvily

Yeah, I’ll just add: if we take this back down to basics, whether you’re an enterprise customer or a consumer of our products, you just want to know, is this thing going to be accurate? Can I rely on it? Is it going to get me into trouble if I incorporate it into my workflows? Am I going to carry some sort of liability? At the core of standards is figuring out a common mechanism to provide an answer of reassurance: you can trust us, here’s a measurement, certified by somebody else, that this thing is reliable and accurate, that I can rely on it and use it. And I think we’re in a moment where we’re still trying to figure out, as an industry and as a community, what that’s going to look like. So part of it is advancing the measurement science, because we currently don’t have enough of it to give an estimate of what is accurate, what is reliable, what is safe for specific risks. And on the other side, what are the risks that we care about?

Some countries, some jurisdictions, might have one list of risks; other countries might have a different list. And then there’s the question of how you control for that, right? That’s what Rebecca, ML Commons, and many others are working on: how do you provide a mechanism of credibility that says we’ve measured this, this thing is safe, that can then be certified and understood in the same way by everyone. At the end of the day, in order to really unlock the value of this transformative new technology, and I think many of us here for the India AI Impact Summit recognize that potential.

We all also need to kind of answer those questions, and standards are the way you facilitate it.

Bhushan Sethi

Yeah, so there’s a theme of trust running through this. So maybe, Chris, add to that, and then I’ll bring in a comment.

Chris Meserole

Yeah, just briefly, I want to situate how benchmarking standards and some of the scientific questions we’ve been discussing fit in. We’ve been talking about different types of standards, and I want to clarify that there’s a broader, high-level set of process standards, where you say: for this class of risk, we’re going to identify what the risk is, then evaluate what that risk might actually be, and then put in place certain kinds of mitigations and controls. That’s a process for how you walk through risk management for something.

That absolutely needs to be standardized. But then, even within that, once we have agreed on what the risk is that we’re trying to evaluate, how do we actually do that? That’s where standards come in for the benchmarks we want to see developed, and that’s where some of these scientific questions really come into play, because we need credible scientific evaluations and tests for the whole broader risk management effort to hang together. That, I think, is critical for this whole process.
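As a rough sketch of the distinction Chris draws, a process standard can be thought of as a fixed identify-evaluate-mitigate loop whose contents, the specific evaluations, thresholds, and mitigations, are swappable over time. The example below is purely illustrative and does not describe any particular framework.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative only: the process stays fixed, while the specific
# evaluations, thresholds and mitigations can be updated as models advance.

@dataclass
class RiskArea:
    name: str                          # what the identified risk is
    evaluate: Callable[[], float]      # returns a risk score for this area
    threshold: float                   # score at or above which mitigations apply
    mitigations: List[str] = field(default_factory=list)

def run_risk_process(risks: List[RiskArea]) -> List[dict]:
    """Walk the identify -> evaluate -> mitigate process and record what was done."""
    report = []
    for risk in risks:
        score = risk.evaluate()
        triggered = score >= risk.threshold
        report.append({
            "risk": risk.name,
            "score": score,
            "mitigations_applied": risk.mitigations if triggered else [],
        })
    return report
```

Standardizing the loop while leaving the evaluations and thresholds replaceable is one way to reconcile a stable process standard with benchmarks that must keep evolving.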

Bhushan Sethi

Yes, this has got to live next to risk management, identification, and mitigation strategy in any company. Go ahead, Joslyn.

Joslyn Barnhart

Just briefly: I think the possibility of comparison across models is also super important here. There’s an important safety dimension to it. If we are all measuring the same thing and can give consumers some relative assessment of safety and quality, that could actually contribute to a race to the top rather than to the bottom. And that’s what we’re solving for.

Bhushan Sethi

So that’s the question of who we’re solving for. Two of the panelists have mentioned consumers. It’s not just about enterprise, and it’s not just about government; it’s also about consumer trust. Etienne, what would you add?

Etienne Chaponniere

What I wanted to add is that when we talk in general about creating standards to address the kinds of safety risks we’re going to see, I also want to reassure the audience that we’re not trying to solve every single risk here. There is a huge number of existing standards bodies, whether in ISO, CEN-CENELEC, and other places, that have already identified risks for their particular verticals, their particular industries, and those are already at work, right? How they’re going to use AI, and how AI safety is going to be translated into their own processes, those things are already happening.

So it’s not only the people on this panel who are working on this; the entire community of standards, whether in automotive, the radio equipment directive, everything, everybody is already looking at that. In the end, the difficult part is going to be making sure there is commonality in the types of techniques we use whenever there’s an automated technique we can use. Because from an industry standpoint, what is really useful, in particular if you’re a smaller company, is being able to run something efficiently that addresses as many of the use cases you run as possible. That’s an important thing to keep in mind when we’re doing this.

That’s why, from Qualcomm’s side, we obviously don’t address every single thing, but we want to make sure that, at least in the areas we’re involved in, there is as much commonality as possible in the measurement techniques we use.

Bhushan Sethi

So there’s consensus around the need to do it, and consensus that it’s hard but important for consumers, business, and investors. But Joslyn made the point that we’ve been talking about how this is a nascent topic. I want to look forward. What does this look like over the next two years? What do we have to get right? The models are changing, regulation could change, and China and the U.S. could operate in different ways. What does this topic look like, and how do we make sure we stay the course? Anyone want to offer a perspective as we look forward? And then we’ll start wrapping up.

And think about questions so we can get questions from the audience. I’ll take a crack at it. From my perspective, there are a couple of things I hope to see over the next couple of years. One is this idea of benchmarks and other standards representing consensus: we should be seeing more things like certification that represent further types of consensus. If benchmarking represents consensus about how to estimate and measure a thing, certification could end up representing agreement that a given definition of what is good enough deserves some form of certification. I don’t know exactly what that’s going to look like today, but I have to imagine those will represent truces, temporary agreements that this is good enough for my industry, good enough for my deployment, good enough for my use case.

So that’s what I’m hoping we start to see over the next two years. Anyone else want to add to that? Chris, jump in, but we’ve seen some of these disclosures in the past: people commit to environmental goals or DEI goals or other sets of standards or disclosures. Stakeholder capitalism was a big deal, and now it’s more about shareholders. So I’d love to understand your perspective on how we stay the course.

Chris Meserole

Yeah, I might distinguish a little bit between how we future-proof these standards and how we ensure they’re implemented over time. The way we future-proof them is, to some extent, to go back to the point I was making earlier about process standards. The process is somewhat agnostic to the actual AI system itself and the capabilities it has. If you have a good process for identifying and evaluating risks, that process can be a bit future-proofed. The specific evals you run are probably going to have to be updated over time to account for the greater capabilities of models as they advance.

And I think it’s similar with some of the controls that might be used to manage risks, if there are certain thresholds, or if the evaluations indicate a certain level of risk. So the subcomponents might need to be revisited, but the overarching framework can hopefully have some legs over time in terms of future-proofing. So we must commit to a process. We can’t fully future-proof because we can’t predict the future, but the process is so important. A good example is ISO 42001, which has come up a few times. There’s a certain class of AI that 42001 is very much tailored to, and even that AI has changed over time.

But 42001 is still a very good standard for managing those kinds of risks for those kinds of applications of AI, across a broad array of machine learning algorithms. The other point I would make concerns the implementation of standards over time and making sure they keep the same currency. There, I think we can rely on some of the incentives and the need for collective action that we’ve talked about before. Some of the incentive to make sure the collective action problem is addressed is going to rest with policymakers, which is why you’ve seen some regulatory activity.

Even in areas where there’s not, to Wan Sie’s point, there’s a clear market need for these standards to be developed and implemented over time, because consumers, whether individual consumers or enterprises, want to trust that the model is actually safe and secure to use. So I don’t see the importance of standards diminishing over time. If anything, as capabilities advance, consumers and enterprises are going to be more and more interested in making sure they can trust what they’re using.

Bhushan Sethi

Yes, it’s going to be consumer-driven. Wan Sie, just from a regulatory perspective, any thoughts? Chris mentioned implementation, which is the hard part where a lot of this gets stuck. Any perspective on implementation, or from your experience as a regulator, to add here?

Lee Wan Sie

Implementation of standards? Yes. I mean, Chris put it very well. One, regulators could say: I expect you to comply with certain requirements, and this is how you do it, and that’s where the standards set out the how. Or regulators may not set out certain requirements or expectations, and the market sets them instead: if you do it, then we will buy your product, for example. So from an implementation point of view, I think there will be some momentum, either from the market or from regulation, to move standards forward. But back to your original question of what’s going to happen in two years: I hope we can actually move faster on standards, in terms of the definition of standards.

I think that would be super useful. We’re leading some work on testing, well, benchmarking and red teaming, primarily on methodology definition. We hope that in the next year that can be done, sorted, and accepted within the ISO process. But experience has shown us that it takes a while. So in the next few years, hopefully we will find a way to move to standards faster.

Bhushan Sethi

So we need to move with speed from a regulatory perspective. Amanda is going to have the last word and then we’re going to go to questions. So please prepare them. Amanda?

Amanda Craig

I didn’t realize that. The one thing I wanted to add, in terms of a goal for where we could find ourselves two years from now, is a system of standards that are interoperable, with a sort of modular approach, where across general-purpose technology and, for example, different deployment scenarios, different use cases, and different sectors, we can actually get some efficiency. These standards are all going to need to continuously evolve and improve; we’re going to learn from the science and keep evolving the benchmarks and the methodology around the evaluations. But we don’t want to keep starting from scratch with every piece of that puzzle.

So we need to figure out a way to ensure that where we are making progress on the evaluation science, on how we evaluate AI models or systems, and on how we evaluate AI in deployment in critical sectors, for example, we have some synergy built into the standards ecosystem, so that we are making more dynamic progress across everything at the same time.

Bhushan Sethi

Yeah, so it needs to be interoperable and we can’t keep reinventing the wheel. So audience, questions? I’m going to collect questions, maybe three to five. So the gentleman at the front, the gentleman at the back, and then the lady with the hand up.

Audience

Hi there. Thanks for taking my question. Maybe I have a bit of a tricky question for you. On the panel, obviously, we have a lot of commercial interests. My question is this: since whatever assurance program you’re proposing is driven primarily by industry, how do we know that you’re not just going to create something that cheaply satisfies the industry in front of us versus what the public actually needs? And assuming you do have a program you’re going to talk about, how does a government or external agency audit such a program, given the skill gap involved in creating such a sophisticated compliance program? How can world governments cope?

Because I’ve been on a lot of panels this week, and the fear, uncertainty, and doubt is not only about the policy gap; it’s also the technical gap, the inability of world governments to audit properly whatever you have. Thank you.

Bhushan Sethi

Thank you. So keep the questions brief. Thank you for that. So that’s about, like, how do we make it real? How do we make it not performative? I’m going to collect two other questions, and then we’ll throw them to the panelists. So keep your hands raised. We have a gentleman at the back. And I think there was a lady or a gentleman with a tie. Yeah, hi.

Audience

So, as a recent computer science student, I’m interested in building AI for India. With such a distinguished panel, I thought I’d shoot my shot. I’m a little nervous, so I apologize for that. I want to talk specifically about language bias. In India there are 22 official languages, and I’m constantly thinking in two to three different languages. And when I utilize tools, such amazing tools built by everybody here, I’m wondering how you would go about tackling language bias and building guardrails around it, to ensure that a small model that a student like me is building does not go haywire.

Bhushan Sethi

Yeah, great question about language. Thank you, sir. And then, the gentleman with a tie, which doesn’t mean more gentlemen should wear ties, but yes, please.

Audience

Hi, Jules Polonetsky at the Future of Privacy Forum and our AI Governance Center. Standards always seem to be an easier path when they are more technical rather than touching on challenging social policy, and AI governance seems to capture the broadest potential collection of social policy. Given that there is a lot of disagreement, and some debate over whether one should even measure certain areas, do you imagine we’re talking about a minimum viable consensus with the broadest number of stakeholders, or is there a path to address, in some way, issues that some stakeholders see as absolutely necessary and others don’t want on the table?

Bhushan Sethi

Yep. All right, soundbite responses, panel: how do we make it real? How do we deal with the skills gap? How do we deal with the minimum viable consensus question? Anyone? Go ahead, Joslyn.

Joslyn Barnhart

On the performative question: now that standards are being referred to within actual regulation, to the extent that we want to use these standards as evidence of conformity with those regulations, that shapes a lot of the work we’re doing. It is a kind of minimum bar, at the very least, because if we make these things too high-level, too abstract, or too lowest-common-denominator, I don’t think regulators are going to accept those standards as evidence of conformity. So there is an interlocking pressure, created by the regulation itself, for some degree of quality. Thank you.

Bhushan Sethi

And Esther, do you want to comment on the language perspective and how you’re thinking about that at OpenAI? Thank you.

Esther Tetruashvily

Yes, we run a series of evaluations, like MMLU, to determine how well our models perform across a variety of languages. We also have a specific QA test that we run our models on which covers a variety of dialects within India. But the short answer is that this is an area where we need more participants, and I believe ML Commons is playing an active role in furthering that capacity building, as is working with local ecosystems to help collect and clean good data so that we can do this appropriately. This is another area, just as we’ve been saying, where we need to work in partnership to figure out how we collect the right information, how we measure it, how we build the evaluations, and then how we build an industry standard that all of the actors are held to.

And it’s going to have to be a collective effort.
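The specifics of any one lab’s multilingual evaluations are not spelled out here, but the basic mechanic of a per-language breakdown, which is what makes language gaps like the one the questioner raises visible, can be sketched as follows. The dataset format and function names are assumptions for this example only.

```python
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

def per_language_accuracy(
    examples: List[Tuple[str, str, str]],   # (language, question, expected_answer)
    answer: Callable[[str], str],           # the model under test
) -> Dict[str, float]:
    """Report accuracy separately per language so gaps between languages show up."""
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for language, question, expected in examples:
        total[language] += 1
        if answer(question).strip().lower() == expected.strip().lower():
            correct[language] += 1
    return {lang: correct[lang] / total[lang] for lang in total}
```

A breakdown like this is only as good as the underlying data, which is why the point about working with local ecosystems to collect and clean data in each language matters so much.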

Etienne Chaponniere

Just to add a little bit on the language question. In the end, there is no silver-bullet solution, right? There’s going to be a need for these kinds of safety tests or safety prompts, which are required for different types of languages, and you’re not going to be able to address every single thing, because there’s just a huge amount of diversity. Take me: I’m French by cultural background, I speak English, and I think in French and English all the time. There’s weird stuff that I say that will not be captured by a model built only for American English. So there’s going to be a need for more than one language to be captured, probably a lot of them, but this is where the community, basically everybody, needs to come and say: hey, this is what I want to capture for my language.

What matters, to make sure there is scale and it remains efficient, is that hopefully the tooling and the software framework around it can be reused. That’s really a big advantage there. Thank you.

Bhushan Sethi

So, in summary, and thank you, dear panelists, for the great discussion: you heard today that standards are important. This is a fast-moving world, and we have to design for consumers and for business. There is a commitment here around measurement; it’s both art and science. We need a consistent process, and across regulators, standard setters, policymakers, and the business and tech community there is a consistent understanding. So this is an emerging topic which I know we’ll continue to discuss. Thank you, panelists, and thank you to the audience. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“Standards solve a collective‑action problem by providing an open, credible process that levels the playing field”

The knowledge base notes that open standards act as a common technical language that can level the playing field for small companies and promote fairness, confirming the panel’s description of standards as a collective-action solution [S30] and highlights the role of standards in governance and risk-management coordination [S38].

Additional Context (medium confidence)

“Kshitij Bathla said standards are tools that build consumer trust, assure quality, must be adaptable to Indian‑specific use‑cases while aligning with ISO”

Kshitij Bathla’s participation is recorded in the transcript (introductory remark) and the discussion references ISO-based frameworks such as ISO 42001 that provide a common set of requirements for national bodies, adding context to his emphasis on Indian-specific adaptation and ISO alignment [S71] and [S75].

Additional Context (medium confidence)

“Open governance models (ML Commons, ISO, IEEE) enable smaller firms to adopt standards without building bespoke risk‑management systems”

Several knowledge-base entries describe how open, inclusive standards lower barriers for smaller actors, promote participation from diverse stakeholders, and are promoted through multistakeholder collaborations, supporting the panel’s point about open governance models [S30] and [S67] and the broader multistakeholder cooperation described in [S24].

Additional Context (low confidence)

“Regulators are already referencing standards that have not yet been created, creating an urgent need for industry‑driven standardisation”

The knowledge base discusses how regulators rely on industry standards as part of AI governance and often look to standards processes to fill regulatory gaps, providing context for the claim that regulators cite yet-to-be-finalised standards, though it does not explicitly confirm the non-existence of those standards [S38] and [S79].

External Sources (81)
S1
Setting the Rules_ Global AI Standards for Growth and Governance — – Kshitij Bathla- Chris Meserole- Etienne Chaponniere- Rebecca Weiss- Bhushan Sethi – Kshitij Bathla- Chris Meserole- L…
S2
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — I just have the image of the U.K. Information Commissioner doom -scrolling TikTok in my head now. Let’s do a quick round…
S3
How Trust and Safety Drive Innovation and Sustainable Growth — – Alexandra Reeve Givens- Amanda Craig – Denise Wong- Amanda Craig
S4
Setting the Rules_ Global AI Standards for Growth and Governance — -Etienne Chaponniere- Vice president of technical standards at Qualcomm
S5
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — Just to add a little bit on the question regarding the language. In the end, I don’t think there’s like a – there’s no s…
S6
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — And it’s going to have to be a collective effort. Yeah. Okay. Hi, everyone. My name is Esther Tetruashvily, and I’m the…
S7
Setting the Rules_ Global AI Standards for Growth and Governance — – Lee Wan Sie- Esther Tetruashvily- Chris Meserole – Rebecca Weiss- Esther Tetruashvily- Amanda Craig
S9
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — And it doesn’t have to be the frontier model labs only. It could be app developers and so on. A way to differentiate the…
S10
S11
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — And I think… similar with some of the controls that might need to be kind of used to manage some of the risks if there…
S12
Setting the Rules_ Global AI Standards for Growth and Governance — I’m Chris Meserole,. I’m the executive director of the Frontier Model Forum. Our mission is to advance Frontier AI safet…
S13
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — -Chris Meserole- CEO of FMF (organization not fully specified in transcript)
S14
Setting the Rules_ Global AI Standards for Growth and Governance — – Kshitij Bathla- Chris Meserole- Etienne Chaponniere- Rebecca Weiss- Bhushan Sethi
S15
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S16
ElevenLabs Voice AI Session &amp; NCRB/NPMFireside Chat — -Shailendra Pal Singh: Role/title not explicitly mentioned, but appears to be a co-presenter/expert on Bhashini translat…
S17
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — Hello, everyone. I’m Kshitij Bathla from Bureau of Indian Standards, the NETS. National Standards Body of India, and her…
S18
Setting the Rules_ Global AI Standards for Growth and Governance — -Kshitij Bathla- Works at Bureau of Indian Standards (BIS), the National Standards Body of India, representing ISO ICJTC…
S19
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S20
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S21
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S22
WS #69 Beyond Tokenism Disability Inclusive Leadership in Ig — Astbrink highlights the complexity of implementing high-level global instruments at the national level. She emphasizes t…
S23
AI That Empowers Safety Growth and Social Inclusion in Action — And standards turn principles into action. They shape risk management, they clarify accountability, they guide human ove…
S24
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Additionally, it provides e-learning materials to enhance understanding of AI standards. Moreover, the AI Standards Hub …
S25
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — The discussion reveals extraordinary consensus among all speakers on the fundamental principles of AI agent standards de…
S26
Harmonizing High-Tech: The role of AI standards as an implementation tool — Conversations around standards are transparent and inclusive Experts take decisions by consensus Renowned for their lo…
S27
Standardisation – The Key to Unlock the Sustainable Development Goals (SDGs) — The SDGs tackle challenges of global proportions and it would be ill-advised to look for solutions that are not globally…
S28
Artificial Intelligence &amp; Emerging Tech — Efforts should be made to avoid reinventing the wheel and use existing good/best practices Efforts to coordinate in the…
S29
Advancing Scientific AI with Safety Ethics and Responsibility — Thank you, Shyam. I think this is a very important question. And it’s also a topic that I’m really passionate about as w…
S30
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — Access to open markets through regulation is highlighted as beneficial for small messaging companies. This provides oppo…
S31
Day 0 Event #171 Legalization of data governance — Wolfgang Kleinwächter: Okay, thank you. Thank you very much and thank you for the invitation and thank you all the pri…
S32
The role of standards in shaping a safe and sustainable AI-driven future — Onoe acknowledged the rise of a novel AI innovation ecosystem and the indispensable role of standards in extending this …
S33
Artificial intelligence — Despite their technical nature – or rather because of that – standards have an important role to play in bridging techno…
S34
AI as critical infrastructure for continuity in public services — Thank you very much. Standards are a very important pillar of building trust. Another is inclusive governance. Changatai…
S35
Internet standards and human rights | IGF 2023 WS #460 — Ignacio Castro:Thank you. My name is Ignacio Castro, and I’m a lecturer in Queen Mary University of London, and I also c…
S36
High Level Dialogue: Strengthening the Resilience of Telecommunication Submarine Cables — Very high consensus with strong implications for effective policy coordination. The alignment suggests that the ITU’s In…
S37
Closing Session  — Sustained collaboration between governments, industry, and other stakeholders is essential for translating recommendatio…
S38
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Collaboration with industry was emphasized as crucial, and various arguments and evidence were presented throughout the …
S39
Keynote-Julie Sweet — She stresses that human leaders, not automated loops, must decide how AI tools are deployed responsibly, and that global…
S40
WS #103 Aligning strategies, protecting critical infrastructure — International cooperation and alignment of policies/standards is crucial
S41
Global Standards for a Sustainable Digital Future — This comment challenges the traditional static nature of standards development and proposes a paradigm shift toward dyna…
S42
Strengthen Digital Governance and International Cooperation to Build an Inclusive Digital Future — The forum revealed both the promise and complexity of international cooperation on digital governance. The strong consen…
S43
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion revealed relatively low levels of fundamental disagreement among panelists, with most tensions arising ar…
S44
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S45
Global AI Policy Framework: International Cooperation and Historical Perspectives — High level of consensus on fundamental principles and approaches, with differences mainly in emphasis and specific imple…
S46
Main Session | Policy Network on Artificial Intelligence — Panelists debated the feasibility of a global AI governance regime, acknowledging the challenges of multilateralism but …
S47
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S48
A Digital Future for All (afternoon sessions) — AI governance requires a multi-stakeholder approach due to the diverse nature of opportunities, risks, and inclusivity c…
S49
From principles to practice: Governing advanced AI in action — Chris emphasizes the importance of coordinating globally to standardize frontier AI risk management frameworks. He notes…
S50
Policymaker’s Guide to International AI Safety Coordination — Moderate disagreement with significant implications – while speakers share common concerns about AI safety, their differ…
S51
The geopolitics of digital standards: China’s role in standard-setting organisations — But if standardisation processes become overly politicised, this could slow them down. It could also mean that discussio…
S52
Navigating the Digital Future: Standards-led Digital Economy (BSI) — In conclusion, voluntary standards have a positive impact on globally diverse organizations, promoting economic efficien…
S53
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — These key comments transformed what could have been a dry technical discussion into a compelling narrative about the str…
S54
The role of standards in shaping a safe and sustainable AI-driven future — Seizo Onoe: Thank you very much. Good morning, everyone, and very warm welcome to you all. Our discussions at this summit…
S55
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Furthermore, the analysis explores the role of regulation in the AI landscape. It suggests that regulation should not on…
S56
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Context is highlighted as a crucial element for effective engagement in standards development. Australia’s experts have …
S57
Open Forum #34 How Do Technical Standards Shape Connectivity and Inclusion — Both audience members criticized the panel for discussing technical standards without including actual technical standar…
S58
The role of standards in shaping a safe and sustainable AI-driven future — Onoe acknowledged the rise of a novel AI innovation ecosystem and the indispensable role of standards in extending this …
S59
Setting the Rules_ Global AI Standards for Growth and Governance — I think it’s worth backing up from this thing. One of the original questions was, what are standards for? Is Chris’s min…
S60
AI as critical infrastructure for continuity in public services — “Trust also can influence economic confidence and cross-border collaboration.”[54]. “Standards are a very important pil…
S61
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Standards are voluntary codes of best practice that companies adhere to. They assure quality, safety, environmental targ…
S62
Internet standards and human rights | IGF 2023 WS #460 — Ignacio Castro: Thank you. My name is Ignacio Castro, and I’m a lecturer in Queen Mary University of London, and I also c…
S63
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — von Knebel Moritz: Yeah, thank you and thanks for having me. People have often asked this question, what are the regulat…
S64
Closing Session  — Sustained collaboration between governments, industry, and other stakeholders is essential for translating recommendatio…
S65
Keynote-Julie Sweet — She stresses that human leaders, not automated loops, must decide how AI tools are deployed responsibly, and that global…
S66
WS #103 Aligning strategies, protecting critical infrastructure — International cooperation and alignment of policies/standards is crucial
S67
International Standards: A Commitment to Inclusivity — Charlyne Restivo: Ladies and gentlemen, distinguished guests, good afternoon, and welcome to this WSIS High-Level Dialogu…
S68
Resilient infrastructure for a sustainable world — Benjamin Frisch offered CERN’s perspective on open collaboration, explaining how creating open ecosystems around technol…
S69
Global Standards for a Sustainable Digital Future — Dynamic Standards for Rapidly Evolving Technologies: Dimitrios Kalogeropoulos, an expert in AI applications in healt…
S70
YouthLead: Inclusive digital future for all — Melissa Michelle Munoz Suro: When I was 25, I found myself standing in a room full of policymakers, developers, designer…
S71
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists. Nina, thank you for giving me the floor. In the globa…
S72
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — 3. Global collaboration: Li Junhua stressed the importance of cooperation among all stakeholders. His Excellency Dr. Ab…
S73
High-level AI Standards panel — Challenges and Future Considerations: 3. Include: Engaging diverse stakeholders beyond traditional technical comm…
S74
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Auke Pals: Thank you very much, Lisa. So I hope at this point, the EU AI Act could steer us in the right direction a…
S75
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — It doesn’t mean that countries can’t have their own perspectives or sovereign outlooks, but there is sort of a… a move…
S76
Bridging the AI innovation gap — This comment provides a profound reframing of technical standards from bureaucratic requirements to tools of global equi…
S77
Importance of Professional standards for AI development and testing — Don Gotterbarn: Thank you, Stephen. The previous assertion to Stephen’s that says essentially, because there’s differenc…
S78
Google and Microsoft impress investors with AI growth — Microsoft Corp. and Google owner Alphabet Inc. impressed investors, surpassing Wall Street expectations with robust quarter…
S79
Closing the Governance Gaps: New Paradigms for a Safer DNS — Although regulation in the DNS industry is inevitable, it should aim to avoid fragmented jurisdictional approaches. If t…
S80
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — Current regulation approaches are inadequate and lag behind technological development Legal and regulatory | Economic …
S81
[WebDebate #12 summary] Standardisation: Practical solutions for strained negotiations, or an arena for realpolitik? — However, it is important to note that the main goal of a standard is to benefit the actors it applies to. For example, s…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Rebecca Weiss
2 arguments, 205 words per minute, 679 words, 197 seconds
Argument 1
Benchmarking methodology as core of standards (Rebecca Weiss)
EXPLANATION
Rebecca explains that a technical AI standard must first define a clear measurement methodology and then provide the technical artifacts that let engineers embed this methodology into their development pipelines. This two‑part approach ensures that standards are not just abstract rules but actionable tools for consistent evaluation.
EVIDENCE
She states that ML Commons wants to “define the methodology for measurement and … create the technical artifacts that allow for engineers to integrate this methodology into their development life cycle” [13-14] and later describes a benchmark as consisting of a taxonomy, dataset, and evaluator system together with a measurement methodology and reference implementations [252-261].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rebecca Weiss’s description of benchmarking components (taxonomy, dataset, evaluator) and the need for a clear measurement methodology is corroborated by S1, which outlines these three essential elements.
MAJOR DISCUSSION POINT
Definition and Purpose of AI Standards
AGREED WITH
Chris Meserole, Amanda Craig, Bhushan Sethi
Argument 2
Benchmark defined by taxonomy, dataset, evaluator; methodology essential (Rebecca Weiss)
EXPLANATION
Rebecca details the components that make up a benchmark: a well‑structured taxonomy, a representative data set, and an evaluation system, all tied together by a rigorous measurement methodology. These elements allow the benchmark to be reproducible and scalable across diverse AI deployments.
EVIDENCE
She outlines that “the definition of a benchmark … is a taxonomy, a data set, and an evaluator system” and that the methodology and reference builds enable engineers to scale the approach [252-254]. She also notes the challenge of estimating uncertainty and providing probabilistic risk estimates under defined assumptions [255-260].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 details the same definition of a benchmark—taxonomy, data set, evaluator system—tied together by a rigorous methodology, supporting Weiss’s claim.
MAJOR DISCUSSION POINT
Measurement and Benchmarking
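To make this decomposition concrete, the minimal Python sketch below illustrates a benchmark built from a taxonomy of hazard categories, a labelled dataset, and an evaluator, together with a measurement step that reports a rough uncertainty estimate. All names, signatures, and the simple confidence interval are assumptions made for this report, not ML Commons reference code or its actual methodology.

```python
# Minimal sketch of the taxonomy / dataset / evaluator decomposition described
# above; names and signatures are illustrative, not ML Commons reference code.
from dataclasses import dataclass
from typing import Callable
import math


@dataclass
class BenchmarkItem:
    prompt: str
    hazard_category: str  # a leaf of the taxonomy, e.g. "violent_crime"


@dataclass
class Benchmark:
    taxonomy: set[str]                     # hazard categories under test
    dataset: list[BenchmarkItem]           # prompts labelled by category
    evaluator: Callable[[str, str], bool]  # (response, category) -> unsafe?


def run(benchmark: Benchmark, model: Callable[[str], str]) -> dict[str, tuple[float, float]]:
    """Return per-category unsafe-response rate with a rough 95% margin of error."""
    flags_by_category: dict[str, list[bool]] = {c: [] for c in benchmark.taxonomy}
    for item in benchmark.dataset:
        response = model(item.prompt)
        flags_by_category[item.hazard_category].append(
            benchmark.evaluator(response, item.hazard_category)
        )
    report: dict[str, tuple[float, float]] = {}
    for category, flags in flags_by_category.items():
        if not flags:
            continue
        rate = sum(flags) / len(flags)
        # Normal-approximation interval as a stand-in for the probabilistic
        # risk estimates described above; a real methodology would be richer.
        margin = 1.96 * math.sqrt(rate * (1 - rate) / len(flags))
        report[category] = (rate, margin)
    return report
```

A reference implementation in this spirit, packaged alongside the methodology, is what would let engineering teams fold the benchmark into their development pipelines as Weiss describes.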
Amanda Craig
4 arguments, 180 words per minute, 984 words, 327 seconds
Argument 1
Translating high‑level norms into practice aligns with policy (Amanda Craig)
EXPLANATION
Amanda argues that the difficulty lies in turning broad AI governance norms into concrete, actionable practices across the AI value chain. She stresses that standards are needed to bridge the gap between high‑level expectations and day‑to‑day risk‑management responsibilities of developers, deployers, and users.
EVIDENCE
She notes that “we want to know how AI providers are managing risk, but we are in the early days of defining really what that means in practice” and that this translation is essential for aligning with policy expectations [153-160].
MAJOR DISCUSSION POINT
Standards vs Regulation/Policy
Argument 2
Internal responsible AI standard aligns stakeholders; external standards provide common language (Amanda Craig)
EXPLANATION
Amanda describes Microsoft’s internal Responsible AI Standard, which aligns product, engineering, and sales teams around what “good” looks like. She adds that external standards are needed to give the broader ecosystem a shared language and expectations.
EVIDENCE
She explains that Microsoft defines a “responsible AI standard that applies to all of our internal … product groups, our engineering function, our sales function” to align internal stakeholders, and calls for partnership with industry and governments to define external standards [42-46].
MAJOR DISCUSSION POINT
Trust and Consumer Confidence
AGREED WITH
Bhushan Sethi, Esther Tetruashvily, Kshitij Bathla, Lee Wan Sie, Joslyn Barnhart, Chris Meserole
Argument 3
Standards needed to assess progress, uncertainty levels across sectors (Amanda Craig)
EXPLANATION
Amanda points out that without common standards it is hard to gauge whether the AI field has moved beyond its nascent stage or to what degree uncertainty is acceptable in different sectors. She calls for standardized ways to measure progress and determine when AI systems are reliable enough for deployment.
EVIDENCE
She asks “how do we know if we have made sufficient progress?” and argues that “we need to standardize how we assess progress, uncertainty levels, and when we can rely on these systems” [274-277].
MAJOR DISCUSSION POINT
Measurement and Benchmarking
AGREED WITH
Rebecca Weiss, Chris Meserole, Bhushan Sethi
Argument 4
Interoperable, modular standards avoid reinventing the wheel (Amanda Craig)
EXPLANATION
Amanda envisions a future where standards are modular and interoperable, allowing different sectors and use‑cases to reuse common components rather than building new ones from scratch. This approach would accelerate progress and keep standards up‑to‑date with evolving science.
EVIDENCE
She describes a “system of standards that are interoperable where we have a sort of modular approach” and stresses the need to avoid “starting from scratch with every piece of that puzzle” while evolving benchmarks and methodology [388-392].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for interoperable, modular standards is reinforced by S25, which stresses open interoperability and avoiding vendor lock‑in, and by S28, which calls for avoiding reinventing the wheel and reusing existing best practices.
MAJOR DISCUSSION POINT
Implementation, Future‑Proofing, Outlook
AGREED WITH
Chris Meserole, Lee Wan Sie, Bhushan Sethi
Etienne Chaponniere
3 arguments, 194 words per minute, 1066 words, 328 seconds
Argument 1
AI safety standards trail products, differ from telecom compliance (Etienne Chaponniere)
EXPLANATION
Etienne contrasts the telecom world, where products cannot be shipped without compliance, with AI, where safety standards typically appear after products are already on the market. He emphasizes that AI standards must become widely available and easy for engineering teams to adopt.
EVIDENCE
He notes that “you cannot ship a product unless you comply to a standard” in telecom, whereas “in the world of AI standards, it’s a bit different… safety standards typically trail the products” [20-24].
MAJOR DISCUSSION POINT
Definition and Purpose of AI Standards
Argument 2
Open governance models ensure small firms can comply (Etienne Chaponniere)
EXPLANATION
Etienne argues that for standards to be inclusive, they must be open and governed in a way that allows participation from smaller companies that lack resources to develop their own standards. Open models like ML Commons and ISO enable this inclusivity.
EVIDENCE
He states that “there needs to be an opportunity for everyone to participate” and cites open governance models such as ML Commons, ISO, and Sentinelic as mechanisms for inclusive participation [179-182].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S24 highlights inclusive access to AI standards and broad stakeholder participation, while S30 discusses how small companies benefit from open, democratic standards, aligning with Etienne’s point.
MAJOR DISCUSSION POINT
Inclusivity and Global Cooperation
AGREED WITH
Rebecca Weiss, Lee Wan Sie, Chris Meserole
Argument 3
No silver bullet; multiple language safety tests required (Etienne Chaponniere)
EXPLANATION
Etienne acknowledges that language bias cannot be solved by a single solution; instead, a variety of safety tests and prompts must be created for different languages and dialects. He stresses community involvement to define language‑specific requirements while keeping tools reusable.
EVIDENCE
He says “there’s no silver bullet solution” and that “there’s going to be a need for more than one language” and that the community must decide what to capture for each language, while keeping the tooling efficient and reusable [451-460].
MAJOR DISCUSSION POINT
Language Bias and Technical Specifics
AGREED WITH
Esther Tetruashvily, Audience
Esther Tetruashvily
3 arguments, 180 words per minute, 1072 words, 355 seconds
Argument 1
Standards translate risk management into trust language, ISO certification (Esther Tetruashvily)
EXPLANATION
Esther explains that standards help convert OpenAI’s internal risk‑management practices into a language that customers can understand, building trust. She highlights OpenAI’s ISO 42001 certification as a concrete signal of compliance and reliability.
EVIDENCE
She says standards “translate some of our practices for risk management into the language of risk management for customers” and notes that OpenAI is “certified in ISO 42001” which signals trust to the market [56-59][134-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 explains how standards turn high‑level risk‑management principles into actionable tools that build trust with customers, supporting the claim about translating risk practices and the role of ISO certification.
MAJOR DISCUSSION POINT
Trust and Consumer Confidence
AGREED WITH
Bhushan Sethi, Amanda Craig, Kshitij Bathla, Lee Wan Sie, Joslyn Barnhart, Chris Meserole
Argument 2
OpenAI’s safety hub, model cards, ISO 42001 certification as measurement tools (Esther Tetruashvily)
EXPLANATION
Esther describes concrete measurement artifacts OpenAI uses: model cards detailing performance, a publicly updated safety hub, and ISO 42001 certification. These tools provide transparent evidence of safety and reliability for users and regulators.
EVIDENCE
She mentions “model cards performance on a variety of metrics” and a “safety hub that gets updated regularly” as well as the ISO 42001 certification that signals adherence to industry best practices [133-139].
MAJOR DISCUSSION POINT
Measurement and Benchmarking
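As a loose illustration of the kind of structured artifact a model card represents, the snippet below encodes one as plain data. The fields shown are assumptions made for this report and do not mirror OpenAI's published model cards, safety hub, or any real schema.

```python
# Hypothetical, simplified model-card record; field names are assumptions,
# not OpenAI's published schema.
model_card = {
    "model": "example-model-v1",
    "intended_use": "general-purpose assistant",
    "evaluations": {
        "mmlu": {"metric": "accuracy", "value": 0.71},
        "jailbreak_benchmark": {"metric": "attack_success_rate", "value": 0.04},
    },
    "certifications": ["ISO/IEC 42001"],
    "last_updated": "2025-01-01",
}
```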
Argument 3
MMLU and Indian dialect tests illustrate language evaluation efforts (Esther Tetruashvily)
EXPLANATION
Esther notes that OpenAI evaluates multilingual capabilities using the MMLU benchmark and specific tests covering Indian dialects, demonstrating an active approach to language bias assessment. She calls for broader participation to improve these evaluations.
EVIDENCE
She states “we do a series of evaluations like MMLU for determining how well our models perform on a variety of languages” and that they also test “a specific test in QA that has a variety of dialects within India” [441-444].
MAJOR DISCUSSION POINT
Language Bias and Technical Specifics
AGREED WITH
Etienne Chaponniere, Audience
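To ground what a per-language evaluation of this kind involves, here is a minimal sketch of a multiple-choice scoring loop grouped by language, in the spirit of MMLU-style multilingual testing. The dataset fields and model interface are assumptions for illustration and do not reflect OpenAI's actual evaluation harness.

```python
# Illustrative per-language multiple-choice scoring loop (MMLU-style);
# field names and the model interface are assumptions.
from collections import defaultdict
from typing import Callable


def evaluate_by_language(
    items: list[dict],                       # {"language", "question", "choices", "answer_index"}
    model: Callable[[str, list[str]], int],  # returns the index of the chosen option
) -> dict[str, float]:
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for item in items:
        lang = item["language"]
        prediction = model(item["question"], item["choices"])
        total[lang] += 1
        correct[lang] += int(prediction == item["answer_index"])
    # Reporting accuracy per language (or dialect) makes gaps between
    # high- and low-resource languages visible rather than hidden in one average.
    return {lang: correct[lang] / total[lang] for lang in total}
```

Breaking results out per language, rather than averaging across all items, is what surfaces the dialect-level gaps the panel discussed.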
Lee Wan Sie
4 arguments, 171 words per minute, 917 words, 320 seconds
Argument 1
Global norms define “good” for AI (Lee Wan Sie)
EXPLANATION
Lee describes standards as a way to set global norms that define what “good” looks like in AI governance, aligning expectations across jurisdictions. She emphasizes that these norms are technical, not merely checklist items.
EVIDENCE
She says standards mean “setting norms” and that this “means alignment globally on what good looks like” and that it involves “common methodologies and processes” [28-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S27 argues that international standards provide globally coherent metrics that define what “good” looks like in AI, echoing Lee’s statement about global norms.
MAJOR DISCUSSION POINT
Definition and Purpose of AI Standards
Argument 2
Standards useful without regulation; certification as market signal (Lee Wan Sie)
EXPLANATION
Lee argues that even in the absence of formal regulation, standards serve a valuable role by providing a market signal of quality and compliance. Certification, such as ISO 42001, allows organizations to differentiate themselves and demonstrate that they meet a recognized level of risk mitigation.
EVIDENCE
She notes that “even when there’s no regulations, I think the standards still are useful” and cites OpenAI’s ISO 42001 certification as an example of a market differentiator, stating that companies can “demonstrate that I have actually implemented something that’s good enough” [215-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 notes that standards give companies concrete tools to demonstrate compliance and act as market differentiators, supporting Lee’s view of certification as a market signal.
MAJOR DISCUSSION POINT
Standards vs Regulation/Policy
AGREED WITH
Chris Meserole
DISAGREED WITH
Chris Meserole, Joslyn Barnhart
Argument 3
Interconnected standards bodies require faster coordination (Lee Wan Sie)
EXPLANATION
Lee points out that many standards organizations (ISO, ML Commons, IEEE) are interlinked, and that faster coordination among them is needed to produce timely global standards. She mentions ongoing work to accelerate testing and benchmarking within the ISO process.
EVIDENCE
She lists the interconnected bodies – ISO, ML Commons, IEEE – and says “we hope that in the next one year that can be done and sorted and accepted within the ISO process” while acknowledging the typical slowness of standard development [206-212][376-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 calls for faster movement on testing and benchmarking methodologies, and S25 emphasizes the need for accelerated coordination among ISO, ML Commons, IEEE, matching Lee’s call for quicker coordination.
MAJOR DISCUSSION POINT
Inclusivity and Global Cooperation
AGREED WITH
Etienne Chaponniere, Rebecca Weiss, Chris Meserole
Argument 4
Need faster standard definition; market or regulator pressure drives adoption (Lee Wan Sie)
EXPLANATION
Lee stresses that both market demand and regulatory expectations can accelerate the creation and adoption of standards. She expresses optimism that within a year progress can be made on testing methodologies and that momentum will increase.
EVIDENCE
She says “there will be some momentum, either from the market or from regulations, to move standards” and that they hope to complete work on testing and benchmarking within a year and get it accepted in ISO [376-382].
MAJOR DISCUSSION POINT
Implementation, Future‑Proofing, Outlook
AGREED WITH
Chris Meserole, Amanda Craig, Bhushan Sethi
Joslyn Barnhart
3 arguments, 188 words per minute, 459 words, 146 seconds
Argument 1
Regulation cites non‑existent standards, creating compliance need (Joslyn Barnhart)
EXPLANATION
Joslyn observes that current regulations often refer to standards that have not yet been developed, forcing companies to anticipate or create those standards to achieve compliance. This creates pressure for organizations like Google DeepMind to prioritize standard‑setting activities.
EVIDENCE
She notes that “regulation has gone ahead and jumped to… we have regulated and essentially made reference to standards that do not yet exist” and that this makes standard development an “utmost priority” for Google DeepMind [49-51].
MAJOR DISCUSSION POINT
Standards vs Regulation/Policy
Argument 2
Standards raise safety floor, prevent race to the bottom (Joslyn Barnhart)
EXPLANATION
Joslyn argues that establishing common safety standards lifts the minimum level of safety across the industry, discouraging a race to the bottom where firms might cut corners. Collective incentives drive firms to adopt higher safety baselines.
EVIDENCE
She says “the worst thing for adoption would be a safety incident” and that there is a “collective incentive as an industry to make sure that we raise the floor to avoid that” [145-147].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 notes that standards raise the baseline of safety across the industry and help avoid a race to the bottom, supporting Barnhart’s argument.
MAJOR DISCUSSION POINT
Trust and Consumer Confidence
AGREED WITH
Bhushan Sethi, Amanda Craig, Esther Tetruashvily, Kshitij Bathla, Lee Wan Sie, Chris Meserole
Argument 3
Performance of standards as evidence of conformity (Joslyn Barnhart)
EXPLANATION
In response to audience concerns, Joslyn explains that when standards are referenced in regulation, they become a minimum bar that regulators can accept as evidence of conformity, ensuring that standards are not merely abstract but have practical regulatory weight.
EVIDENCE
She states that “if we make these things too high level… regulators are not going to look at those standards as evidence of conformity” and that standards create “interlocking pressure” from regulation [436-438].
MAJOR DISCUSSION POINT
Industry‑driven standards risk performativity; auditability challenge
AGREED WITH
Rebecca Weiss, Bhushan Sethi, Chris Meserole
Chris Meserole
3 arguments, 204 words per minute, 1311 words, 385 seconds
Argument 1
Standards give legitimacy and fill policy gaps (Chris Meserole)
EXPLANATION
Chris emphasizes that formal standard‑setting bodies provide legitimacy, openness, and credibility that pure industry or government efforts lack, thereby bridging policy gaps and supporting collective action on AI risk.
EVIDENCE
He notes that “standard-setting bodies are open” and that they bring “legitimacy and credibility” which are missing when standards are set only by industry or government [112-114].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S26 highlights that standard‑setting bodies bring legitimacy, openness, and credibility, directly supporting Meserole’s claim.
MAJOR DISCUSSION POINT
Standards vs Regulation/Policy
AGREED WITH
Etienne Chaponniere, Rebecca Weiss, Lee Wan Sie
Argument 2
Process standards need scientific benchmarks for risk evaluation (Chris Meserole)
EXPLANATION
Chris distinguishes high‑level process standards (identifying, evaluating, mitigating risks) from the scientific benchmarks needed to actually measure those risks. He argues that both layers are essential for a coherent risk‑management framework.
EVIDENCE
He describes the process of “identifying risks, evaluating risks, putting in place mitigations” as needing standardization, and then adds that “once we have agreed on what the risk is… we need scientific benchmarks” [295-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 outlines the benchmark methodology (taxonomy, dataset, evaluator) as the scientific foundation needed to evaluate AI risks, aligning with Meserole’s argument.
MAJOR DISCUSSION POINT
Measurement and Benchmarking
AGREED WITH
Rebecca Weiss, Amanda Craig, Bhushan Sethi
Argument 3
Process standards future‑proof; evaluations must evolve with model capabilities (Chris Meserole)
EXPLANATION
Chris contends that while the overarching risk‑management process can remain stable over time, the specific evaluation methods must be updated as AI models become more capable. This ensures standards stay relevant without needing complete redesign.
EVIDENCE
He states that “the process is somewhat agnostic” and that “specific evals you run are probably going to have to be updated over time to account for the greater capabilities of models” [348-351].
MAJOR DISCUSSION POINT
Implementation, Future‑Proofing, Outlook
AGREED WITH
Amanda Craig, Lee Wan Sie, Bhushan Sethi
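The sketch below illustrates this separation under assumed names: a fixed identify/evaluate/mitigate loop into which individual evaluations can be registered and later swapped as model capabilities grow. It is a minimal sketch of the design idea, not any organisation's actual risk-management framework; the class, the evaluation signature, and the mitigation threshold are all assumptions.

```python
# Illustrative only: a fixed identify/evaluate/mitigate loop with swappable
# evaluations; names and the mitigation threshold are assumptions.
from typing import Any, Callable, Dict

Evaluation = Callable[[Any], float]  # model -> risk score in [0, 1]


class RiskManagementProcess:
    def __init__(self, mitigation_threshold: float = 0.5) -> None:
        self._evaluations: Dict[str, Evaluation] = {}
        self._threshold = mitigation_threshold

    def register(self, risk: str, evaluation: Evaluation) -> None:
        """Swap in a newer evaluation for a risk without touching the process."""
        self._evaluations[risk] = evaluation

    def run(self, model: Any) -> Dict[str, dict]:
        findings = {}
        for risk, evaluate in self._evaluations.items():  # identify risks
            score = evaluate(model)                        # evaluate them
            findings[risk] = {
                "score": score,
                "mitigation_required": score > self._threshold,  # flag for mitigation
            }
        return findings
```

The stable outer loop corresponds to the process-level standard; only the registered evaluations need updating as models gain capabilities.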
Bhushan Sethi
3 arguments, 110 words per minute, 1735 words, 943 seconds
Argument 1
Standards demystify AI, distinguish from regulation (Bhushan Sethi)
EXPLANATION
Bhushan highlights the need to clarify what AI standards are, separating them from broader regulatory or legislative frameworks. He points out the confusion that exists among stakeholders about these concepts.
EVIDENCE
He says “we need to demystify what we mean by standard setting” and later notes the “confusion between standards, regulation, legislation” [4][67-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 describes how standards translate principles into concrete practices and clarify accountability, helping demystify AI standards versus regulation.
MAJOR DISCUSSION POINT
Definition and Purpose of AI Standards
Argument 2
Trust and verification are central to AI adoption (Bhushan Sethi)
EXPLANATION
Bhushan stresses that for AI to be widely adopted, there must be trustworthy, verifiable processes for reporting, disclosure, and credibility, moving beyond superficial check‑boxes.
EVIDENCE
He asks “How do we report? How do we disclose? How do we make it credible?” emphasizing the need for non-subjective verification [92-95].
MAJOR DISCUSSION POINT
Trust and Consumer Confidence
AGREED WITH
Amanda Craig, Esther Tetruashvily, Kshitij Bathla, Lee Wan Sie, Joslyn Barnhart, Chris Meserole
Argument 3
Two‑year goal: certification as consensus on “good enough” (Bhushan Sethi)
EXPLANATION
Bhushan envisions that within the next two years, the industry will see certifications that embody a consensus on what constitutes “good enough” for AI systems, providing a clear benchmark for compliance and trust.
EVIDENCE
He states his hope to see “certification that represent more types of consensus” and that “definition of what is good enough deserves some form of certification” over the next two years [337-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 discusses the concept of “good enough” as a consensus defined by standards, directly relating to Bhushan’s two‑year certification goal.
MAJOR DISCUSSION POINT
Implementation, Future‑Proofing, Outlook
AGREED WITH
Rebecca Weiss, Chris Meserole, Joslyn Barnhart
Kshitij Bathla
3 arguments, 149 words per minute, 526 words, 210 seconds
Argument 1
Standards verify AI claims, enable consumer trust (Kshitij Bathla)
EXPLANATION
Kshitij describes standards as tools that build consumer trust by ensuring that AI products meet quality expectations, thereby facilitating industry adoption and consumer confidence.
EVIDENCE
He says standards “are the tools which enables consumers’ trust in whatever ecosystem for which they are developed” and also help industry ensure quality [62-63].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 states that standards provide tools for risk management and help build consumer trust, supporting Bathla’s claim.
MAJOR DISCUSSION POINT
Definition and Purpose of AI Standards
AGREED WITH
Bhushan Sethi, Amanda Craig, Esther Tetruashvily, Lee Wan Sie, Joslyn Barnhart, Chris Meserole
Argument 2
Standards enable consumer trust and industry quality (Kshitij Bathla)
EXPLANATION
He reiterates that standards serve as a mechanism for both consumers to trust AI systems and for industries to maintain consistent quality across products.
EVIDENCE
He repeats that standards “enable consumer trust” and “ensure the quality and the consumer trust” [62-63].
MAJOR DISCUSSION POINT
Trust and Consumer Confidence
Argument 3
Global standards must be adaptable to local contexts (Kshitij Bathla)
EXPLANATION
Kshitij explains that while standards should be globally consistent, they must also be flexible enough to address India‑specific risks and use‑cases, such as those highlighted in the Manav mission.
EVIDENCE
He references the “Manav mission” as human-centric, notes that India’s governance guidelines provide a framework, and stresses the need to adapt global standards to local conditions while also developing India-specific guidance [201-208][212-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S24 emphasizes inclusive participation and adaptation to diverse contexts, while S30 notes the importance of small‑firm and local considerations, aligning with Bathla’s point.
MAJOR DISCUSSION POINT
Inclusivity and Global Cooperation
Audience
3 arguments, 159 words per minute, 387 words, 145 seconds
Argument 1
Industry‑driven standards risk performativity; auditability challenge (Audience)
EXPLANATION
An audience member questions whether industry‑led standards might become superficial, serving industry interests rather than public needs, and raises concerns about how governments can audit such programs given skill gaps.
EVIDENCE
The audience asks “How do we know … you’re not just creating something that cheaply satisfies the industry … how does a government or external agency audit such a program, given the skill gap?” [398-405].
MAJOR DISCUSSION POINT
Standards vs Regulation/Policy
Argument 2
Balancing minimum viable consensus with diverse stakeholder demands (Audience)
EXPLANATION
Another audience participant asks whether standards should aim for a minimal viable consensus that includes the broadest set of stakeholders, or whether they should attempt to satisfy all stakeholder demands, even when they conflict.
EVIDENCE
The audience asks “do you imagine that we’re talking about minimum viable consensus with the broadest number of stakeholders, or is there a path to address issues that some stakeholders see as absolutely necessary and others don’t?” [428-429].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S24 stresses the importance of inclusive, multistakeholder participation in standards development, providing context for the discussion on minimum viable consensus.
MAJOR DISCUSSION POINT
Inclusivity and Global Cooperation
Argument 3
Audience query on handling multilingual bias in AI development (Audience)
EXPLANATION
A student from India raises a question about how to address language bias in AI models given India’s linguistic diversity, seeking guidance on building guardrails for multilingual development.
EVIDENCE
The audience member states “there are 22 official languages… how do you go about tackling language bias and building guardrails…” [418-420].
MAJOR DISCUSSION POINT
Language Bias and Technical Specifics
AGREED WITH
Esther Tetruashvily, Etienne Chaponniere
Agreements
Agreement Points
Standards are essential to build trust, credibility and consumer confidence in AI systems
Speakers: Bhushan Sethi, Amanda Craig, Esther Tetruashvily, Kshitij Bathla, Lee Wan Sie, Joslyn Barnhart, Chris Meserole
Trust and verification are central to AI adoption (Bhushan Sethi)
Internal responsible AI standard aligns stakeholders; external standards provide common language (Amanda Craig)
Standards translate risk management into trust language, ISO certification (Esther Tetruashvily)
Standards verify AI claims, enable consumer trust (Kshitij Bathla)
Standards useful without regulation; certification as market signal (Lee Wan Sie)
Standards raise safety floor, prevent race to the bottom (Joslyn Barnhart)
Standards give legitimacy and fill policy gaps (Chris Meserole)
Multiple panelists emphasized that standards provide a common language, certification and measurable assurances that help consumers and enterprises trust AI products, even serving as market differentiators and raising the overall safety floor [92-95][42-46][56-59][62-63][215-224][145-147][112-114].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with the strategic framing of open AI standards as essential for trust and market confidence highlighted in the U.S. AI Standards discussion, which draws parallels to historic standardisation in the internet and automotive sectors [S53], and reflects the ITU’s emphasis on building a trustworthy AI ecosystem through extensive standard publication [S54].
Defining “good enough” through consensus‑based standards and certification
Speakers: Rebecca Weiss, Bhushan Sethi, Chris Meserole, Joslyn Barnhart
Benchmarking methodology as core of standards (Rebecca Weiss)
Two‑year goal: certification as consensus on “good enough” (Bhushan Sethi)
Standards give legitimacy and fill policy gaps (Chris Meserole)
Performance of standards as evidence of conformity (Joslyn Barnhart)
Speakers agreed that standards must embody a consensus on what is “good enough”, with benchmarking defining that threshold and certification signalling it to the market and regulators [97-102][337-340][108-110][112-114][436-438].
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus-based approaches to defining adequacy are echoed in the low-disagreement findings of the Global AI Standards panel, where the balance between industry leadership and broader stakeholder input was stressed [S43], and in the high-level consensus on methodology for AI governance [S44].
Open, inclusive, multistakeholder governance is needed for AI standards to be accessible to small firms
Speakers: Etienne Chaponniere, Rebecca Weiss, Lee Wan Sie, Chris Meserole
Open governance models ensure small firms can comply (Etienne Chaponniere)
Benchmarking methodology … need more stakeholders (Rebecca Weiss)
Interconnected standards bodies require faster coordination (Lee Wan Sie)
Standards give legitimacy and fill policy gaps (Chris Meserole)
Panelists highlighted that standards must be developed through open processes that allow participation from diverse stakeholders, ensuring smaller companies can adopt them without prohibitive costs [179-182][99-102][206-212][376-382][112-114].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder governance is repeatedly underscored as critical for inclusive AI standards, from the IGF’s call for diverse participation across sectors [S48] to the interdisciplinary coordination involving UNESCO, OECD and others [S47], and the emphasis on contextual engagement to empower smaller actors [S56].
Benchmarking methodology and clear measurement taxonomy are foundational for effective AI standards
Speakers: Rebecca Weiss, Chris Meserole, Amanda Craig, Bhushan Sethi
Benchmarking methodology as core of standards (Rebecca Weiss)
Process standards need scientific benchmarks for risk evaluation (Chris Meserole)
Standards needed to assess progress, uncertainty levels across sectors (Amanda Craig)
Clarity of taxonomy, measurement needed (Bhushan Sethi)
All agreed that a rigorous benchmark (comprising a taxonomy, dataset, and evaluator), paired with a clear methodology, is essential to quantify AI risk and progress [252-261][295-298][274-277][262-264].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for robust benchmarking and taxonomy is highlighted in discussions about coordinating frontier AI risk-management frameworks, where clear evaluation methods are deemed essential for effective standardisation [S49] and for aligning global AI safety coordination efforts [S50].
Standards provide value even in the absence of formal regulation
Speakers: Lee Wan Sie, Chris Meserole
Standards useful without regulation; certification as market signal (Lee Wan Sie)
Regulators offload to standards (Chris Meserole)
Both speakers argued that standards serve as market signals and can be leveraged by regulators to define compliance expectations, even when explicit regulations are lacking [215-224][194-196].
POLICY CONTEXT (KNOWLEDGE BASE)
Voluntary standards are recognised for delivering economic and innovation benefits without regulatory mandates, as noted in analyses of digital-economy standards [S52] and arguments for market-oriented regulatory approaches that still rely on standards for trust [S55].
Future‑proofing AI standards through modular, interoperable designs and evolving evaluation methods
Speakers: Chris Meserole, Amanda Craig, Lee Wan Sie, Bhushan Sethi
Process standards future‑proof; evaluations must evolve with model capabilities (Chris Meserole)
Interoperable, modular standards avoid reinventing the wheel (Amanda Craig)
Need faster standard definition; market or regulator pressure drives adoption (Lee Wan Sie)
Two‑year outlook, certification etc. (Bhushan Sethi)
Panelists concurred that while high-level processes can remain stable, specific benchmarks must be updated as AI models advance, and modular standards can accelerate adoption across sectors [348-351][388-392][376-382][336-342].
POLICY CONTEXT (KNOWLEDGE BASE)
Future-proofing is advocated in calls for faster, modular standard development to keep pace with frontier AI, emphasizing interoperable designs and iterative evaluation [S49], and reflected in the broader consensus on adaptable governance mechanisms [S45].
Addressing language bias requires multilingual evaluation efforts and community involvement
Speakers: Esther Tetruashvily, Etienne Chaponniere, Audience
MMLU and Indian dialect tests illustrate language evaluation efforts (Esther Tetruashvily)
No silver bullet; multiple language safety tests required (Etienne Chaponniere)
Audience query on handling multilingual bias in AI development (Audience)
All three highlighted the challenge of multilingual bias, noting existing tests (MMLU, Indian dialects) and the need for multiple language-specific safety tests developed collaboratively [441-444][451-460][418-420].
Similar Viewpoints
Both see standards as a tool for regulators and markets to define and meet AI risk expectations even when formal regulations are not yet in place [215-224][194-196].
Speakers: Lee Wan Sie, Chris Meserole
Standards useful without regulation; certification as market signal (Lee Wan Sie)
Regulators offload to standards (Chris Meserole)
Both stress the need for open, coordinated standards bodies that enable participation from smaller companies and accelerate standard development [179-182][376-382].
Speakers: Etienne Chaponniere, Lee Wan Sie
Open governance models ensure small firms can comply (Etienne Chaponniere)
Interconnected standards bodies require faster coordination (Lee Wan Sie)
Both argue that scientific benchmarking is a necessary component of AI standards to enable reliable risk assessment [252-261][295-298].
Speakers: Rebecca Weiss, Chris Meserole
Benchmarking methodology as core of standards (Rebecca Weiss)
Process standards need scientific benchmarks for risk evaluation (Chris Meserole)
Both view standards as a bridge translating internal risk practices into a common external language that builds trust with customers and regulators [42-46][56-59].
Speakers: Amanda Craig, Esther Tetruashvily
Internal responsible AI standard aligns stakeholders; external standards provide common language (Amanda Craig)
Standards translate risk management into trust language, ISO certification (Esther Tetruashvily)
Unexpected Consensus
Broad agreement that standards are needed and valuable even though global AI regulation is still fragmented
Speakers: Lee Wan Sie, Chris Meserole, Rebecca Weiss, Bhushan Sethi
Standards useful without regulation; certification as market signal (Lee Wan Sie)
Regulators offload to standards (Chris Meserole)
Benchmarking methodology as core of standards (Rebecca Weiss)
Trust and verification are central to AI adoption (Bhushan Sethi)
Despite the lack of coordinated global AI legislation, industry leaders and standard-setting advocates converged on the necessity of standards to fill policy gaps, provide market signals, and establish trust, which was not an obvious expectation at the start of the discussion [215-224][194-196][13-14][92-95].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources report high consensus on the necessity of standards despite fragmented regulation, including the Global AI Policy Framework’s emphasis on common principles [S45] and the IGF’s observation of strong alignment on AI governance direction [S44].
Overall Assessment

The panel displayed strong consensus that AI standards are critical for building trust, defining “good enough”, ensuring inclusivity, and providing measurable benchmarks, with agreement across industry, policy, and technical perspectives. There is also shared recognition of the need for open, multistakeholder processes and future‑proof, modular designs.

High consensus across most thematic areas, indicating a unified stance that standards will be a cornerstone for responsible AI deployment and that coordinated, inclusive efforts are essential for their success.

Differences
Different Viewpoints
Whether AI standards should be primarily driven by regulation or can be valuable and market‑driven in the absence of regulation
Speakers: Lee Wan Sie, Chris Meserole, Joslyn Barnhart
Standards useful without regulation; certification as market signal (Lee Wan Sie)
Standards fill policy gaps; regulators may offload requirements to standards (Chris Meserole)
Standards become evidence of conformity when referenced in regulation (Joslyn Barnhart)
Lee argues that standards remain useful even when no formal regulations exist, serving as a market differentiator through certification [215-224]. Chris contends that standards are needed to fill policy gaps, with regulators often delegating risk-management requirements to the standards process [106-110]. Joslyn points out that standards gain regulatory weight only when they are explicitly referenced in law, serving as a minimum bar for compliance [436-438]. These positions reflect a tension between viewing standards as independent market tools versus as extensions of regulatory frameworks.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between regulation-led and market-driven standards is documented in debates over a market-oriented regulatory model [S55] and in moderate disagreement about divergent safety approaches that could lead to fragmented policy responses [S50].
Desired speed and coordination of AI standard development
Speakers: Lee Wan Sie, Etienne Chaponniere, Chris Meserole
Need faster coordination among standards bodies and hope to finalize testing work within a year (Lee Wan Sie)
Acknowledges that standard development is slow and takes time (Etienne Chaponniere)
Emphasises that standards must be open and credible, which can lengthen the process (Chris Meserole)
Lee pushes for accelerated progress, stating a goal to complete testing and benchmarking work within a year and be accepted by ISO [376-382]. Etienne notes the reality that standard-setting “takes a while” and cannot be rushed [383-384]. Chris adds that openness and legitimacy of standard-setting bodies are essential, implying that thorough, inclusive processes may limit speed [112-114]. The speakers thus differ on how quickly standards should be produced versus the practical constraints of inclusive, credible development.
POLICY CONTEXT (KNOWLEDGE BASE)
Disagreement over pace is highlighted in the Global AI Standards panel, where participants noted differing views on how quickly standards should be produced [S43], and in calls for accelerated formal standards to match rapid AI advances [S49].
Risk of industry‑driven standards being performative versus their legitimacy and auditability
Speakers: Audience, Chris Meserole, Lee Wan Sie
Industry‑driven standards may be superficial; governments lack capacity to audit (Audience)
Standard‑setting bodies provide legitimacy and openness missing from pure industry or government efforts (Chris Meserole)
Certification offers a market signal of quality even without regulation (Lee Wan Sie)
An audience member questions whether standards created mainly by industry will be merely performative and how governments can audit them given skill gaps [398-405]. Chris counters that formal standard-setting bodies bring openness, legitimacy, and credibility that pure industry or government actions lack [112-114]. Lee reinforces that certification can serve as an independent market signal of compliance, even absent regulation [215-224]. This reflects a disagreement between external skepticism about performativity and internal confidence in the legitimacy of standards processes.
POLICY CONTEXT (KNOWLEDGE BASE)
Concerns about the legitimacy of industry-led standards appear in analyses of politicised standard-setting that may compromise technical merit [S51] and in observations of challenges around industry engagement and auditability [S52].
Unexpected Differences
Audience skepticism about the performative nature of industry‑driven standards versus panel confidence in their legitimacy
Speakers: Audience, Chris Meserole, Lee Wan Sie
Industry‑driven standards risk being superficial; governments lack audit capacity (Audience)
Standard‑setting bodies provide legitimacy and openness missing from pure industry or government (Chris Meserole)
Certification offers a market signal of quality even without regulation (Lee Wan Sie)
The audience’s concern that standards may be crafted to satisfy industry interests rather than public needs, and that governments lack the technical capacity to audit them, was not directly addressed by the panelists, who instead emphasized the inherent legitimacy of standard‑setting bodies and the value of certification as an independent quality signal. This gap between external critique and internal assurance was not anticipated in the earlier discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Direct audience criticism of a panel’s lack of technical practitioner representation, questioning the legitimacy of discussed standards, was recorded at the IGF workshop [S57].
Overall Assessment

The panel largely converged on the importance of AI standards for trust, risk management, and global cooperation, but diverged on the relationship between standards and regulation, the pace of standard development, and the risk of performative, industry‑driven processes. These disagreements highlight the need for clearer governance frameworks, faster yet inclusive standard‑setting mechanisms, and stronger auditability to ensure standards serve public interests.

Moderate – while there is broad consensus on goals, the differing views on implementation pathways and regulatory interplay suggest potential friction that could affect the timely and effective adoption of AI standards.

Partial Agreements
All speakers agree that building trust and ensuring verifiable, reliable AI systems is essential, but they differ on the mechanisms: Bhushan emphasizes reporting and disclosure frameworks; Amanda highlights internal corporate standards and external common language; Esther points to ISO certification and safety hubs; Joslyn focuses on industry‑wide safety baselines; Chris stresses high‑level process standards coupled with scientific benchmarks. This shows consensus on the goal of trust, with varied pathways to achieve it.
Speakers: Bhushan Sethi, Amanda Craig, Esther Tetruashvily, Joslyn Barnhart, Chris Meserole
Trust and verification are central to AI adoption (Bhushan Sethi)
Internal responsible AI standard aligns stakeholders; external standards provide common language (Amanda Craig)
Standards translate risk management into trust language; ISO certification signals reliability (Esther Tetruashvily)
Standards raise safety floor, prevent race to the bottom (Joslyn Barnhart)
Process standards needed for risk identification, evaluation, mitigation (Chris Meserole)
The speakers share the objective of creating inclusive, widely applicable AI standards, but differ on emphasis: Rebecca stresses a technical benchmark methodology and the need for diverse stakeholder input [96-101]; Etienne highlights open governance and participation of smaller companies as essential for accessibility [179-182]; Lee focuses on establishing global norms and market‑driven certification as a signal of quality [215-224]. Thus, they agree on inclusivity but propose different primary levers.
Speakers: Rebecca Weiss, Etienne Chaponniere, Lee Wan Sie
Benchmarking methodology as core of standards; need broad stakeholder consensus (Rebecca Weiss)
Open governance models ensure small firms can comply (Etienne Chaponniere)
Global norms define “good” for AI; standards useful even without regulation (Lee Wan Sie)
Takeaways
Key takeaways
AI standards are essential to translate high‑level norms into concrete, verifiable practices and to build consumer and enterprise trust.
Benchmarking methodology—taxonomy, dataset, and evaluator—is the technical core of AI standards and enables measurement of uncertainty and risk.
Standards differ from regulation but can complement it; they provide legitimacy, a common language, and a market signal even where no formal rules exist.
Global cooperation and open‑governance models are needed so that standards are inclusive, scalable, and usable by both large and small firms.
Process‑oriented standards (risk identification, evaluation, mitigation) are more future‑proof, while specific technical benchmarks must evolve with model capabilities.
Certification (e.g., ISO 42001) serves as evidence of “good enough” compliance and can differentiate trustworthy products.
Addressing language bias requires dedicated multilingual test suites and community participation; no single solution will cover all languages.
A modular, interoperable standards ecosystem is needed to avoid reinventing the wheel as AI applications diversify across sectors.
Resolutions and action items
Panelists agreed to deepen collaboration with ML Commons to develop benchmark methodologies and reference implementations.
Qualcomm (Etienne) will advocate for open, accessible standards that enable smaller companies to achieve compliance without building their own frameworks.
OpenAI (Esther) will continue publishing its safety hub and model cards and will pursue ISO 42001 certification as a benchmark for trust.
Microsoft (Amanda) will work with industry and civil‑society stakeholders to define measurable progress indicators for AI maturity.
Singapore (Lee) will push forward ISO‑level testing and benchmarking work, aiming for a draft within the next year.
India (Kshitij) will align national standards with global ISO/IEC SC42 outputs while identifying India‑specific use‑case guidance.
All participants committed to advancing modular, interoperable standards that can be updated as model capabilities evolve.
Unresolved issues
How governments and external auditors can effectively verify industry‑driven standards given the technical skill gap.
The extent to which standards should be prescriptive versus allowing flexibility for diverse stakeholder needs.
Mechanisms for achieving consensus on contentious risk categories where some jurisdictions consider certain risks essential and others do not.
Concrete timelines and processes for rapidly developing and ratifying new standards to keep pace with fast‑moving AI models.
Specific approaches for comprehensive multilingual evaluation and mitigation of language bias beyond existing test suites.
Suggested compromises
Adopt standards as a minimum floor (baseline) while allowing higher‑level or sector‑specific standards to be layered on top.
Use a modular standards architecture so that common core components are shared, and specialized extensions can address local or domain‑specific requirements.
Combine market‑driven adoption incentives with regulatory expectations to ensure standards attain sufficient rigor without being overly burdensome.
Maintain open governance and inclusive participation to balance the interests of large tech firms with those of smaller companies and civil‑society groups.
Thought Provoking Comments
Regulation has jumped ahead and referenced standards that do not yet exist. For places like Google DeepMind who have not invested heavily in the standard space in the past, this is now of utmost priority because we actually need this to assist with implementation and compliance.
Highlights a critical mismatch where policymakers are demanding compliance to standards that are still under development, creating urgency for the industry to engage in standard‑setting.
Shifted the conversation from abstract benefits of standards to a concrete pressure point, prompting other panelists to discuss how their organisations are accelerating standard‑development and underscoring the need for rapid, collaborative action.
Speaker: Joslyn Barnhart (Google DeepMind)
A big part of what standards are for is to try and solve this collective action problem… having a formal standard‑setting body is open, so there’s legitimacy and credibility you don’t have if it’s just industry or just government.
Frames standards as a mechanism to align diverse actors and overcome the classic collective‑action dilemma, emphasizing openness and legitimacy as essential qualities.
Provided a theoretical foundation that other speakers (e.g., Rebecca, Etienne) referenced when discussing inclusivity and openness, steering the dialogue toward the governance structure of standard bodies rather than just technical details.
Speaker: Chris Meserole (Frontier Model Forum)
What is good enough? A standard represents a consensus about what is good enough. The problem is who contributes to that consensus – it shouldn’t be exclusively an industry perspective; you need broader stakeholder representation, and there’s both a scientific element (statistical guarantees) and a political element.
Introduces the nuanced question of “good enough” and points out the dual scientific‑political nature of standards, challenging the panel to think beyond technical metrics.
Prompted deeper discussion on inclusivity (Etienne’s point about smaller companies) and on the need for multi‑stakeholder processes, influencing later remarks about openness and the role of regulators.
Speaker: Rebecca Weiss (ML Commons)
In the telecom world you cannot ship a product unless you comply to a standard because you need it for interoperability. In AI standards we’re talking more about safety standards, and those typically trail the products. The products are out there, and then they’re going to comply to standards at some point when the standards are available.
Draws a clear contrast between mature, mandatory standards in telecom and the nascent, reactive nature of AI safety standards, exposing a timing gap that affects adoption.
Set the stage for later comments about the need for faster standard development (Lee’s regulatory timing, Amanda’s modular approach) and highlighted why AI standards are currently “behind” the technology curve.
Speaker: Etienne Chaponniere (Qualcomm)
If there are no regulations, standards are still useful – they can be a way for organisations to differentiate themselves, demonstrate that they have implemented something that is ‘good enough’, and provide certification assurance.
Counters the assumption that standards only matter when mandated, positioning them as market‑driven signals of trust and quality.
Re‑oriented the discussion toward the commercial value of standards, leading to Amanda’s and Etienne’s remarks about standards as a competitive advantage and the need for open, accessible frameworks.
Speaker: Lee Wan Sie (Singapore Government)
Future‑proofing standards is best done at the process‑level – a good risk‑identification and mitigation process can stay relevant even as model capabilities evolve; the specific evaluations will need updating, but the overarching framework can endure.
Provides a strategic lens for designing standards that remain relevant amid rapid AI advances, separating stable process elements from mutable technical tests.
Guided the conversation toward long‑term planning, influencing Amanda’s modular‑interoperable vision and reinforcing the importance of separating process standards from benchmark specifics.
Speaker: Chris Meserole (Frontier Model Forum)
We need a system of interoperable, modular standards so we don’t reinvent the wheel for every new use‑case or sector; the standards ecosystem should have synergy and evolve together.
Advocates for a cohesive, reusable standards architecture that balances evolution with efficiency, addressing the earlier concern about speed and duplication.
Synthesised earlier points about speed, openness, and modularity, and set a concrete direction for future work, prompting agreement from other panelists about avoiding siloed efforts.
Speaker: Amanda Craig (Microsoft)
If we make standards too high‑level or lowest‑common‑denominator, regulators won’t accept them as evidence of conformity. There needs to be a minimum bar of quality, otherwise standards become performative.
Highlights the risk of “performative” standards and stresses the need for substantive, regulator‑acceptable criteria, adding a pragmatic constraint to the earlier optimism.
Re‑focused the dialogue on the balance between accessibility and rigor, influencing later remarks about certification, credibility, and the role of regulators in enforcing standards.
Speaker: Joslyn Barnhart (Google DeepMind)
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that moved it from a generic endorsement of standards to a concrete, problem‑oriented dialogue. Joslyn’s observation about regulators demanding non‑existent standards created urgency; Chris’s framing of standards as a collective‑action solution gave the conversation a governance backbone; Rebecca’s ‘good enough’ question forced the panel to confront the scientific‑political trade‑offs and stakeholder inclusion; Etienne’s telecom analogy exposed the timing mismatch between product rollout and safety standards; Lee’s point about market‑driven differentiation showed standards’ value even without regulation; Chris’s future‑proofing insight introduced a strategic design principle; Amanda’s call for modular, interoperable standards offered a practical roadmap; and Joslyn’s warning against performative standards reminded everyone of the need for rigor. Together, these comments shifted the tone from abstract optimism to a nuanced, action‑oriented plan, shaping the panel’s consensus around speed, openness, legitimacy, and the balance between accessibility and regulatory credibility.

Follow-up Questions
What are the top three priority areas where standards are needed today (e.g., testing methodologies, transparency/disclosure formats, incident reporting and monitoring)?
Identifies key domains where standardisation can create alignment and trust across AI deployments.
Speaker: Lee Wan Sie
How should AI organisations report, disclose, and make compliance credible without it becoming a subjective tick‑box exercise?
Seeks concrete guidance on credible, verifiable reporting mechanisms to ensure trust and avoid perfunctory compliance.
Speaker: Bhushan Sethi
What additional considerations should be brought in from a standard‑setting perspective before the industry view is formed?
Requests input on foundational standard‑setting issues that may be overlooked by industry practitioners.
Speaker: Bhushan Sethi (to Chris Meserole and Rebecca Weiss)
Is there a disconnect between the strong industry consensus on AI standards and the lack of coordinated global regulations, and how should the audience interpret this gap?
Addresses the tension between voluntary standard adoption and the absence of binding regulatory frameworks.
Speaker: Bhushan Sethi (wild‑card question)
How can governments or external agencies effectively audit industry‑driven AI assurance programs given the technical skill gap?
Raises concern about public oversight and the ability of regulators to verify compliance with industry‑created standards.
Speaker: Audience member (unnamed)
What approaches can be used to detect and mitigate language bias in AI models, especially for multilingual contexts like India’s 22 official languages, and how can guardrails be built for small‑scale developers?
Highlights the need for multilingual fairness, evaluation data, and accessible tooling for developers.
Speaker: Audience member (computer‑science student)
Should AI standards aim for a minimum‑viable consensus that includes the broadest stakeholder base, or can they also address issues that some stakeholders consider essential but others deem unnecessary?
Explores the scope and inclusivity of standards‑setting processes versus targeted, high‑bar requirements.
Speaker: Audience member (Polonetsky)
What is needed to develop a clear, verifiable taxonomy, reference datasets, and evaluator systems for AI benchmarking, and how can these be standardized across industries?
Identifies foundational research required to create scalable, trustworthy benchmarking infrastructure.
Speaker: Rebecca Weiss
How can the speed of standards development be increased within bodies like ISO to keep pace with rapid AI advances?
Calls for process improvements to reduce lag between technology emergence and standard availability.
Speaker: Lee Wan Sie
How can standards be designed to be interoperable and modular across different sectors, use‑cases, and deployment scenarios, avoiding reinventing the wheel?
Emphasises the need for a cohesive, reusable standards ecosystem that adapts to varied applications.
Speaker: Amanda Craig
What mechanisms can future‑proof AI standards so they remain relevant as model capabilities evolve?
Seeks strategies to ensure standards retain applicability despite rapid technological change.
Speaker: Chris Meserole
How can smaller companies be included in the standard‑setting process and benefit from open, accessible standards despite limited resources?
Addresses inclusivity and the need for open governance models that lower participation barriers.
Speaker: Etienne Chaponniere
How can global standards be adapted to local contexts (e.g., India‑specific risks and use‑cases) while maintaining overall consistency?
Highlights the challenge of balancing worldwide harmonisation with region‑specific requirements.
Speaker: Kshitij Bathla
How can a consensus on what constitutes ‘good enough’ be reached across sectors with differing risk tolerances and expectations?
Points to the difficulty of defining acceptable risk thresholds that satisfy diverse industries.
Speaker: Rebecca Weiss
What is the role and impact of certification schemes (e.g., ISO 42001) on market trust and adoption of AI systems?
Investigates how formal certification can signal compliance and build stakeholder confidence.
Speaker: Esther Tetruashvily

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Secure Finance Risk-Based AI Policy for the Banking Sector

Secure Finance Risk-Based AI Policy for the Banking Sector

Session at a glance – Summary, keypoints, and speakers overview

Summary

The summit opened with a focus on embedding AI governance within existing technology oversight rather than treating it as a separate domain [2]. Ajay Kumar Chaudhary argued that AI must be governed throughout its life-cycle, from design to deployment, and that this governance should be built into the system rather than added later [7-8][24-28]. He outlined four pillars (proportionality, fairness, explainability and accountability) and advocated a risk-based approach that addresses model integrity, concentration risk, data stewardship and cybersecurity [50-54][66-78]. Chaudhary also stressed inclusion, data sovereignty and supply-chain resilience, proposing concrete checkpoints across the AI pipeline and positioning trust as the strategic outcome of embedded governance [85-92][94-106][112-115][118-124][129-131].


Economic advisor Sanjeev Sanyal cautioned that AI, like past general-purpose technologies, does not guarantee first-mover advantage and that risk-based regulation may be either too restrictive or too lax, urging instead ex-ante accountability and compartmentalized “firewalls” for AI systems [149-162][170-182]. He warned against treating AI as a monolithic “internet of everything,” recommending bounded problem scopes, auditability and clear liability for algorithm creators, and later highlighted emerging legal questions around data ownership and copyright in AI-generated outputs [238-250][317-324]. Praveen Kamat described the GIFT City IFSC as a “clean-slate” jurisdiction where a sandbox environment can experiment with AI governance, noting that such pilots must respect cross-regulatory legal constraints while allowing risk-capped innovation [192-201][263-291]. Murlidhar Manchala added that regulators should grant supervisory relief to firms that implement robust guardrails, turning AI systems into “glass boxes” with transparent incident reporting rather than opaque black boxes [204-206][296-301].


Vikram Kishore emphasized that generative AI lowers attack barriers but does not change the fundamentals of cybersecurity, urging organizations to adopt multi-factor authentication and standards such as ISO/NIST, and to use AI for faster threat detection and automated reporting [215-224][226-233]. The panel collectively agreed that over-regulation can stifle innovation while under-regulation leaves systemic risk unchecked, proposing coordinated sandbox frameworks across RBI, SEBI, IRDAI and IFSC to balance experimentation with oversight [263-291][392-397]. Throughout, participants highlighted the need for continuous monitoring, explainability audits and “skin-in-the-game” accountability to ensure AI enhances financial inclusion without reinforcing bias or concentration [85-92][238-250][311-313].


The discussion concluded that AI’s emergent nature demands a skeptical yet proactive governance model that focuses on bounded applications, transparent oversight and resilient infrastructure to preserve trust in the financial system [326-344]. Overall, the summit underscored that embedding governance into AI from inception is essential for India to harness AI’s benefits while safeguarding stability, inclusion and sovereignty [129-131][317-324].


Keypoints


Major discussion points


Embedding AI governance throughout the AI life-cycle – The keynote stresses that AI must not be an after-thought compliance layer but built-in from design to deployment and monitoring. Ajay Kumar Chaudhary defines “embedded governance” as integrating accountability, transparency and risk-management into every stage of the AI life-cycle and lists its four pillars: proportionality, fairness, explainability and accountability [45-48][50-54].


India’s unique digital foundation and the need for sovereign AI infrastructure – India’s population-scale digital public infrastructure (UPI, digital identity, etc.) provides a platform for AI to become a core financial utility. The speaker warns that AI is now a systemic component that must be governed like any critical utility and highlights the five-layer AI stack (chips, cloud, data, foundation models, applications) and the strategic risk of dependence on foreign chip and model suppliers [14-21][44-45][99-106].


Different regulatory philosophies and the limits of risk-based approaches – Sanjeev Sanyal contrasts the European risk-based model, the Chinese state-led model and the US ex-post tort-law model, arguing that AI’s emergent nature makes precise risk-bucketing impossible and that any framework must be “agnostic” and focus on ex-ante accountability, compartmentalisation and “skin-in-the-game” [149-166][170-176].


Experimentation zones and sandboxing (GIFT City) as a way to balance innovation and oversight – Representatives from GIFT City explain that, as a newly created IFSC, it can act as a “lab” for AI governance because it starts with a clean-slate regulator, can run sandboxes for pilots, and must respect a gestation period before scaling [192-200][263-286].


Cyber-security challenges in the age of generative AI – The cloud-service perspective stresses that generative AI lowers attack barriers but does not fundamentally change security fundamentals; organisations must adopt multi-factor authentication, standards (ISO/NIST), active threat-hunting and AI-in-the-loop automation to stay resilient [215-224][225-232].


Overall purpose / goal of the discussion


The panel was convened to explore how India can embed robust, risk-based governance into AI systems that are becoming integral to the nation’s financial infrastructure, while leveraging its digital public-goods foundation to drive inclusion, economic sovereignty, and sustainable innovation. Speakers repeatedly linked governance to trust, resilience, and the broader summit theme of “people, planet and progress.”


Overall tone and its evolution


– The session opens with an optimistic, forward-looking tone, celebrating India’s digital achievements and the transformative potential of AI [14-21].


– As the conversation moves to governance, the tone becomes cautiously analytical, highlighting unknown risks, the need for embedded safeguards, and the shortcomings of existing risk-based models [45-48][149-166].


– When discussing labs, sandboxes, and cybersecurity, the tone shifts to pragmatic collaboration, offering concrete mechanisms and emphasizing partnership between regulators, industry, and technology providers [192-200][215-232].


– The closing remarks return to a hopeful, constructive tone, reaffirming confidence that with disciplined foresight India can align AI innovation with ethical responsibility and trust [126-132][317-324].


Overall, the discussion moves from enthusiasm about AI’s promise, through a sober assessment of governance challenges, to a collaborative roadmap for responsible implementation.


Speakers

Moderator – Session moderator who opened the panel and introduced the keynote speaker. [S9]


Ajay Kumar Chaudhary – Keynote speaker delivering the opening address on AI governance in finance. [S2]


Priyanka Jain – Panel moderator and discussion facilitator; associated with 5Money and experienced with RBI sandbox programmes. [S6]


Sanjeev Sanyal – Economic Advisor to the Prime Minister of India; described as a macro-thinker, historian and strategic geopolitical analyst. [S4][S5]


Praveen Kamat – Official from GIFT City International Financial Services Centre (IFSC); expertise in financial regulation, innovation and sandbox experimentation. [S3]


Murlidhar Manchala – RBI official involved in the AI framework and supervisory guidance; contributes to discussions on risk-based controls and safe-harbor regimes. [S8]


Vikram Kishore Bhattacharya – Cloud service-provider representative; specialist in cybersecurity, cloud infrastructure and the impact of generative AI on threat vectors. [S1]


Audience – General audience members participating in the Q&A, e.g., Aditya, founder of First Tile, and other attendees. [S12]


Additional speakers:


Aditya – Founder of First Tile (a customer-data platform); asked a question during the audience segment about sovereign data assets and AI stack utilization. (No external source citation available)


Full session report – Comprehensive analysis and detailed insights

Opening & moderator remarks


The session began with the moderator reminding participants that the summit’s overarching aim was to treat AI governance not as a separate silo but as an embedded layer within the existing technology-oversight framework that already regulates other digital tools [2].


Ajay Kumar Chaudhary – keynote


* Optimism tempered by caution & “Mano” proposal – Chaudhary opened with optimism about the four-day summit and warned that rapid AI scaling will bring both known and unknown risks that must be managed through embedded governance [4-11][13-21]. He cited the Prime Minister’s one-word summary “Mano” (humanity) and proposed that it could replace the term “responsible AI”, evolving into a “human AI” framing that captures moral, ethical, sovereign, inclusive and accountable dimensions [9-12][15].


* India’s digital public-infrastructure – He highlighted India’s population-scale public digital infrastructure such as UPI and other platforms, showing how interoperability, transparency and scale have reshaped financial participation [14-16].


* AI’s structural shift – AI is now being super-imposed on this foundation, integrating with payment systems, credit-risk platforms, supervisory frameworks and cybersecurity architectures that already operate at national scale [17-20]. This marks a structural shift: unlike earlier automation, AI introduces adaptive, learning systems that can dynamically influence outcomes [21-23].


* Core question – The question is no longer whether AI will transform finance (it already is) but whether governance can keep pace and be designed into the system from inception rather than added later as a compliance overlay [24-28][30-33].


* Quotes – He invoked Peter Drucker: governance in AI-enabled finance must be about “doing the right things at the right time” to preserve trust, resilience and inclusion [29-32]; and quoted Christine Lagarde: “Innovation and regulations are not adversity, they are partners in progress.” [34-36].


* Embedded governance pillars – Chaudhary defined embedded governance as integrating accountability, transparency and risk-management into every stage of the AI life-cycle – from conceptualisation and data acquisition to model development, deployment and continuous monitoring [45-52]. He distilled this into four pillars:


1. Proportionality – governance intensity should be risk-based [50-51];


2. Fairness & non-discrimination [52];


3. Explainability & transparency [53];


4. Accountability (clearly defined) [54-55].


These pillars must be embedded by design, not retro-fitted, because AI systems affecting credit access or financial behaviour cannot remain opaque black boxes [26-28][31-33].


* Risk-based governance & concrete benefits – He advocated a proportional, risk-based approach that treats AI as a systemic financial utility [45-52]. Key risk dimensions he highlighted were:


Model integrity – ongoing validation and stress-testing across extreme but plausible scenarios [66-70];


Operational concentration risk – the systemic danger of a few providers dominating AI infrastructure [71-75];


Data governance – ensuring data integrity, consent, purpose limitation and minimisation [75-78];


Cybersecurity – AI can amplify attack vectors (adversarial AI) and therefore requires anticipatory safeguards [78-81].


He illustrated the quantitative impact of AI-enabled detection, noting that in high-value payment environments (NPCI) fraud-loss reductions of 25-30 % are already being realised [66-70]. He also stressed that AI accelerates compliance and broadens access and inclusion by automating routine checks and expanding service reach [67-69]. Finally, he highlighted that regulators are leveraging advanced analytics to monitor systemic patterns, identify anomalies and strengthen early-warning mechanisms [70-73].


* Inclusion, bias mitigation & “glass-box” model – AI can expand financial inclusion by providing granular, dynamic risk assessments that reduce reliance on heavy collateral and static credit histories [82-84]. However, without intentional design, AI could perpetuate structural inequalities (e.g., gender-biased data distorting credit outcomes) [87-90]. Chaudhary called for representative training datasets, periodic impact audits, community-level feedback and transparent redress pathways that turn opaque systems into “glass-boxes” for customers [91-93][204-207].


* Five-layer AI stack & sovereign infrastructure – He described a five-layer stack: (1) specialised semiconductor chips, (2) cloud & data-centric infrastructure, (3) large data sets that fuel the system, (4) foundation models, and (5) application-level services [99-104]. Over-reliance on foreign chips (over 90 % controlled by a single firm) and a handful of cloud and model providers threatens economic sovereignty, financial stability and national security [104-106]. Chaudhary urged diversification through domestic innovation, international collaboration, consent-based data sharing and the promotion of home-grown AI entities [107-110].


* Operationalising embedded governance – He outlined concrete governance checkpoints across the AI pipeline: risk-based classification of systemic impact, independent review, auditable documentation, cross-functional governance committees, continuous monitoring with feedback loops, and consumer-centric safeguards such as transparent disclosures, clear appeal processes and human-in-the-loop interventions [112-115][124-131]. He framed trust as the strategic outcome of these measures, asserting that finance rests on confidence that systems are fair, stable and accountable [118-124][126-132].


Panelist perspectives


* Sanjeev Sanyal – Drew historical analogies, warning that first-mover advantage is not guaranteed for general-purpose technologies and that European-style risk-based regulation may become either over-restrictive or under-protective because AI’s emergent nature defies ex-ante risk-bucketing [149-166][238-250]. He advocated ex-ante accountability (“skin-in-the-game”), clear liability for algorithm creators, and compartmentalised “firewalls” to prevent systemic spill-over [170-182][242-250]. He also raised novel IP questions about ownership of AI-generated outputs, calling for a judicial framework [317-324].


* Praveen Kamat – Presented GIFT City IFSC as a “clean-slate” jurisdiction (established 2015, regulator 2020) offering regulatory “legroom” for sandbox pilots that cap risk while allowing iterative learning [192-200][263-286]. He highlighted an inter-operable sandbox linking RBI, SEBI, IRDAI and the IFSC, while noting legal constraints such as currency incompatibility (INR not permitted in the IFSC) that must be resolved [392-401][404-410].


* Murlidhar Manchala – Echoed the AI-mission report’s suggestion that firms implementing comprehensive guardrails (model inventories, bias testing, continuous monitoring) should receive supervisory relief (“safe-harbour”) [204-207][296-301]. He stressed senior-management accountability and that incident-reporting mechanisms should turn black-box systems into transparent “glass-boxes” [204-207][296-301].


* Vikram Kishore Bhattacharya – Speaking as a cloud-service provider, he acknowledged that generative AI lowers barriers for phishing, credential theft and malicious code, but maintained that core cybersecurity principles (MFA, strong passwords, regular patching) remain unchanged [215-224]. He urged the adoption of AI-in-the-loop tools for faster threat detection, automated scanning and real-time reporting, alongside skill-building and standards compliance (ISO, NIST, third-party audits) [225-233].


Agreement & disagreement matrix


Common ground – All speakers agreed that trust, inclusion and resilience are essential for AI-enabled finance and that embedded governance is preferable to retro-fitted compliance [45-52][170-182][204-207][215-224].


Points of disagreement


1. Risk-based regulation – Chaudhary champions a proportional, risk-based framework [50-54]; Sanyal argues that AI’s unknown risks make any ex-ante risk-bucket ineffective and potentially stifling [160-166][238-250].


2. Sandbox purpose – Kamat views the IFSC sandbox as a proactive experimental space for AI pilots [263-291]; Manchala sees the current sandbox as a remedial tool triggered by compliance breaches, though he supports expanding it to include monitoring and tooling [407-411].


3. AI as systemic infrastructure vs emergent technology – Chaudhary treats AI as a core financial utility subject to resilience standards [44]; Sanyal stresses AI’s emergent behaviours that resist traditional infrastructure regulation [242-250].


4. Cybersecurity impact – Bhattacharya maintains that AI does not fundamentally alter security fundamentals [215-224]; Chaudhary warns that AI amplifies cyber-risk, creating new adversarial threats that need anticipatory safeguards [78-80].


5. Purpose of the sandbox (expanded) – The panel differed on whether the sandbox should primarily enable innovation experimentation (Kamat) or serve compliance remediation with supervisory relief (Manchala).


Audience question & response


An audience member (Aditya, founder of First Tile) asked how India’s sovereign data assets could be leveraged for AI model development while respecting privacy and ownership [359-380]. Sanyal responded that India’s massive data pool is “new oil” and that rights to the data and the ability to process it (through domestic data centres and AI refineries) are essential for strategic autonomy, noting the recent tax holiday for data-centre investment as a policy lever [359-362][361].


Aditya also proposed a consent-backed API standard for data sharing and a regulatory seat for data processors. Kamat acknowledged the idea but highlighted legal incompatibilities between the IFSC’s foreign-currency regime and domestic regulations that must be resolved before a cross-jurisdictional sandbox can operate [392-401][404-410]. Manchala added that an inter-operable sandbox already exists for compliance issues, and a broader sandbox offering compute, data and tooling support is under consideration [407-411].


When asked to name an under-estimated risk, Manchala replied that risk itself is being underestimated, underscoring the need for robust governance to surface hidden vulnerabilities [296-301].


Closing remarks


Priyanka concluded the session by emphasizing that AI should initially be applied to bounded problems (e.g., chess) and that the community must maintain a healthy skepticism about AI’s promises, ensuring that optimism is always tempered by rigorous scrutiny [331-336].


Forward-looking roadmap – The panel distilled the discussion into actionable recommendations:


* Adopt a proportional, risk-based governance model flexible enough for AI’s emergent behaviours [50-54][59-63];


* Provide supervisory relief (“safe-harbour”) for firms that demonstrably implement robust guardrails, model inventories and transparent incident reporting [204-207][296-301];


* Expand the IFSC sandbox into an interoperable platform for cross-regulatory AI pilots, while addressing legal constraints such as currency compatibility [263-291][392-401][404-410];


* Develop a consent-backed API framework giving data processors a voice in rule-making and ensuring privacy-by-design [381-391];


* Invest in sovereign AI infrastructure – domestic semiconductor capability, cloud capacity, data-centre incentives and home-grown foundation models – to reduce dependence on a few foreign suppliers [99-106][107-110];


* Mandate continuous monitoring, explainability audits and periodic impact assessments to detect model drift, bias and concentration risk [68-71][112-115];


* Strengthen cybersecurity by combining traditional controls (MFA, patching) with AI-in-the-loop detection, automated reporting and upskilling programmes [215-224][225-233];


* Clarify intellectual-property rules for AI-generated outputs, establishing ownership among prompt authors, data owners and model creators [317-324].


Unresolved challenges remain: operationalising a risk-based framework when many AI risks are unknown, assigning ex-ante liability across the AI supply chain, and harmonising cross-jurisdictional data handling between the IFSC and domestic regulators. The collective optimism, tempered by a realistic appraisal of systemic vulnerabilities, underscores a strategic imperative for India to lead a balanced, home-grown AI governance model that draws lessons from the US, EU and China while remaining uniquely suited to its digital public-goods ecosystem [126-132][317-324].


Session transcript – Complete transcript of the session
Moderator

Thank you. very much in line with the overall theme of the summit. We are looking at the overall aspect of governance of AI, but not as something that will be set aside and looked at through a different lens altogether, but something that can be looked in as an embedded layer of governance that we already govern technologies with. In the interest of time that we have with us, I will request the panelists to be seated on the dais, and I will request AK Chaudhary sir to please begin his keynote.

Ajay Kumar Chaudhary

Good afternoon to everyone. Distinguished policy makers, regulators, industry leaders, members of the FinTech community, and esteemed guests. I will just very closely following last four days how and what are things happening and it was amazing the type of enthusiasm type of excitement and type of budge around AI and this summit and I believe and that whatever is there actually is a real thing which is happening possibly multiple small applications are going to come in coming days which will solve multiple issues and problems in coming days and we’ll have the real leading role actually to play as a country that is the way we look at it we also will have a great role to play on the data side particularly when we are going to train the models for that obviously when we are going to scale up entire thing then possibly there might be some run -throughs some risk also and those risks something is known, something is unknown and for unknown much cannot be done except we need to do take care of the embedding the governance part.

That is the theme of today’s talk, how we need to embed the governance actually the entire life cycle of the AI, the design of the AI. That is the way we have to look at. Yesterday I was again listening our Honorable Prime Minister and the beautiful way that he summarized the entire theme in one word that is called mano, that is called humanity. So possibly in future I am going to use that instead of responsible AI, that is possibly we can talk about human AI because it is going to touch upon moral and ethical systems, accountable governance to sovereign, national sovereignty, accessible and inclusive and valid. All the aspects what we are going to touch upon, everything is covered in this one word that is called Mano.

Now coming back to my address, proposed address, I’m coming back to this now. It’s indeed a privilege to participate in this dialogue at a defining moment in India’s digital evolution. Over the past decade, India has demonstrated how population -scale digital public infrastructure can drive inclusion, efficiency and trust. Systems built with interoperability, transparency and scale at their core have reshaped financial participation by millions. Today we stand at the next inflection point in that journey. A new tech layer is being superimposed upon this digital foundation. AI, artificial intelligence, what we know it, is not arriving in isolation. It is integrating with payment systems, credit and risk management platforms, supervisory frameworks. and cybersecurity architecture that already operate at national scale.

This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automated existing processes, AI introduced adaptive systems, systems that learn, recalibrate, and influence outcome dynamically. In a country as large and diverse as India, such systems do not merely improve efficiency, that see access, opportunity, and systemic resilience. The question before us is not whether AI will transform finance. It already is. The more fundamental question is whether governance will evolve at the same pace as innovation and whether it will be designed into a system from inception rather than appended later as a compliance of the thought. In financial services, trust is foundational. AI system cannot function as opaque black boxes, especially when they influence access to credit or flag financial behavior.

Governance cannot be an overlay applied after innovation has already been scaled. It must be embedded by design. As Peter Drucker observed, quote, management is doing things right, leadership is doing right things, unquote. In the context of AI in finance, governance is not merely about tech correctness. It is about doing the right things at the right time in ways that preserve trust, resilience, and inclusion. Now, looking at AI as infrastructure tool, it has evolved from analytical assistance to shaping financial outcomes. In credit market, machine learning model analyze transaction histories, behavioral signals, and dynamic cash flows to generate granular borrower assessments. In fraud prevention, AI detects anomalous activities within milliseconds, processing volume beyond earlier systems. AI -enabled detection can reduce certain categories of fraud losses by up to 25 to 30 percent at this point of time in high -value payment environment, what we are witnessing in NPCI.

Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to emerging threats in real time. The diffusion of AI across the financial value chain enhances efficiency and precision. Yet, when models operate on a systemic scale, even marginal inaccuracy can produce material consequences. In finance, where stability and trust are public goods, the tolerance for systemic error is limited. India’s financial system adds its own complexities. Its scale of digital participation, linguistic diversity, demographic heterogeneity, and income variability are also important. heightened model risk. Although trained on narrow urban centric or historically squid data sets may inadvertently misclassify, misprice or exclude segments that digital finance is intended to integrate. It is therefore imperative that we do not view AI as a peripheral tech enhancement.

It must instead be understood as a component of financial infrastructure which is systemically relevant and should be subject to the same standard of resilience, governance and accountability what we expect of any critical financial utility. When we talk about embedded governance in AI, historically regulation in financial services have often responded to innovation after risk gets materialized. Governance in the AI era must however be embedded into systems design. Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle. from conceptualization and data acquisition to model development, deployment and ongoing monitoring. It rests on several foundational pillars. I will mention four. One is proportionality, that is the governance intensity should be risk -based.

It should be risk -based intensity. Fairness and non -discrimination. Third is explainability and transparency. And fourth is accountability, which must be clearly defined. While institutions may collaborate with tech providers or leverage shared infrastructure, responsibility for outcomes cannot be outsourced. Potential vulnerability of AI systems that save their operations, board and senior management must understand that logic, limitations, et cetera. Further, and more importantly, in financial AI, algo efficiency should not compromise equitable opportunity. Now, specifically coming to the financial infrastructure, risk -based approach to AI governance, just I’ll touch upon this. A risk -based approach to AI governance acknowledges that innovation and prudence are not opposing forces. They are complementary. Financial authorities globally are converging on principles that emphasize robustness, resilience, transparency, and human oversight.

India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsibility. The objective is not to slow innovation, but to ensure that systemic risk does not accumulate invisibly. Several risk dimensions deserve particular attention as AI becomes integral to financial systems. It may include multiple issues. I will touch upon only four. One is the model integrity. For instance, it can no longer be viewed as a one -time validation exercise. Intelligent systems must be evaluated across economic cycles. And stress against extreme but plausible scenarios. As data patterns evolve and models recalibrate, continuous oversight becomes inevitable to guard against drift, unintended bias, or reinforcing feedback loops. Second is operational concentration risk. I will detail subsequently also. It is an emerging systemic concern.

Diversification and resilience planning are essential to safeguard continuity. Data governance through data integrity, consent management, purpose limitation, and minimization principle is foundational. Financial data is not merely transactional. It reflects livelihood, behavioral choices, and economic participation. And the fourth item is cybersecurity risks that are amplified in the AI environment. As AI strengthens defense mechanisms, it can also be leveraged by adversaries. Institutions must anticipate adversarial AI and strengthen defensiveness. Detection capability accordingly. A risk -based… framework recognizes that governance cannot be static system that learn and evolve demand demand oversight that is equally dynamic as also measured proportionate and forward looking now just touching upon supervisory intelligence as ai permeates financial institution supervisory framework are also evolving supervisors increasingly leverage advanced analytics to monitor systemic pattern identify anomalies and strengthen early warning mechanism this creates a reciprocal dynamic institution embed ai in operation while oversight bodies integrate intelligence into supervision however governance cannot be regulated driven alone institution capability is critical ai literacy at the board and senior management level is no longer optional leaders must understand model architecture validation methodology vendor dependency and ethical limit implications Effective governance requires interdisciplinary capability bringing together tech, risk, compliance and legal experts as well as business leaders together Institutions that integrate AI governance into their ERM framework strengthen resilience Christian Lagarde has noted, innovation and regulations are not adversity, they are partners in progress That partnership must guide the embedding of AI within finance Coming to the inclusion part, what our Honorable Prime Minister has mentioned about the last A in MANO, that is access and inclusion India’s financial transformation has been anchored in inclusion Over the past decades, tech has lowered barriers, reduced transaction costs and brought millions into the formal financial ecosystem AI now offers an opportunity to deepen that trajectory Through granular dynamic risk assessment Thank you It can reduce reliance on collateral heavy models and static credit history.

Transition level data, cash flow analytics and behaviour indicators can provide more nuanced insight into the repayment capacity, particularly for MSME who are presently outside the traditional credit framework. India is expected to account for a significant share of global digital transition growth this decade. If harnessed responsibly, AI can convert this expanding digital footprint into broader formal access to fair financial services and adoption at scale. Yet, inclusion cannot be assumed. It must be intentionally designed. Algo, trained on historically squid dataset, risks perpetuating structural inequalities. In formal sector, income volatility. In terms of the future, of the Gender -based data gas may distort credit outcomes. Without corrective safeguards, technology may reinforce rather than reduce disparities. Inclusive AI thus requires representativeness in training datasets, periodic impact audits, and community -level feedback mechanism.

It calls for institutional mechanisms that allow individuals to seek clarification and redress where automated decisions affect their financial standing. Now coming to the sovereign and resilient AI foundation. AI governance intersects not only with the institutional risk, but with strategic resilience. Concentration in advanced chips and foundational AI models raise critical consideration for economic sovereignty, financial stability, and I can further add, the national security. Dependency on limited supply chains can create systemic vulnerability. If we may look at AI stability. I’m going to go ahead and start with the AI. more granularly. It rests on five interdependent layers. At the base are specialized semiconductor chips we all know. Above this sits the cloud and data -centric infrastructure that provides scalable processing capacity.

And these systems are fueled by vast data sets drawn from public and proprietary sources. On this foundation operate large foundation models adaptable across domain and finally at the top are application and that embed AI into financial services and everyday economic life. In this context we should be conscious of the fact that one firm controls more than 90 % of advanced chips. Three dominate cloud capacity and a handful command foundation models threatening financial stability and economic sovereignty. We must therefore diversify supply chains to the extent possible through domestic innovation and international collaboration to secure resilient AI foundations. Further, if you look at what is the pathway for ecosystem scaling possibly we have to look at the consent based data sharing, shared AI and risk infrastructure investment in AI literacy and governance at all levels including board and senior management and most importantly encouraging home grown tech and AI capable entities.

It may be appreciated that an India first approach is not inward looking. It is context aware. It ensures that governance reflects local realities while remaining global coherent. Now coming to the operationalization of embedded governance, it may involve multiple issues but I am touching upon 5 to 6 one. The life cycle based model governance institutions should embed governance checkpoints from data acquisition to deployment and post deployment monitoring. obviously clear risk classification framework based on the systemic impact that we should have to have independent review and oversight, enhanced oversight on that. It should be auditable and documentation should be there cross functional governance committee will be helpful no doubt on that and continuous monitoring and feedback loop that basically helps in periodic recalibration by way of external audit.

Consumer centric safeguards, obviously by way of transparent disclosure, clear appeal processes and human intervention mechanisms, are critical to maintain public trust. These pathways ensure that governance is not episodic but embedded, woven into operations’ DNA. Now, just before concluding, I will touch upon the role of India in AI and trust as a cornerstone of financial AI. Finance rests on confidence that systems are fair, stable and accountable. Depositors trust institutions to safeguard assets, borrowers trust systems to assess risk fairly, and markets trust transparency and stability. AI has the potential to enhance this trust by improving fraud detection, accelerating compliance and broadening access and inclusion. But if governance is inadequate, AI can erode confidence rapidly.


Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with public interest. And trust endures when leadership anticipates risk rather than reacts to failure. India stands at a pivotal moment, working across all five layers of the AI stack, and demonstrating the ability to deploy applications at population scale. It is shaping a global agenda for inclusive AI. The convergence of digital infra, regulatory foresight, and entrepreneurial innovation offers a chance to show that scale and safety can coexist and governance can catalyze innovation. Coming to the conclusion, artificial intelligence will shape the next chapter of financial services. But tech alone does not determine outcomes. Institutional design does. Design choices, governance frameworks and institutional culture will determine whether AI strengthens financial resilience and inclusion or not.


Embedded governance is not a regulatory burden. It is a strategic imperative. It ensures that innovation is sustainable, trust is preserved and system stability is protected. If we embed fairness, transparency, accountability and proportional oversight into the architecture of financial AI from inception, India can chart a distinctive path, one that aligns tech ambition with ethical responsibility. Let us approach this moment not with hesitation but with disciplined foresight. Let us ensure that as our financial systems become more intelligent, our governance becomes more robust, our oversight becomes more anticipatory and our commitment to inclusion more resolute. In doing so, we will not only harness the power of AI, but we will also shape it to serve the broader goals of stability, opportunity and shared prosperity. Thank you.

Moderator

Thank you, sir. That was very insightful and sets the context for the panel discussion to follow. We could also request you, if you would want, you could join us in the audience. That would be great. Over to you, Priyanka, for introduction to the panelists and then taking this discussion forward.

Priyanka Jain

Thank you so much. Thank you. Our panelists need no introduction. I’m going to keep it very fast so that we can make the most of, you know, capturing their thoughts. First, I have with me Mr. Sanjeev Sanyal. Sir is the economic advisor to the Prime Minister. He’s in the Prime Minister’s office and he needs no introduction. If I actually go by what AI has given me as his persona, AI summarized it as a macro thinker, a historian, a historian of structural cycles and a strategic geopolitical lens. Fortunately, today we have the OG himself in the room. And without any further ado, I want to ask him my first question. So historically, countries that have mastered general purpose technologies, right from the steam engine, early electricity, Internet, they’ve gained outsized economic advantage.

Is AI that inflection point for India? And if so, does early well -designed self -governance accelerate trust or does it deny us of any competitive momentum?

Sanjeev Sanyal

Yes, it is important that you are engaging in it, but let me point out that it’s not always the first movers who benefit from it and it’s not the case that even those who invent these technologies know where they’re headed. I mean, just to give you an example, the European Renaissance, which led ultimately to the Western domination of the world for half a millennium, was based on three technologies. One was the printing press, the other was gunpowder, and the third was mathematics. The first two were invented by the Chinese and the third was invented by the Indians, but it is the Europeans that took it, owned it and dominated the world. So, one important thing to recognize in all of this is that do not try and necessarily guess where this is headed.

But of course, we need to engage in it. We need to engage in these technologies and build on them. Otherwise, you know, somebody will take your technology and dominate you. So it is very, very important that India does participate in this AI revolution. But again, in this context, let me say, that does not mean that we should spend time trying to work out exactly where this is headed. For example, when the social media revolution was happening 20 years ago, when Facebook and all these things came about, the marketing tool of the people at that time was, see, now everybody can talk to everybody, we will all move to the golden mean, because we will all have similar views, because we can all talk to each other, and so on.

But in fact, the algorithms went out of their way to put us in buckets and echo chambers. So in fact, we ended up, social media ended up doing exactly the opposite of what the, you know, the technology experts were telling us social media would do. now why does this apply to AI as well and here I am going to talk about this risk based thing that everybody is talking about let me tell you that you cannot actually put AI or any types of AI into any real risk bucket because this is an emergent evolving thing even more so than social media so consequently if you are saying I am going to do risk based it means that you have some assessment of where that thing will go and I am telling you that it is almost impossible to do this so for example in my view the European way in which they are going about and having you know risk they are the pioneers of risk based systems I understand it is pretty obvious that you don’t want AI to take over our nuclear buttons but other than that the risk levels of most of the other things is utterly unknown this is a bad thing because I am not saying that because I am not saying that because I am not saying that something totally innocuous might go and blow up the whole system because these things are emerging they are evolving, they are interconnecting therefore I actually do not think the risk a system that is largely based on perceptions of risk will work because it is not possible ex -ante to work out what is dangerous or for that matter what is beneficial now what should you do if you can’t tell what is going to happen I am telling you the European system is either going to be strangulate the system by being too stringent or it will open things up because it wants progress but will ultimately the risk based system will not be able to take control of it so the other model that is there is of China which is the state knows best but we know from the experience we had with the Wuhan virus that the state can very often lose control of things that are happening and it can spiral out.

The third model that is mostly the American model is to have a laissez -faire and let anybody do whatever they want. Now the dangers of that are obvious. In my view, the way they control it is through tort laws, i .e. if something goes badly wrong, you will then end up with a billion dollar fine or something like that. So in some ways it works better because it’s ex post rather than ex ante system. It depends on those who are running the system having skin in the game, i .e. your company will go down and you will be jailed and you will have a billion dollar fine on it. If things go wrong, that is how they are doing it.

It’s an ex post punishment. But as you can tell, that is some ways, is an ex post system and if something really bad goes wrong, you know, it will you’ll only find, you know you can punish the person after the horse has already bolted you are going to lock it. So all these systems have their downsides but I’m just telling you that whatever system we design in order to control this has got to be based on being agnostic to how this whole thing works going forward. Now, I know I’m taking up their time but give me a minute. There are other systems that we manage where we have no idea where they are going. Take for example the stock market.

You and I don’t know where the stock market will be in a decade’s time. It’s a complex system just like artificial intelligence but we manage it. How do we do it? Well, we do it by creating a framework which does the following thing. It first of all has institutes audits. And enforces transparency and explainability. if you can’t explain your accounts you can’t be in the stock market two it has systems of shutting things down when things go wrong so there are every stock market will have when things spiral out it shuts down three it deliberately creates systems of separation for example this you know there are the same company cannot you know be a bank as well as being a company that so there are conflict of interest so in the same way AI will need to create compartments I am personally very suspicious of any idea of the internet of everything and the AI of everything that would be a disaster I think we need to be willing to allow compartmentalized AI I think it will be more efficient anyway from an energy perspective but I think it’s also safer and most importantly you need to create skin in the game, i .e.

ex ante tell people who will be held responsible when things go wrong. So, in the case of financial markets, the directors of the company are the ones hauled up when things go wrong, or the CEO. In the case of AI, we will have situations where when things go wrong, the person who made the algorithm will blame the data, the data guy will blame the company, the guy who is the user, all kinds of things will happen. We need to ex ante decide who in the system will be hauled up when things go wrong. That will create skin in the game. But we cannot wait for something to go wrong and then this happens, we need to decide this ex ante.

So, all of these things exist in the case of financial regulation. I personally think a similar system.

Priyanka Jain

Rightly put technology moves fast but trust takes time to build and compartmentalization is a great way to de -risk in some form and also look at it with a focused agenda and attention. With that we can actually bring in Mr. Kamath. Mr. Kamath is from the GIF City IFSC, a compartmentalized global financial hub in a way that India has created and we are very fortunate to have you sir here GIF City actually operates at a unique intersection of innovation and global credibility. It competes with the likes of Singapore, Dubai, London. Can GIF City become a lab for AI governance and we wanted to know your view sir and especially a great segue from Sanjeev sir on how we can look at it differently in a compartmentalized manner.

Praveen Kamat

See if you see a Gift IFSC as a jurisdiction, it is just, it was set up in 2015, so it’s just 11 years old. We are building it up from scratch. Now, when you build something from scratch and when you have a brand new regulator, like IFSC which was created in 2020, you start with a clean slate. So that means you have more leg room and you have more space to experiment. So we don’t have baggage of the legacy systems. So if you see the way we have evolved over the last six years, IFSC, the way regulations have evolved, we have all the verticals across finance, capital markets, banking, insurance, pensions. And we have introduced new verticals, ship build, ship leasing, aircraft leasing, ancillary services and so on.

You know, in line with all of the global financial centers. So with respect to experimentation, when you use the word lab, you imply experimentation. So the appetite for experimentation and the appetite for taking risks, is much higher than other, say, domestic regulators or regulators overseas because of the absence of retail investors. so yes gift city has an immense ability to to come across as a lab uh for ai governance however building a financial center is a is is like a 45 kilometer marathon you know it’s not a 8 kilometer dream run so it will take its time uh we are on the growth trajectory on the upward trajectory and there is a certain gestation period for every financial center that that period gestation period cannot be skipped we are in that gestation period so once we reach critical mass we will we’re going to see a lot of things happening and coming out of gift ifsc.

Priyanka Jain

Thank you actually i will go murli sir and the rbi free ai report or the framework on uh you know any enablement of ethical ai i think it’s very forward looking it is actually building on existing regulatory controls and architecture to bring in you know the principal base ai ecosystem so my question to you is If a company has embedded robust controls, model inventories, bias testing, continuous monitoring, should regulators reward and discipline such companies with calibrated supervisory relief? And in other words, is there a safe harbor for somebody who’s, you know, who’s put in risk -based controls but, you know, has been a first -time defaulter?

⁠Murlidhar Manchala

Yeah. In fact, in the same report, it was suggested that the entities which put in place all the guardrails and then, in case of any lapses, are doing the root cause analysis and trying to address the problem, the regulator should have a lenient supervisory approach towards them. And it should be seen as an instrument rather than an overarching risk area, so that is something which we recognize. So on both fronts: one is, we understand the technology is probabilistic and it can have lapses, but in terms of governance, if you put in the guardrails, if you put in the processes, if you put in the mechanisms across the lifecycle to see that the customer doesn’t face the risk, that is the main focus, the customer. It should be transparent to the customer, it should not be a black box, rather it can be a glass box, and it should be understandable to the customers. So once all the measures are taken into consideration by the entity, in terms of governance as well as the processes, because of the nature of the technology presently we understand it can lead to some aberrations, but then as long as it is taken in a right process, you have these incident reporting mechanisms, you will have the manual override, right, so once you have these controls and the right approach, the supervision should not treat it as a systemic or a greater risk; rather you should allow a first-time lapse. And then in terms of, say, rewarding it, we also suggested that there would be an award for AI in finance particularly, there are specific works done in terms

Priyanka Jain

Thank you. I think Vikram your advantage point here because you are a global infrastructure player. You are seeing regulatory trends across the US, UK, Singapore and many other markets. You heard about how the panel has been shaping right from the policy makers to international financial center and also RBI. Want to know as an infrastructure provider how are you looking at cyber security and its evolution in the age of generative AI?

Vikram Kishore Bhattacharya

Thanks so much, Priyanka. I would just make one correction: we are a cloud service provider, not merely an infrastructure provider. I think one of the things is, for good or for worse, we have seen the benefits of generative AI, but we are also seeing bad actors use generative AI for phishing attacks, for credential attacks, for malicious code. So, you know, with the good come the challenges. But one of the more important elements is that while it is serving as an accelerant to existing methods, I don’t think it is foundationally changing the nature of the attacks. And, in fact, there was a report that came out in 2025.

It talks about how generative AI has lowered the barriers for a lot of these threat actors. But I go back to what I said: because it has not foundationally changed, the same principles and the same foundations of cybersecurity that held true before generative AI still hold true. So, you know, multi-factor authentication, strong passwords, regular updates, scanning your systems. And I think it is imperative for organizations, especially in financial services, which are always being attacked, to get these fundamentals right. India is a country where not only the banks but a huge citizenry with different levels of financial literacy is exposed, so the question is how you use these tools to actually safeguard the financial system. In that respect, a lot of kudos to the RBI for thinking about it along these principle-based lines, but also to the banks for actually leveraging these technologies. One of the things you always need to do is trust service providers like us, but banks should also verify, and that is done through standards like ISO or NIST and through independent third-party reports that validate the various controls that are there. And, as I was saying a little earlier, you have to become an active participant in cybersecurity; you can no longer be a passive passenger in it, because the landscape is changing, and as more and more people digitize, so do the people who are willing and looking to attack any vulnerability.

So GenAI does provide you with the tools, because, again, I am a believer not just in keeping a human in the loop but in putting AI in the loop. So how do you use these technologies to get faster responses? How do you automate scanning? How do you automate generating reports, so you can make those value judgments at the right time? That requires skilling, and it requires awareness, not just about something like AWS or the cloud; the work that regulators as well as cloud service providers are doing is to run these awareness programs, so that the more people understand the technology, the better the framework and the groundwork will be for them to adopt it.

Thank you.
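To make the automated scan-and-report loop described above concrete, here is a minimal, illustrative Python sketch; the Finding fields, the summarise() helper (standing in for an LLM call) and the severity threshold are assumptions for illustration, not any particular vendor’s API, and a human still acts on the output.

    # Illustrative sketch only: an "AI in the loop" triage pass over scan findings.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Finding:
        host: str
        issue: str
        severity: float  # 0.0 (informational) to 10.0 (critical), CVSS-like scale

    def summarise(findings: List[Finding]) -> str:
        """Stand-in for an LLM call that drafts a human-readable report."""
        lines = [f"{f.host}: {f.issue} (severity {f.severity:.1f})" for f in findings]
        return "\n".join(lines) if lines else "No findings above threshold."

    def triage(findings: List[Finding], threshold: float = 7.0) -> str:
        """Automate the routine pass; a human still makes the final value judgment."""
        urgent = sorted((f for f in findings if f.severity >= threshold),
                        key=lambda f: f.severity, reverse=True)
        return summarise(urgent)

    if __name__ == "__main__":
        scan = [Finding("pay-gw-01", "outdated TLS configuration", 7.4),
                Finding("hr-portal", "missing security headers", 4.2),
                Finding("core-db", "unpatched driver vulnerability", 9.1)]
        print(triage(scan))  # reviewed by a human before any action is taken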

Priyanka Jain

I would also refer to our earlier discussion this afternoon, where, rather than thinking about a human in the loop, we think of AI as being in the loop to move forward; I think that is a great paradigm shift we can look at. Sanjeev sir, I am going to come back to you, but I also want to give a backdrop to this question. India has never simply adopted technology: we have created it, we have adapted it, we have scaled it and we have governed it in our own way. We did it with identity, we did it with payments and we did it with digital public infrastructure. As governance frameworks around AI begin to emerge, they are also diverging globally, with the US being innovation-led, the EU compliance-led and China state-led. So on which axis is India going to strategically position itself, and how are you looking at it from your lens?

Sanjeev Sanyal

So I think I will continue from what I was saying earlier. Now, we need to be very, very careful that we don’t end up with a bureaucratic risk-based system. This is an emergent technology. It will evolve in all different ways, and we will have to be very, very creative about this. There is a difference between, say, systems as architecture and AI as an emerging thing. AI is not just infrastructure in the sense that, say, UPI is infrastructure, or digital identity is infrastructure; that kind of infrastructure does not in itself have emergent behaviors. AI has emergent behaviors, i.e., it evolves and interacts with other forms of AI, which is why I said you need to be fundamentally suspicious of anybody who says they have a very clear idea where this whole thing is going.

We don’t at all have a clear idea. Nobody on the planet has a clear idea where it’s going. So we do need some regulation. We need to be very, very careful about having humans in the loop. As I said right in the beginning, you need to have switch-off buttons for these systems; you need to create what in finance are called Chinese walls, which separate different tracks. As I said earlier, I am not a huge fan of the AI of everything; I think that is dangerous and will lead to bad outcomes. However, AI can be run in compartments rather well, and why don’t we use that? In any case it uses less energy, and in any case it is better at solving bounded problems. When you give AI an unbounded problem, it tends to hallucinate, because unfortunately it has learnt another human trait: it does not like to tell you “I don’t know”, it would rather make up stuff. So consequently I think it is better that we give it bounded problems, let it solve those bounded problems and get back to us.

Going for this AI or internet of everything, where everything is interconnected, sounds very good. But just last July, or the July before that, we saw what happened when one very small piece of code in a Microsoft program, which was by the way static, not even a fluid one, went wrong: it caused havoc in airports, ATMs, all kinds of things around the world. Now imagine the same thing happening in a system that has emergent characteristics; by the time you fix one bit of it, the problem has flowed into some other part of the system. So I personally think we need to create firewalls. A forest fire is also an emergent thing, and the way we control it is not by predicting where the fire will start and where it will go; we just put in these firewalls from time to time. We do that in finance all the time: we do not try to work out what the conflict of interest is, we simply ban situations where a conflict of interest will emerge.

The same is true of skin in the game. I think we need to work out ex ante where in the chain the responsibility lies. I personally think it should sit at the level where the algorithm is made public for use: whoever is making it available, even if their data is wrong, cannot blame the data; they are responsible. Somebody else may disagree, whatever; the point of the matter is that we need very clear points of punishment when things go wrong, and we need audit systems for explainability. There is nothing very deep about this; after all, every company listed in the stock market gets itself audited several times a year. Why can’t we ask major AI companies to be audited?

If you cannot explain why your results are turning out too bad, you shut it down. We do that even with relatively small companies: they have to go to a chartered accountant several times a year, and the chartered accountant has to sign things off. Maybe we have a chartered AI audit for anything that goes beyond some threshold. And given how potentially dangerous this is, and how lucrative it is as well, I don’t think we should see that as a problem. That is rather than doing what I think many others propose, who say, okay, they understand it is dangerous, so why don’t we have a risk-based approach. Now, ex ante, you cannot work the risks out. All you will do is end up with regulations that become just too stringent and kill the sector.

Rather, along the way, you have a system of explainability audits. With that, let me hand it back.
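The “chartered AI audit beyond a threshold” idea can be read as a simple gating rule. The Python sketch below is only an illustration of that reading; the decision-volume threshold, the AuditRecord fields and the one-year validity window are hypothetical choices, not anything proposed in the discussion.

    # Illustrative sketch only: an explainability-audit gate above a usage threshold.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class AuditRecord:
        auditor: str
        signed_on: date
        explainability_ok: bool

    @dataclass
    class Deployment:
        name: str
        monthly_decisions: int
        audit: Optional[AuditRecord] = None

    AUDIT_THRESHOLD = 100_000  # hypothetical: decisions per month that trigger an audit duty

    def may_serve(d: Deployment, today: date) -> bool:
        if d.monthly_decisions < AUDIT_THRESHOLD:
            return True                              # below threshold: no audit required
        if d.audit is None or not d.audit.explainability_ok:
            return False                             # "if you cannot explain it, shut it down"
        return (today - d.audit.signed_on).days <= 365  # the signed audit must be recent

    dep = Deployment("retail-credit-model", 2_400_000,
                     AuditRecord("XYZ & Co.", date(2025, 4, 1), True))
    print(may_serve(dep, date(2026, 2, 1)))  # True while the signed audit is current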

Priyanka Jain

Mr. Kamat, I’m going to come to you. Economists worry about both under-regulation, which creates instability, and over-regulation, which kills dynamism. Where do you see GIFT City? Because, again, it sits at an intersection of the local and the global, and I want to hear your views on it.

Praveen Kamat

That is the problem facing all regulators worldwide across the financial sector. Over-regulation repels innovation; under-regulation repels serious long-term capital. So where do you draw the balancing equilibrium point? Let me explain it with a simple example. I joined SEBI, the Securities and Exchange Board of India, in 2008 and was posted to the surveillance department. In 2008 itself, the financial crisis was in full flow. In our surveillance systems, which are very, very powerful systems, we noticed 1,000 orders being entered in a span of a couple of microseconds. We were wondering how this was possible, how a human could enter so many orders. Then we came to know that algorithmic trading terminals had been deployed by certain entities in the stock market.

When we dug deeper, we came to know that it was initially deployed in 2004 by one entity, and then slowly the volumes kept increasing. They did not reach a critical point at first, but they were slowly increasing. In 2010 the inflection point came, when it reached critical mass, and SEBI came up with guidelines to safeguard retail investors and preserve financial stability. So here is a perfect example where an innovation in the capital market, algorithmic trading, was deployed by entities for a good six years. It was not regulated, it was being used, and the regulator did not do anything to stop it. But when the regulator issued the guidelines, the necessary safeguards were put in place.

However, at the same time, no brakes were applied on the rollout of the innovation, so algorithmic trading, even after the guidelines, grew exponentially in the Indian capital market to where it is today. In the same manner, we hope to facilitate innovation in GIFT IFSC. We have sandboxes in place for startups as well as established entities; they can roll out their AI pilots in the sandbox. The goal is to cap the risk. As sir said, it is very difficult to identify all the risks, but whatever risks can be identified, let us cap them without going into the technical mechanics, the internal mechanics, and then see how it plays out. Based on the data you receive from the experimentation, the regulations can be tailored accordingly.

Thank you.
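One way to picture “capping the risk” in a sandbox pilot while collecting data for later rule-making is a wrapper that enforces an exposure ceiling and logs every decision. The following Python sketch is purely illustrative; the cap value and the log fields are assumptions, not an IFSC or RBI sandbox specification.

    # Illustrative sketch only: a sandbox wrapper that caps exposure and logs outcomes.
    class SandboxPilot:
        def __init__(self, name: str, exposure_cap: float):
            self.name = name
            self.exposure_cap = exposure_cap   # hypothetical aggregate ceiling for the pilot
            self.exposure = 0.0
            self.log = []                      # outcome data the regulator can later study

        def execute(self, value: float, decision: str) -> bool:
            accepted = self.exposure + value <= self.exposure_cap
            if accepted:
                self.exposure += value
            self.log.append({"value": value, "decision": decision, "accepted": accepted})
            return accepted

    pilot = SandboxPilot("algo-credit-pilot", exposure_cap=1_000_000.0)
    pilot.execute(400_000.0, "approve")
    pilot.execute(700_000.0, "approve")        # rejected: would breach the cap
    print(sum(e["accepted"] for e in pilot.log), "of", len(pilot.log), "accepted")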

Priyanka Jain

I know we are at time, but because I have such a prestigious panel I am going to extend by another few minutes. Can I come back to you with a quick rapid fire? Could you tell us one risk that we are underestimating when it comes to AI?

⁠Murlidhar Manchala

No, in general we would not like to talk about risk; that is our approach. Our keynote speaker, Ajay Chaudhary, was also at the helm when the department was formed. So the risk, maybe, is underestimating the risk itself. That is what I can say, and it can be addressed only through governance, particularly with the present emergence of the technology.

Priyanka Jain

Actually, I like what Sanjeev sir was telling us: it is never going to be risk-free, but we will have to move forward. We will have to figure it out, and we will have to do it in as compartmentalized a manner as possible. So, any risk that we are overestimating? Anybody from the panel who wants to talk about a risk we are overestimating? Let's give Vikram a chance.

Vikram Kishore Bhattacharya

I mean, I think the fundamental point is that there is no zero risk; it is how you equip yourself to handle risks. A point that Mr. Chaudhary and Mr. Sanyal also made is: as a regulator, or in a regulated environment, how do you create the tools to be nimble, to adapt as the technology adapts? I think that is the important element. Right now the tools are there; there is so much we can do that maybe we are not doing as well as we could, so we should focus on the here and now and equip ourselves to be nimble enough to deal with anything that comes, because anybody who tells you what is coming with a certain amount of certainty, I take that with a pinch of salt.

I think the future is a little unknowable at this point in time, but there is so much that is known, and we should be able to tackle that right now.

Priyanka Jain

I think that's great. Sanyal sir, I'm going to come to you again. One reform that India must prioritize: what is your view on it?

Sanjeev Sanyal

That's copyright law. Who is the owner of a particular innovation? At which point do you call it an innovation? Is that innovation owned by the person who put the prompt in? Is it owned by the person on whose data it was trained? Or does it belong to the algorithm that created it? For all of these, I would say we need to begin to think of a judicial system that can deal with these kinds of problems; we already have a crowded judicial system. But do remember that these very different kinds of, I would almost call them philosophical, problems are going to turn up at our doorstep very, very quickly, and we need to be thinking about them.

Thank you. When UPI came in, I think about a decade ago, and we have the benefit of having the NPCI chairman himself in the room, it was more than payments; it was trust in an invisible system. Today AI is becoming that invisible system, sitting quietly in our credit-underwriting decisions, our onboarding flows, grievance-redressal systems, even regulatory reporting. It was a great discussion on how we embed trust in an AI system that is fast evolving, because at the end of the day we are thinking about the theme of the summit, which is people, planet and progress, all in the same breath. People: how do we protect them from opaque systems or bias?

Planet: how do we scale sustainably and responsibly? And progress: because it does not have to be only fast innovation, it has to be fair innovation. So a lot of great thoughts came out in the panel discussion today, and I am extremely grateful to everybody who made time for it. Sanjeev sir, could we have some closing thoughts from your side? Well, you mentioned trust. Let me say that while it is fair to trust UPI, as I said, it is, relatively speaking, not an emergent system. Deliberately so, in fact: you don't want UPI to be innovating on the interface. It can innovate at the back end however much you want, but you don't want any surprises.

Imagine I send somebody 100 rupees and he gets 120 rupees, or 80 rupees, or on average he gets 100 rupees; that cannot be the basis of UPI. So in that sense the UPI-based system is backbone infrastructure: it is deliberately not emergent. But AI systems are emergent. They can give you different answers at different points in time depending on what they are trained for, what the context is, what inputs you have, and in fact that is the innovation. If you fix it in a box to start with, you won't get the innovation. On the other hand, if you give it something open-ended, yes, presumably it will improve, but sometimes it may deteriorate, and sometimes it may lie to you. So what I am trying to say is that in the case of artificial intelligence, we should use it, but we certainly should not trust it; in fact its future is based on a certain level of healthy skepticism that we must have about its capabilities. It will do amazing things, but in my view we should be clear that it is probably much, much better at solving bounded problems. It can play chess, for example, very, very well, but I doubt it can plan your career; that is an unbounded problem. If that is how you think about AI, then what you need to do, as I said, is begin to think this through in terms of how you apply it in particular boxes, where there is a clear set of things you are trying to do.

So as I said, bounded problems and even there, verify.
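The recurring prescription here, run AI on bounded problems, inside compartments, with an explicit switch-off button, can be sketched as a thin wrapper around a model call. The Python below is a minimal illustration under those assumptions; the compartment name, the allowed task and the stubbed model call are hypothetical, not any deployed system.

    # Illustrative sketch only: a bounded "compartment" with an operator kill switch.
    class KillSwitch:
        def __init__(self):
            self.enabled = True
        def disable(self):
            self.enabled = False

    class Compartment:
        """Wraps a model so it only ever handles one bounded task."""
        def __init__(self, name: str, allowed_task: str, switch: KillSwitch):
            self.name = name
            self.allowed_task = allowed_task
            self.switch = switch
        def run(self, task: str, payload: dict) -> dict:
            if not self.switch.enabled:
                raise RuntimeError(f"{self.name}: switched off by the operator")
            if task != self.allowed_task:
                # the "Chinese wall": refuse anything outside the bounded problem
                raise PermissionError(f"{self.name}: task '{task}' is not permitted")
            return {"task": task, "result": "stub-decision", "inputs": payload}  # stubbed model call

    switch = KillSwitch()
    credit = Compartment("credit-scoring", "score_msme_loan", switch)
    print(credit.run("score_msme_loan", {"cash_flow": 1_200_000}))
    switch.disable()   # the switch-off button: any further call now raises RuntimeError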

Priyanka Jain

With that, we have audience questions. We have one question from Aditya, the founder of First Eye.

Audience

Thank you. Good evening. That was an incredible set of points. I actually made some really interesting notes about the capital-markets parallel that you drew, Sanjeev; I thought that was a really interesting way of looking at AI. We have been to so many summits, and this is a very, very interesting way you have put it, about risk and ex ante versus ex post. I had one question for you, and I had two suggestions or requests for Praveen and Davis. From an AI stack perspective, every summit or every conversation across different countries is looking at all the different components of the stack. And there are two things that come up in most of these conversations: the sovereign data asset, and the leverage that comes out of it in terms of tools and models and so on.

What is India's perspective in all of this, in terms of sovereign data-asset utilization and model leverage? Different countries are looking at their stack as their own stack, to which they will give you access, and so on. So it would be great to get your perspective on that.

Sanjeev Sanyal

So obviously, India, with its very large population, has stacks of information on all kinds of things, from health to consumer behavior, et cetera. So in some ways this is a good place for a huge amount of data for experimentation on human behavior and so on. But of course, if data is the new oil, we need to be clear that we own the rights to it if it is our data. I am not even getting into the privacy issue here; I am assuming that has all been taken care of and we are using anonymized data. Even then, we should at least have the rights to that data and also to some part of the processing of it. There is no point in saying that we have the data but we neither have the rights to it, nor the oil rigs to pump it out, nor the refineries to process this new oil. This is the context in which, as you may have seen, in the latest budget we announced almost a quarter-of-a-century tax holiday for putting up data centers in this country. That is not a trivial thing to do. Why are we doing it? Basically because, as I said, data centers are the oil rigs of this new kind of oil.

And then, of course, we need new companies that will process this oil; those are the new refineries. We have created one, an LLM, but frankly, everybody gets very excited about LLMs. The LLM is only a very limited, and in my view not even the most interesting, use of artificial intelligence. It just happens to be linguistically talented, and consequently we use it for that. But there are many, many more interesting uses of AI. And, as I keep coming back to and stating, we need to create an ecosystem, and for that ecosystem we all say, oh, you need half a trillion dollars of investment. Actually, no. Much of where you will end up with the use of these refineries, so to speak, will be quite bounded problems in certain spaces.

So there is more than enough space for startups with much more modest budgets to do interesting things in AI. And I am not just talking about people building use cases on top of other people's models; I am talking about literally bottom-up uses of AI. So I think there is a lot to be done here. It is an open space. This is basically like discovering the Americas. Yes, Spain did have an initial starting advantage, but the greatest empire in the world was actually built by Britain, which was a late starter. So there are many, many countries in the world that you would not think of today as particular players in this game who will also turn up here.

And one of them could do much, much better than the players you think are at the cutting edge today. So this is an emergent situation; all kinds of unintended consequences and uses, positive and negative, will come out of all of this. I think the key here is to be nimble, keep your eyes open, including on the regulatory front, and not have set ideas about where this whole thing is headed, because, frankly, we don't know.

Audience

No, thanks for that. You know, I am the founder of First Eye, which is a customer data platform. We work with a large number of enterprises on data, all consented, so we get a ringside view of the application of all of what you are saying. And this kind of leads me to the suggestion. As a supplement, we have AIKosh, which is a repository of data sets, and it is growing. And for the financial sector as well, we are looking to aggregate, starting with synthetic data, and then maybe take correlated data from the regulated entities with their consent, so that would come into use. Okay, awesome. Actually, that goes towards my suggestion for the two of you. Praveen, when you spoke about the sandbox from an IFSCA perspective, I think the ability to extend that beyond just IFSCA to the other regulators as well is something that will be very, very interesting, at least for folks like us, because we work with a number of entities that cut across different regulators. An associated point is that today there are so many regulations coming in, and there are two opportunities that I see.

One is that there are different interpretations of the regulations by different entities. The second is that, as a large data processor, not a data owner but a data processor, we are one of the stakeholders in that whole process, and today we may not have adequate access or a seat at the table from a regulatory-interpretation standpoint. There is, I think, an opportunity to define something like a consent-backed API for data consumption, for example, and to have a regulatory definition of that with participation from a data processor like us. We would love to see if there are processes that allow somebody like us to engage with the regulators.
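The “consent-backed API for data consumption” suggestion can be pictured with a small sketch in which every data pull is checked against a recorded, unexpired consent before anything is released. The Python below is only a sketch of that idea; the ConsentStore interface, the purpose strings and the field names are assumptions, not a regulatory specification.

    # Illustrative sketch only: every data pull is checked against recorded consent.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Consent:
        customer_id: str
        purpose: str            # e.g. "credit_underwriting"; purpose strings are hypothetical
        expires_at: datetime

    class ConsentStore:
        def __init__(self):
            self._consents = {}
        def grant(self, c: Consent):
            self._consents[(c.customer_id, c.purpose)] = c
        def is_valid(self, customer_id: str, purpose: str, now: datetime) -> bool:
            c = self._consents.get((customer_id, purpose))
            return c is not None and now < c.expires_at

    def fetch_records(store: ConsentStore, customer_id: str, purpose: str, now: datetime) -> dict:
        if not store.is_valid(customer_id, purpose, now):
            raise PermissionError("no valid consent on record for this purpose")
        return {"customer_id": customer_id, "records": ["<redacted demo payload>"]}

    store = ConsentStore()
    store.grant(Consent("CUST-42", "credit_underwriting", datetime(2026, 12, 31)))
    print(fetch_records(store, "CUST-42", "credit_underwriting", datetime(2026, 6, 1)))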

Praveen Kamat

We are open to that idea, but you have to remember one thing: the IFSC is a separate jurisdiction. It has its own set of rules, which are different from domestic India. There is an interoperable sandbox mechanism in place between IFSCA, RBI, SEBI and IRDAI, so a solution that spans the four regulators can be tested within the sandbox. But the issue is not technological, and it is not fiscal or financial; it is legal. For example, in India INR transactions are the norm, right? In the IFSC, INR transactions are not permitted; you have 16 foreign currencies that are enabled, and you have to transact in those 16 currencies. So if your solution is not compatible across these areas, just to give you an example, then the sandbox experimentation will not go through.

There are a lot more nuances like this which affect the rollout of pilots within the interoperable sandbox; that is just to give you an example. With respect to the movement and processing of data, I will not comment at the moment because there are certain things in the works at IFSCA. So I will leave that to my RBI colleague.

⁠Murlidhar Manchala

So, just as my colleague said, we already have an interoperable sandbox across regulators, and it is on tap. Earlier it was theme-based, but now it is on tap, and any type of product can be tested in the sandbox. But just to clarify: an entity comes to the sandbox only when it feels that its existing product or service may be violating one of the regulations. So very few entities come to the sandbox, because in general they are not required to; if they feel they are compliant with the regulations, there is no need to come. But we are also thinking of another sandbox where, beyond monitoring against the regulations, we can support the innovation in terms of, say, compute, data or tools. That is also in the thought process.

Priyanka Jain

We have been one of the beneficiaries of the sandbox and the hackathon at 5Money, and the process has been phenomenal, the way the RBI fintech teams engaged, so maybe, Aditya, I can share some notes with you offline. But thank you; this has been a phenomenal panel and a great discussion on embedded governance. As AI makes space for itself in all things financial services, how do we make space for governance in AI? That was the theme of the discussion, and I am very pleased to hear the views of this panel and grateful to everyone for making the time. Thank you, everyone. (Applause.) I am actually not going to say anything more, apart from thank you, and we will have a quick presentation of mementos from the India AI Mission; my colleague Kriti will do that. (Applause.) Thank you.


Related Resources: Knowledge base sources related to the discussion topics (31)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“India’s population‑scale public digital infrastructure such as UPI and other platforms, showing how interoperability, transparency and scale have reshaped financial participation.”

The knowledge base notes that India’s digital public infrastructure emphasizes trusted, interoperable, and scalable systems that transform financial inclusion, confirming the report’s description of UPI-style infrastructure [S29] and the broader DPI discussion [S42].

Additional Context (medium)

“AI is now being super‑imposed on this foundation, integrating with payment systems, credit‑risk platforms, supervisory frameworks and cybersecurity architectures that already operate at national scale.”

Sources describe AI being applied to payment networks (e.g., MasterCard’s AI use in payments) and AI’s role in digital public infrastructure, providing context that AI is being layered onto existing financial and cybersecurity systems [S106] and within India’s DPI ecosystem [S42].

Confirmed (high)

“Risk‑based governance treats AI as a systemic financial utility.”

A dedicated discussion on a risk-based AI policy for the banking sector confirms that a risk-based, systemic approach to AI governance is being advocated for finance [S1].

Additional Context (medium)

“Embedded governance pillars: proportionality, fairness & non‑discrimination, explainability & transparency, accountability.”

The knowledge base lists fairness, non-discrimination, and the need for governance frameworks that include accountability and transparency as core AI governance concerns, aligning with the reported pillars [S103] and the broader ethical AI discussion [S102].

Additional Context (low)

“The moderator reminded participants that the summit’s overarching aim was to treat AI governance as an embedded layer within the existing technology‑oversight framework.”

Opening remarks from the AI Policy Summit emphasize shaping governance to be inclusive and integrated across the technology landscape, providing contextual support for the claim of an “embedded layer” approach [S98].

External Sources (115)
S1
Secure Finance Risk-Based AI Policy for the Banking Sector — -Vikram Kishore Bhattacharya- Role: Cloud service provider representative; expertise in cybersecurity and cloud infrastr…
S2
Secure Finance Risk-Based AI Policy for the Banking Sector — – Ajay Kumar Chaudhary- Moderator – Ajay Kumar Chaudhary- Murlidhar Manchala- Sanjeev Sanyal – Ajay Kumar Chaudhary- S…
S3
Secure Finance Risk-Based AI Policy for the Banking Sector — -Praveen Kamat- Role: Official from GIFT City IFSC (International Financial Services Centre); expertise in financial reg…
S4
Secure Finance Risk-Based AI Policy for the Banking Sector — -Sanjeev Sanyal- Role: Economic Advisor to the Prime Minister; described as a macro thinker, historian, and strategic ge…
S5
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Thank you so much. Thank you. Our panelists need no introduction. I’m going to keep it very fast so that we can make the…
S6
Secure Finance Risk-Based AI Policy for the Banking Sector — -Priyanka Jain- Role: Panel moderator and discussion facilitator; mentioned as being from 5Money and having experience w…
S7
Global Standards for a Sustainable Digital Future — Maike Luiken: Well, we do continue to develop standards around AI. And of course, we talked here a lot about AI based on…
S8
Secure Finance Risk-Based AI Policy for the Banking Sector — – Ajay Kumar Chaudhary- Murlidhar Manchala
S9
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S10
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S11
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S12
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S13
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S14
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S15
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — And together with these sutras, there are six pillars under which these recommendations are classified. And these have, …
S16
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously th…
S17
WS #123 Responsible AI in Security Governance Risks and Innovation — Drew emphasizes that governance is not something that can be added after the fact as an afterthought. Instead, it needs …
S18
What is it about AI that we need to regulate? — TheWS #123emphasized that”governance is not something that can be added on after the fact. It’s not an afterthought. It …
S19
AI Meets Cybersecurity Trust Governance & Global Security — AI-related risk is really no different. And third, framing privacy and encryption as tradeoffs against security ultimat…
S20
Advancing Scientific AI with Safety Ethics and Responsibility — “So we consider the distribution aspects of the data and models.”[122]. Artificial intelligence | Monitoring and measur…
S21
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Yeah, thanks, Steve. Very well covered. If I can add just a few more points. I think one of the challenges we see is cop…
S22
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — But as far as the regulated entity. As far as the regulated entity is dealing with the customers are concerned, we would…
S23
WS #205 Contextualising Fairness: AI Governance in Asia — 4. Difficulty in “cleaning” biased data: Mueller argued that historical data inherently reflects past biases and cannot …
S24
Day 0 Event #171 Legalization of data governance — He Bo: Thank you. Good afternoon, everyone. I’m He Bo from China Academy. Academy of Information and Communication T…
S25
EU AI Act (Commission proposal) — (44) High data quality is essential for the performance of many AI systems, especially when techniques involving the tra…
S26
https://dig.watch/event/india-ai-impact-summit-2026/medtech-and-ai-innovations-in-public-health-systems — So as we know that health is a state subject, so ultimately government of India works in collaboration with the state go…
S27
Panel Discussion Data Sovereignty India AI Impact Summit — Compute infrastructure must be within national control as it processes, stores data and builds models, but can use forei…
S28
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — The conversation addressed critical questions about technological sovereignty and long-term sustainability. Kumar distin…
S29
Building Indias Digital and Industrial Future with AI — Another thing I mean in February 2019, 7 years back we had something called draft e -commerce policy. Now the tagline of…
S30
Next Steps for Digital Worlds — In conclusion, the Metaverse and virtual reality offer exciting possibilities for connectivity and advancements in vario…
S31
The Battle for Chips — Dependence on Taiwan’s chip manufacturing and its complex international supply chain poses risks to the global economy. …
S32
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — “And that is something which has resulted into that 38 ,000 GPUs, which government is talking about, the shared compute …
S33
AI data centre boom sparks incentives and pushback — The explosive growth of AI and cloud computing hasignited a data centre building boomacross the United States, with stat…
S34
Deepfake and AI fraud surges despite stable identity-fraud rates — According to the 2025 Identity Fraud Report by verification firm Sumsub, the global rate of identity fraud has declined …
S35
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — So one of the key application, key product what we have developed is Fraud Pro. What it does, it actually detects the fr…
S36
Responsible AI in India Leadership Ethics & Global Impact — “I think we have to start slowly, ensure the accuracy can be a little lower, but the false positive, which is a genuine …
S37
Boosting women digital entrepreneurship: Bridging the gender financing gap (UNCTAD) — Overall, the analysis provides valuable insights into the significance of MSMEs, the challenges faced by MSMEs in access…
S38
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Sharma identified AI’s transformative potential in financial services, arguing that “access to credit creates wealth.” H…
S39
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S40
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — In the document and then in our trainings, we have four pillars. They’re all linked. The first pillar is context-based a…
S41
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S42
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And as we look at the journey on AI, which is just beginning for most of the world, what I see is if I look at the US, f…
S43
Building Population-Scale Digital Public Infrastructure for AI — This is a strategic concern for national security and autonomy, as very few countries can be completely digitally sovere…
S44
State of Play: AI Governance / DAVOS 2025 — The discussion highlighted tensions between regulation and innovation. While some advocated for light-touch governance t…
S45
Open Forum #33 Building an International AI Cooperation Ecosystem — Risk-based regulatory approaches are needed but implementation remains challenging
S46
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Armando Guío:Thank you very much. Thank you, Axel, for your kind introductions. And it’s a real pleasure to be here in s…
S47
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Participant 1 (Maureen) This comment reveals a fundamental practical barrier that challenges assumptions about regulato…
S48
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Dr. Yazeed Alabdulkarim:Yeah, regulations are basically a controversial topic because many believe that it’s challenging…
S49
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — The defensive applications include autonomous response systems, intelligent threat detection, and AI-powered security ag…
S50
Secure Finance Risk-Based AI Policy for the Banking Sector — “We have sandboxes in place for startups as well as established entities”[60]. “If your solution is not compatible acros…
S51
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — High consensus with strong implications for global sandbox development. The alignment suggests that despite different re…
S52
WS #35 Unlocking sandboxes for people and the planet — 3. European Union: Katerina Yordanova discussed the European context, particularly the AI Act’s sandbox requirements. Sh…
S53
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — However, financial requirements for entering a regulatory sandbox can be challenging for startups and small innovators. …
S54
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — Popelka provided a personal example, describing how her own voice had been deepfaked and used in an attempted attack dur…
S55
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S56
Open Forum #3 Cyberdefense and AI in Developing Economies — Artificial intelligence has fundamentally changed the speed and dynamics of cyber attacks, allowing threats that previou…
S57
The Innovation Beneath AI: The US-India Partnership powering the AI Era — However, Jeff Binder warned that hardware breakthroughs could potentially make entire data centres “almost instantly, at…
S58
Critical infrastructure — AI plays a pivotal role in safeguarding critical infrastructure systems. AI can strengthen the security of critical infr…
S59
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Furthermore, the analysis underscores the importance of considering regional regulations and governance in cybersecurity…
S60
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Evaluation of sandbox implementation is another crucial aspect discussed in the analysis. It emphasizes the need to meas…
S61
WS #100 Integrating the Global South in Global AI Governance — Martin points out the positive role of regulatory sandboxes in enabling safe experimentation with AI technologies. These…
S62
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — She emphasizes the need for addressing the unique risks associated with AI in their development and implementation, ensu…
S63
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — In conclusion, Brazil’s ongoing efforts to establish a comprehensive legal framework for AI regulation are commendable. …
S64
Comprehensive Report: European Approaches to AI Regulation and Governance — A particularly concerning dimension emerged around mental health impacts of AI use. An audience member reported people b…
S65
Secure Finance Risk-Based AI Policy for the Banking Sector — Ajay Kumar Chaudhary opened by highlighting India’s opportunity to lead in AI development while managing associated risk…
S66
WS #123 Responsible AI in Security Governance Risks and Innovation — Drew emphasizes that governance is not something that can be added after the fact as an afterthought. Instead, it needs …
S67
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Beyond safety by design, companies need governance from design embedded at every stage from ideation through deployment …
S68
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And as we look at the journey on AI, which is just beginning for most of the world, what I see is if I look at the US, f…
S69
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S70
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S71
Building Population-Scale Digital Public Infrastructure for AI — This is a strategic concern for national security and autonomy, as very few countries can be completely digitally sovere…
S72
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S73
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — One aspect of the proposed AI regulation in Brazil is its risk-based approach. Critics argue that this approach only con…
S74
WS #162 Overregulation: Balance Policy and Innovation in Technology — Paola mentions different regulatory approaches such as risk-based, human rights-based, principles-based, rules-based, an…
S75
State of Play: AI Governance / DAVOS 2025 — The discussion highlighted tensions between regulation and innovation. While some advocated for light-touch governance t…
S76
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implemen…
S77
WSIS Action Line C6: Digital Ecosystem Builders in action: Redefining the role of ICT regulators — Al Rejraje promotes regulatory sandboxes as a key tool for de-risking investment in emerging technologies. These sandbox…
S78
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Dr. Yazeed Alabdulkarim:Yeah, regulations are basically a controversial topic because many believe that it’s challenging…
S79
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — Hiroshi Honjo:Yes. So pretty much close to what Dr. Balushi said. So as a private company, we kind of state the AI gover…
S80
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — The defensive applications include autonomous response systems, intelligent threat detection, and AI-powered security ag…
S81
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S82
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S83
Empowering India & the Global South Through AI Literacy — The discussion maintained an optimistic and collaborative tone throughout, with panelists sharing positive field experie…
S84
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S85
Evolving Threat of Poor Governance / DAVOS 2025 — The tone was largely serious and analytical, with panelists offering thoughtful insights on complex governance challenge…
S86
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S87
WS #255 AI and disinformation: Safeguarding Elections — The tone of the discussion was largely analytical and cautiously optimistic. While speakers acknowledged serious risks a…
S88
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S89
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Mariana Rozo-Pan: Thank you, Sophie. And hi, everyone. Good morning, good afternoon, good evening. We are very excited a…
S90
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — The discussion maintained a professional, collaborative tone throughout, characterized by constructive problem-solving r…
S91
Dedicated stakeholder session (in accordance with agreedmodalities for the participation of stakeholders of 22 April 2022) — The overall tone was constructive and collaborative, with countries sharing their experiences implementing CBMs and offe…
S92
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S93
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — The tone is consistently optimistic, visionary, and inspirational throughout. The speaker maintains an enthusiastic and …
S94
AI, Data Governance, and Innovation for Development — The tone of the discussion was largely optimistic and solution-oriented. Speakers acknowledged significant challenges bu…
S95
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S96
Opening address of the co-chairs of the AI Governance Dialogue — – **Moderator**: Role/Title not specified beyond being a moderator for the event The co-chairs expressed their commitme…
S97
How to make AI governance fit for purpose? — – **Innovation focus** – Each representative emphasized avoiding over-regulation that could stifle technological advance…
S98
AI Policy Summit Opening Remarks: Discussion Report — And basically it shows that we can answer the call in a swift way when we need it. So what does it mean to be the AI gen…
S99
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Estímulo à geração de emprego e renda. This was the paradigm of the Declaration on Artificial Intelligence, which we app…
S100
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S101
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — The Prime Minister advocates for the responsible development and use of artificial intelligence. This argument stresses …
S102
Ethical AI_ Keeping Humanity in the Loop While Innovating — Absolutely. And it’s about having these different entities around the table, but also having different governments and h…
S103
GOVERNING AI FOR HUMANITY — – Discrimination and unfair treatment of groups, including based on individual or group traits, such as gender, group is…
S104
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — India’s approach to Digital Public Infrastructure (DPI) emphasizes the importance of civil society and citizen engagemen…
S105
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — However, there are concerns that need to be addressed when implementing DPI. One major concern is the risk of exclusion …
S106
Agentic AI in Focus Opportunities Risks and Governance — Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have be…
S107
AI in 2026: Learning to live with powerful systems — Initiatives that emphasisehuman-centred governanceremind us that AI should serve human flourishing, not redefine it. Thi…
S108
Skilling and Education in AI — Create stackable, modular learning systems that can adapt to changing requirements rather than fixed long-term programs
S109
Agents of Change AI for Government Services & Climate Resilience — The minister says AI is moving beyond simple question answering toward agents that can act autonomously. This marks a sh…
S110
Shaping the Future AI Strategies for Jobs and Economic Development — Now, this session hits squarely within the summit’s trusted AI pillar, and deliberately so. Because trust is no longer a…
S111
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Parliaments are pivotal to ensuring coherence between domestic legislation, established human rights, and evolving inter…
S112
AI for Democracy_ Reimagining Governance in the Age of Intelligence — I say this because the theme of this session, AI for Democracy, cuts to the heart of the matter. We are not simply debat…
S113
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S114
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S115
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ajay Kumar Chaudhary
21 arguments · 136 words per minute · 2451 words · 1075 seconds
Argument 1
Governance pillars: proportionality, fairness, explainability, accountability (Ajay Kumar Chaudhary)
EXPLANATION
The speaker outlines four core pillars that should guide AI governance in finance: proportionality (risk‑based intensity), fairness and non‑discrimination, explainability and transparency, and clear accountability.
EVIDENCE
He enumerates the four pillars, stating that governance intensity should be risk-based (proportionality) [50-51], that fairness and non-discrimination are essential [52], that explainability and transparency must be ensured [53], and that accountability must be clearly defined [54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for proportional, fair, transparent and accountable AI governance is echoed in the EU AI Act’s emphasis on data quality and non-discrimination [S25] and in calls for glass-box transparency for customers [S22]; overall embedded governance is highlighted in the Secure Finance policy discussion [S1].
MAJOR DISCUSSION POINT
Governance pillars: proportionality, fairness, explainability, accountability
DISAGREED WITH
Sanjeev Sanyal
Argument 2
Governance must be built‑in by design, not an after‑thought (Ajay Kumar Chaudhary)
EXPLANATION
AI governance should be embedded into the system from the outset rather than added later as a compliance overlay.
EVIDENCE
He stresses that governance cannot be an overlay applied after innovation has been scaled and must be embedded by design [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources stress that AI governance cannot be an after-thought and must be integrated throughout the lifecycle [S17][S18].
MAJOR DISCUSSION POINT
Governance must be built‑in by design, not an after‑thought
Argument 3
Continuous oversight to monitor model drift and bias (Ajay Kumar Chaudhary)
EXPLANATION
AI models need ongoing monitoring throughout their lifecycle to detect drift, unintended bias, or feedback loops, especially as data patterns evolve.
EVIDENCE
He notes that intelligent systems must be evaluated across economic cycles, stressed against extreme scenarios, and continuously overseen to guard against drift, bias, or reinforcing feedback loops [68-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Continuous monitoring of data and model distribution shifts is identified as essential for safety and bias detection [S20]; broader AI systems rely on ongoing oversight of large data sets [S5].
MAJOR DISCUSSION POINT
Continuous oversight to monitor model drift and bias
Argument 4
AI can expand financial inclusion but must avoid reinforcing historical biases (Ajay Kumar Chaudhary)
EXPLANATION
While AI offers opportunities to broaden access to finance, it must be designed to prevent the perpetuation of existing structural inequalities.
EVIDENCE
He warns that models trained on narrow, urban-centric data can misclassify or exclude segments that digital finance aims to integrate, thereby risking structural inequalities [42-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bias in historical data and the need for diverse datasets are highlighted as key to avoiding exclusion [S23]; AI-driven credit scoring can improve inclusion when designed responsibly [S38]; inclusion cannot be assumed without safeguards [S1].
MAJOR DISCUSSION POINT
AI can expand financial inclusion but must avoid reinforcing historical biases
Argument 5
Require representative training data, periodic impact audits, and redress mechanisms (Ajay Kumar Chaudhary)
EXPLANATION
To ensure fair outcomes, AI systems should use diverse datasets, undergo regular impact assessments, and provide mechanisms for individuals to seek clarification or redress.
EVIDENCE
He calls for representativeness in training datasets, periodic impact audits, community-level feedback, and institutional mechanisms that allow individuals to seek clarification and redress where automated decisions affect their financial standing [91-93].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act stresses high-quality, representative data and impact assessments for high-risk systems [S25]; transparency and redress are advocated as glass-box approaches for customers [S22]; expanding datasets is recommended to mitigate bias [S23].
MAJOR DISCUSSION POINT
Require representative training data, periodic impact audits, and redress mechanisms
Argument 6
India must own its data and develop domestic AI infrastructure to safeguard sovereignty (Ajay Kumar Chaudhary)
EXPLANATION
The speaker argues that strategic autonomy requires India to retain ownership of its data and build home‑grown semiconductor, cloud, and model capabilities.
EVIDENCE
He describes a five-layer AI stack, highlighting that over 90 % of advanced chips are controlled by a single firm, three firms dominate cloud capacity, and a handful command foundation models, underscoring the need for domestic innovation to protect economic sovereignty [100-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data sovereignty and domestic compute capacity are emphasized as strategic priorities for India’s AI stack [S27][S28][S32].
MAJOR DISCUSSION POINT
India must own its data and develop domestic AI infrastructure to safeguard sovereignty
Argument 7
Dependence on foreign chips, cloud providers, and foundation models threatens economic security (Ajay Kumar Chaudhary)
EXPLANATION
Reliance on external suppliers for critical AI components creates systemic vulnerabilities that could affect financial stability and national security.
EVIDENCE
He notes that one firm controls more than 90 % of advanced chips, three dominate cloud capacity, and a few control foundation models, which could threaten financial stability and economic sovereignty [100-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concentration of advanced chips and cloud services creates systemic risk, as noted in global chip supply analyses and Indian sovereignty discussions [S31][S27].
MAJOR DISCUSSION POINT
Dependence on foreign chips, cloud providers, and foundation models threatens economic security
Argument 8
Government incentives for data‑centres and home‑grown AI models to build a resilient stack (Ajay Kumar Chaudhary)
EXPLANATION
The speaker suggests that policy measures such as tax holidays for data‑centre construction can help develop a domestic AI ecosystem.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy incentives for data-centre construction are discussed in the context of rapid AI-driven infrastructure growth [S33].
MAJOR DISCUSSION POINT
Government incentives for data‑centres and home‑grown AI models to build a resilient stack
Argument 9
AI can cut fraud losses in high‑value payment environments by up to 30 percent
EXPLANATION
Ajay highlights that AI‑enabled detection systems can significantly reduce certain categories of fraud, improving the security of large‑value transactions.
EVIDENCE
He notes that AI-enabled detection can reduce certain categories of fraud losses by up to 25 to 30 percent in high-value payment environments, citing the experience of NPCI [35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Generative AI-enabled fraud detection tools have shown significant loss reductions, with industry reports confirming up to 30 % improvement [S34][S35][S36].
MAJOR DISCUSSION POINT
AI-driven reduction of fraud losses
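As a purely illustrative companion to this point, the sketch below shows how an unsupervised anomaly detector could flag unusual high-value payments for human review. It uses scikit-learn's IsolationForest on made-up features (log amount, hour of day, payee novelty) with an assumed 1 % contamination rate; it is not NPCI's system or the method the speaker cites.

```python
# Toy sketch of AI-assisted fraud screening for high-value payments.
# Features and the contamination rate are assumptions chosen for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per payment: log(amount), hour of day, payee novelty score.
routine = rng.normal(loc=[10.0, 14.0, 0.1], scale=[1.0, 4.0, 0.1], size=(5000, 3))

# Train on historical routine traffic; roughly 1% of payments assumed anomalous.
detector = IsolationForest(contamination=0.01, random_state=0).fit(routine)

incoming = np.array([
    [10.2, 15.0, 0.05],   # looks routine
    [14.5,  3.0, 0.95],   # unusually large, off-hours, unfamiliar payee
])
flags = detector.predict(incoming)   # -1 = anomalous, 1 = normal
print(flags)                         # anomalous payments would go to human review
```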
Argument 10
AI‑driven granular risk assessment can broaden credit access for MSMEs
EXPLANATION
By leveraging transaction histories, cash‑flow analytics and behavioural signals, AI can create more nuanced credit scores for micro, small and medium enterprises, reducing reliance on traditional collateral‑heavy models.
EVIDENCE
He explains that transaction-level data, cash-flow analytics and behavioural indicators can provide nuanced insight into repayment capacity, especially for MSMEs that are currently outside the traditional credit framework, thereby reducing dependence on heavy collateral models [82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-based credit scoring that leverages transaction and behavioural data is highlighted as a way to expand MSME financing [S38].
MAJOR DISCUSSION POINT
AI enhancing financial inclusion for MSMEs
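A minimal sketch of what "transaction-level data and cash-flow analytics" could look like in practice is given below. The field names and derived features are assumptions chosen for illustration; a real MSME scorecard would be far richer and would be subject to the fairness audits discussed above.

```python
# Sketch: turning transaction-level data into cash-flow features for MSME credit
# assessment. Field names are illustrative, not any lender's actual schema.
import pandas as pd

def cash_flow_features(txns: pd.DataFrame) -> dict:
    """txns: one row per transaction with a datetime 'date' and a signed 'amount'
    (positive = inflow, negative = outflow)."""
    monthly = txns.groupby(txns["date"].dt.to_period("M"))["amount"].sum()
    inflows = txns.loc[txns["amount"] > 0, "amount"]
    return {
        "avg_monthly_net_flow": monthly.mean(),
        "net_flow_volatility":  monthly.std(),
        "positive_month_share": (monthly > 0).mean(),
        "avg_inflow_ticket":    inflows.mean(),
    }

# Toy ledger; features like these could feed a scorecard in place of
# collateral-heavy checks.
ledger = pd.DataFrame({
    "date":   pd.to_datetime(["2025-01-05", "2025-01-20", "2025-02-03", "2025-02-25"]),
    "amount": [50000, -20000, 65000, -30000],
})
print(cash_flow_features(ledger))
```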
Argument 11
AI should be treated as a core financial utility and subject to the same resilience standards as other critical infrastructure
EXPLANATION
Ajay argues that AI is not a peripheral add‑on but a systemic component of the financial system, requiring the same level of accountability, transparency and robustness as any critical financial service.
EVIDENCE
He states that AI must be understood as a component of financial infrastructure that is systemically relevant and should be subject to the same standards of resilience, governance and accountability expected of any critical financial utility [44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI is described as a systemic component of financial infrastructure requiring the same resilience standards as other utilities [S1][S5].
MAJOR DISCUSSION POINT
AI as critical financial infrastructure requiring robust governance
Argument 12
AI markedly improves operational efficiency and precision across the entire financial value chain.
EXPLANATION
By automating complex analyses and decision‑making, AI reduces processing time, cuts errors, and enhances the accuracy of credit assessments, fraud detection, and other core financial functions.
EVIDENCE
He describes how AI models analyze transaction histories and behavioral signals to generate granular borrower assessments, how AI detects anomalous activities within milliseconds, and how the diffusion of AI across the financial value chain enhances efficiency and precision [33-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role in automating analyses and enhancing precision across financial services is noted in discussions of AI-enabled platforms built on large data foundations [S5][S38].
MAJOR DISCUSSION POINT
AI-driven efficiency and precision in finance
Argument 13
AI strengthens compliance and regulatory reporting by automating pattern recognition and real‑time monitoring.
EXPLANATION
Advanced AI tools can continuously scan transactions, identify compliance breaches, and generate timely reports, reducing the burden on human staff and improving regulatory oversight.
EVIDENCE
He notes that compliance functions increasingly rely on automated pattern recognition and that adaptive cybersecurity models respond to emerging threats in real time, illustrating AI’s role in enhancing regulatory processes [35-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven compliance tools that continuously scan transactions and generate reports are highlighted as enhancing regulatory oversight [S19].
MAJOR DISCUSSION POINT
AI‑enabled compliance and regulatory reporting
Argument 14
Trust requires explainability and transparency in AI systems that affect credit decisions
EXPLANATION
Ajay stresses that AI models influencing credit access must be understandable to maintain public trust, warning against opaque black‑box approaches.
EVIDENCE
He states that AI systems cannot function as opaque black boxes, especially when they influence access to credit or flag financial behavior, highlighting the need for transparency and explainability [26-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for glass-box transparency and clear explanations of automated credit decisions are echoed in governance guidelines [S22][S16][S25].
MAJOR DISCUSSION POINT
Trust and transparency in AI-driven credit decisions
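For intuition, the sketch below shows the kind of "glass-box" explanation a simple linear scorecard can produce: each feature's signed contribution to the final score. The feature names and weights are invented for the example and have no relation to any deployed credit model.

```python
# Glass-box explanation sketch for a linear credit score.
# Feature names, weights, and the bias term are purely illustrative.
FEATURES = {"monthly_net_flow": 0.8, "payment_delays": -1.5, "years_trading": 0.4}
BIAS = -0.2

def explain_decision(applicant: dict) -> dict:
    """Return the score plus each feature's signed contribution to it."""
    contributions = {name: weight * applicant[name] for name, weight in FEATURES.items()}
    return {"score": BIAS + sum(contributions.values()), "contributions": contributions}

result = explain_decision({"monthly_net_flow": 1.2, "payment_delays": 2.0, "years_trading": 3.0})
print(result)
# The contributions show *why* the score is what it is, which is the explanation a
# customer or supervisor could be given instead of an opaque "declined".
```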
Argument 15
Proactive embedded governance is needed to prevent invisible systemic risk accumulation
EXPLANATION
Ajay stresses that AI governance should be proactive, ensuring that systemic risks do not build up unnoticed as AI systems scale, rather than reacting after problems emerge.
EVIDENCE
He states that the objective is not to slow innovation, but to ensure that systemic risk does not accumulate invisibly, highlighting the need for forward-looking governance measures [62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proactive, lifecycle-wide governance is advocated to avoid hidden risk build-up [S17][S18][S19].
MAJOR DISCUSSION POINT
Proactive governance to prevent hidden systemic risk
Argument 16
Embedded AI governance is a strategic imperative, not a regulatory burden
EXPLANATION
Ajay argues that embedding governance into AI systems should be seen as a strategic necessity that sustains innovation, preserves trust, and protects system stability, rather than being treated as an additional regulatory hurdle.
EVIDENCE
He declares that embedded governance is not a regulatory burden but a strategic imperative that ensures sustainable innovation, preserves trust, and protects system stability [129-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding governance is framed as a strategic necessity for sustainable innovation and system stability [S1].
MAJOR DISCUSSION POINT
Embedded governance as a strategic imperative
Argument 17
Operational concentration risk of AI in finance requires diversification and resilience planning
EXPLANATION
Ajay warns that as AI becomes embedded across the financial value chain, it can create concentration risk where a few AI providers or models dominate critical functions, potentially threatening systemic stability. He calls for diversification of AI providers and resilience measures to mitigate this risk.
EVIDENCE
He identifies operational concentration risk as one of the four key risk dimensions of AI in finance and stresses the need for diversification and resilience planning to safeguard continuity [71-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Operational concentration risks from a few AI providers are highlighted as a systemic threat, underscoring the need for diversification [S31][S27].
MAJOR DISCUSSION POINT
Operational concentration risk of AI in finance
Argument 18
Robust data governance—integrity, consent, purpose limitation, and minimisation—is foundational for trustworthy AI in finance
EXPLANATION
Ajay emphasizes that AI models rely on high‑quality data and that data must be governed through strict integrity checks, explicit consent, clear purpose limitation, and minimisation to prevent misuse and protect individuals.
EVIDENCE
He outlines data-governance principles such as integrity, consent management, purpose limitation and minimisation as foundational for financial AI, noting that financial data reflects livelihoods and economic participation [75-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
High-quality, purpose-limited data and consent management are identified as core requirements for trustworthy AI systems [S25][S27].
MAJOR DISCUSSION POINT
Foundational data‑governance principles for financial AI
Argument 19
AI integration with cybersecurity creates both defensive benefits and new attack vectors, requiring anticipatory safeguards against adversarial AI
EXPLANATION
Ajay points out that while AI can strengthen cyber‑defence mechanisms, it also lowers barriers for adversaries to launch sophisticated attacks, so regulators and institutions must anticipate and mitigate adversarial AI threats.
EVIDENCE
He notes that cybersecurity risks are amplified in the AI environment, with AI strengthening defences but also being leveraged by adversaries, and calls for anticipating adversarial AI and building stronger defences [78-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI can strengthen cyber-defence while also lowering attack barriers, necessitating anticipatory safeguards [S19][S34].
MAJOR DISCUSSION POINT
Dual‑use nature of AI in cybersecurity and need for anticipatory safeguards
Argument 20
Effective AI governance requires interdisciplinary teams and integration into enterprise risk‑management frameworks
EXPLANATION
Ajay argues that governing AI safely cannot be the sole domain of any single function; it demands collaboration among technology, risk, compliance, legal, and business experts, and should be embedded within an institution’s ERM system to strengthen overall resilience.
EVIDENCE
He states that effective governance needs interdisciplinary capability, bringing tech, risk, compliance and legal experts together with business leaders, and that institutions integrating AI governance into their ERM framework strengthen resilience [81-84].
MAJOR DISCUSSION POINT
Interdisciplinary capability and ERM integration for AI governance
Argument 21
AI should be layered onto India’s existing digital public infrastructure to leverage its scale, interoperability, and trust, ensuring AI‑driven services inherit the robustness of systems like UPI and digital identity.
EXPLANATION
The speaker stresses that AI is not arriving in isolation but is being superimposed on the digital foundation that already supports payments, credit, risk management, supervisory frameworks and cybersecurity. By embedding AI within these trusted platforms, the sector can benefit from the proven inclusion, efficiency and reliability of the existing infrastructure.
EVIDENCE
He notes that AI integrates with payment systems, credit and risk-management platforms, supervisory frameworks and cybersecurity architecture that already operate at national scale, describing this convergence as a structural shift [18-21]. He also references India’s decade-long experience of population-scale digital public infrastructure driving inclusion, efficiency and trust, positioning AI as the next layer on this foundation [14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building AI on top of established digital public infrastructure is recommended to inherit proven inclusion and trust benefits [S5].
MAJOR DISCUSSION POINT
AI integration with existing digital public infrastructure
Sanjeev Sanyal
9 arguments · 156 words per minute · 3299 words · 1266 seconds
Argument 1
Ex‑ante responsibility and “skin‑in‑the‑game” for algorithm creators (Sanjeev Sanyal)
EXPLANATION
Responsibility for AI outcomes should be assigned before deployment, ensuring that creators and operators have direct accountability for any failures.
EVIDENCE
He emphasizes that ex-ante decisions must identify who will be hauled up when things go wrong, creating “skin-in-the-game” for algorithm creators, board members, and senior management [180-185].
MAJOR DISCUSSION POINT
Ex‑ante responsibility and “skin‑in‑the‑game” for algorithm creators
Argument 2
Risk‑based approach cannot predict unknown AI risks; may be too stringent or too lax (Sanjeev Sanyal)
EXPLANATION
Because AI is an emergent technology, risk‑based regulation cannot reliably anticipate unknown hazards, leading either to over‑regulation or insufficient safeguards.
EVIDENCE
He argues that you cannot put AI into any real risk bucket because its emergent nature makes ex-ante assessment almost impossible, risking either overly stringent or overly lax regulation [160-166].
MAJOR DISCUSSION POINT
Risk‑based approach cannot predict unknown AI risks; may be too stringent or too lax
DISAGREED WITH
Ajay Kumar Chaudhary
Argument 3
European risk‑based system risks stifling innovation; US relies on ex‑post tort penalties (Sanjeev Sanyal)
EXPLANATION
The European model’s risk‑based framework may choke innovation, whereas the US relies on post‑incident tort penalties to enforce accountability.
EVIDENCE
He contrasts the European risk-based approach, which could strangle progress, with the US model that uses ex-post tort law and large fines as a deterrent after harms occur [161-168].
MAJOR DISCUSSION POINT
European risk‑based system risks stifling innovation; US relies on ex‑post tort penalties
Argument 4
Need for clear ex‑ante accountability rather than post‑hoc punishment (Sanjeev Sanyal)
EXPLANATION
Regulatory frameworks should define responsibility before AI systems are deployed, avoiding reliance on reactive penalties after failures.
EVIDENCE
He reiterates that ex-ante clarity on who is responsible is essential, so that accountability is built into the system rather than imposed after the fact [180-185].
MAJOR DISCUSSION POINT
Need for clear ex‑ante accountability rather than post‑hoc punishment
Argument 5
Compartmentalized AI reduces systemic bias and concentration risk (Sanjeev Sanyal)
EXPLANATION
Separating AI applications into bounded, compartmentalized environments can limit systemic bias, reduce concentration risk, and improve energy efficiency.
EVIDENCE
He advocates for compartmentalized AI, warning against an “AI of everything” and suggesting that bounded, compartmentalized AI solves problems more efficiently while limiting emergent risks such as bias and concentration [238-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Segregating AI applications and expanding diverse datasets are suggested to limit bias and concentration risk [S23][S31].
MAJOR DISCUSSION POINT
Compartmentalized AI reduces systemic bias and concentration risk
Argument 6
Mandatory explainability audits, similar to financial audits, should be required for high‑impact AI systems
EXPLANATION
Sanjeev proposes that AI models exceeding a certain systemic impact threshold undergo a chartered AI audit to ensure transparency and accountability, with the possibility of shutdown if explainability cannot be provided.
EVIDENCE
He suggests that companies with AI beyond a threshold should have a chartered AI audit, comparable to regular financial audits, and that if they cannot explain their results they should be shut down, mirroring existing audit practices for listed companies [250-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act mandates conformity assessments and audits for high-risk AI, providing a model for mandatory explainability audits [S25].
MAJOR DISCUSSION POINT
Chartered AI audits for high‑risk models
Argument 7
AI‑generated content raises novel copyright ownership questions that require a new judicial framework
EXPLANATION
Sanjeev raises fundamental questions about who owns innovations created by AI—whether it is the prompt author, the data owner, or the algorithm creator—and calls for the development of judicial mechanisms to address these issues.
EVIDENCE
He lists a series of questions concerning ownership of AI-generated innovation, prompting the need for a judicial system to handle such disputes, including who owns the prompt, the data, or the algorithm itself [317-322].
MAJOR DISCUSSION POINT
Need for judicial mechanisms to resolve AI copyright ownership
Argument 8
AI systems should incorporate kill‑switches and compartmentalized “Chinese walls” to prevent systemic failures.
EXPLANATION
Embedding hard shutdown mechanisms and clear separations between AI applications limits the spread of errors or malicious behavior, safeguarding the broader financial system.
EVIDENCE
He argues that AI must have system-switch-off buttons and Chinese-wall style separations to avoid cascading failures, warning against an “AI of everything” approach and advocating bounded, compartmentalized deployments [238-244].
MAJOR DISCUSSION POINT
Kill‑switches and compartmentalization for AI safety
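The "switch-off button" idea can be illustrated with a very small wrapper around a bounded AI task, as sketched below. The kill-switch flag, task function, and fallback are hypothetical placeholders; production systems would pair such a mechanism with the audit trails and jurisdictional boundaries discussed elsewhere in the panel.

```python
# Minimal sketch of a kill-switch around a bounded AI task, in the spirit of
# "switch-off buttons" and compartmentalised deployments. All names are placeholders.
import threading

# Global kill switch: operations staff (or a supervisory trigger) can set this
# to halt the AI path without touching other, unrelated deployments.
KILL_SWITCH = threading.Event()

def guarded_inference(model_fn, request, fallback_fn):
    """Route the request to a manual/rule-based fallback when the switch is set."""
    if KILL_SWITCH.is_set():
        return fallback_fn(request)   # failure stays compartmentalised and local
    return model_fn(request)

# Example: once KILL_SWITCH.set() is called, every new request is diverted to the
# fallback path; clearing the switch restores the AI path.
```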
Argument 9
Avoid bureaucratic, overly risk‑based regulation; adopt creative, flexible approaches
EXPLANATION
Sanjeev cautions that a bureaucratic risk‑based system could stifle innovation and argues that regulators need to be inventive and adaptable when governing emergent AI technologies.
EVIDENCE
He says “we need to be very, very careful that we don’t end up with a bureaucratic risk-based system” and calls for creativity in regulation [238-240].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adapting regulations to local contexts rather than copying foreign models is emphasized as essential for flexibility [S21].
MAJOR DISCUSSION POINT
Need for flexible, non‑bureaucratic AI regulation
Murlidhar Manchala
16 arguments · 0 words per minute · 0 words · 1 seconds
Argument 1
Supervisory relief for firms that implement robust controls (Murlidhar Manchala)
EXPLANATION
Regulators should adopt a lenient supervisory stance for entities that have put effective guardrails and processes in place, treating compliance as an instrument rather than a punitive measure.
EVIDENCE
He notes that the report suggests entities with strong guardrails should receive a lenient supervisory approach and should not be treated as higher-risk, reserving that leniency for firms that have implemented robust controls [204-206].
MAJOR DISCUSSION POINT
Supervisory relief for firms that implement robust controls
Argument 2
Regulators should remain flexible and adapt as AI evolves (Murlidhar Manchala)
EXPLANATION
Regulatory frameworks need to be dynamic, incorporating continuous audits, transparency, and the ability to shut down systems when necessary, mirroring practices used in financial market oversight.
EVIDENCE
He describes a framework that includes audits, transparency, explainability, and mechanisms to shut down systems when they spiral out, drawing parallels with stock-market regulation and emphasizing the need for flexible, adaptive oversight [172-176].
MAJOR DISCUSSION POINT
Regulators should remain flexible and adapt as AI evolves
Argument 3
Transparency (“glass‑box”) for customers to understand automated decisions (Murlidhar Manchala)
EXPLANATION
AI‑driven services should be presented as a “glass‑box” rather than a “black‑box,” ensuring customers can see and understand the logic behind automated outcomes.
EVIDENCE
He stresses that customers should have transparent, understandable decisions, describing the ideal as a “glass-box” rather than a black-box system [204-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulators and industry advocates call for glass-box AI that lets customers see decision logic [S22].
MAJOR DISCUSSION POINT
Transparency (“glass‑box”) for customers to understand automated decisions
Argument 4
Risk underestimation must be tackled through proactive governance rather than reactive measures
EXPLANATION
Murlidhar warns that the industry may be underestimating AI‑related risks and argues that only a strong, forward‑looking governance framework can mitigate these unknowns.
EVIDENCE
He states that the industry may be underestimating the risk and that it can be addressed only through governance at this stage of the technology's emergence, highlighting the need for robust oversight [296-302].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proactive, lifecycle-wide AI governance is recommended to address hidden risks before they materialise [S17].
MAJOR DISCUSSION POINT
Addressing risk underestimation via proactive governance
Argument 5
Sandbox mechanisms should be expanded to provide monitoring and support, not only to address compliance breaches
EXPLANATION
Murlidhar explains that the current sandbox is invoked when entities suspect regulatory breaches, but proposes a new sandbox that also offers monitoring, data, and tooling to help innovators manage risk proactively.
EVIDENCE
He describes the existing interoperable sandbox, used when regulated entities feel a product may violate regulations, and then suggests a future sandbox that would provide monitoring, compute, data, and tools to support innovation while managing risk [407-411].
MAJOR DISCUSSION POINT
Extending sandbox use for proactive risk monitoring and support
Argument 6
Mandatory incident reporting and manual override mechanisms are essential for AI-driven financial services to ensure rapid remediation of failures.
EXPLANATION
Regulators should require firms to establish clear incident reporting procedures and provide manual controls that can intervene when AI systems behave unexpectedly, thereby protecting customers and maintaining trust.
EVIDENCE
Murlidhar explains that once robust governance and processes are in place, there should be incident reporting mechanisms and manual overrides to address any aberrations in AI operations, emphasizing that these safeguards are part of a right process for handling risks [204-206].
MAJOR DISCUSSION POINT
Incident reporting and manual overrides for AI systems
Argument 7
Regulators should allow first‑time lapses without punitive action when robust guardrails are in place
EXPLANATION
Murlidhar proposes that if firms have implemented comprehensive controls, regulators should treat initial failures as learning opportunities rather than imposing strict penalties.
EVIDENCE
He mentions that supervision should not treat such firms as posing a systemically greater risk and that regulators "should allow first time lapse" when appropriate controls exist [204-206].
MAJOR DISCUSSION POINT
First‑time lapse tolerance with strong governance
Argument 8
Supervisory relief should be granted only when firms can demonstrably document robust AI guardrails and governance processes, making the relief an instrument of risk mitigation rather than a blanket leniency.
EXPLANATION
Regulators should require firms to provide clear, auditable evidence of the controls they have put in place before offering a lenient supervisory stance, ensuring that relief is tied to concrete governance measures.
EVIDENCE
He notes that the report suggests entities with all guardrails in place should receive a lenient supervisory approach and should not be treated as posing a systemically greater risk, implying the need for documented processes to qualify for relief [204-206].
MAJOR DISCUSSION POINT
Conditional supervisory relief based on documented AI governance measures
Argument 9
Regulators should formalise a proactive incident‑reporting and manual‑override framework for AI‑driven financial services to ensure rapid remediation of failures.
EXPLANATION
A mandatory system for reporting AI incidents and providing human‑in‑the‑loop overrides can help contain errors, protect customers, and maintain trust in AI‑enabled financial products.
EVIDENCE
He describes the need for incident reporting mechanisms and manual overrides as part of the right process for handling AI risks, emphasizing that these safeguards are essential once robust governance is in place [204-206].
MAJOR DISCUSSION POINT
Mandatory incident reporting and manual overrides for AI systems
Argument 10
Board and senior management must develop AI literacy to understand system logic, limitations and vulnerabilities.
EXPLANATION
Murlidhar stresses that senior leaders need to be knowledgeable about how AI models work, their constraints, and potential risks so they can oversee deployment responsibly.
EVIDENCE
He notes that the potential vulnerability of AI systems requires board and senior management to understand the logic, limitations, and other aspects of the technology, emphasizing the need for deep AI literacy [204-206].
MAJOR DISCUSSION POINT
AI literacy for senior leadership
Argument 11
AI guardrails should be treated as an enabling instrument rather than a punitive compliance checkbox.
EXPLANATION
He argues that the purpose of guardrails is to facilitate safe innovation, acting as a tool that supports firms rather than merely imposing penalties.
EVIDENCE
Murlidhar states that the guardrails should be seen as an instrument, implying they enable innovation while ensuring safety [204-206].
MAJOR DISCUSSION POINT
Guardrails as enabling instrument
Argument 12
Regulatory response should prioritize root‑cause analysis and remediation before imposing supervisory actions.
EXPLANATION
He suggests that when a firm experiences a lapse, regulators should first require a thorough investigation and corrective measures, using the findings to guide any supervisory approach.
EVIDENCE
He mentions that entities performing root-cause analysis and addressing problems should receive a lenient supervisory approach, indicating that remediation should precede enforcement [204-206].
MAJOR DISCUSSION POINT
Root‑cause analysis before supervisory action
Argument 13
Introduce a formal award to recognize financial institutions that demonstrate exemplary AI governance and risk management.
EXPLANATION
Murlidhar proposes that regulators create an award to honor firms that excel in implementing AI guardrails and robust governance processes, thereby incentivizing best practices in the financial sector.
EVIDENCE
He mentions that the report proposes an award for finance that would highlight specific work done in the area, indicating a proposal to recognize and encourage strong AI governance efforts [204-206].
MAJOR DISCUSSION POINT
Recognition award for exemplary AI governance in finance
Argument 14
Algorithmic efficiency must not compromise equitable opportunity; AI systems should balance performance with fairness.
EXPLANATION
He emphasizes that while AI can improve operational efficiency, it should not do so at the cost of equity, urging that AI-driven financial services maintain inclusive outcomes.
EVIDENCE
He explicitly states that in financial AI, algorithmic efficiency should not compromise equitable opportunity, underscoring the need to preserve fairness alongside efficiency [204-206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing efficiency with fairness requires diverse data and bias mitigation, as discussed in fairness-focused AI governance literature [S23].
MAJOR DISCUSSION POINT
Balancing algorithmic efficiency with equitable opportunity
Argument 15
AI should not be automatically classified as a higher‑risk category; risk classification must be proportionate and evidence‑based
EXPLANATION
Murlidhar cautions against overreacting by treating AI as a high-risk area by default, arguing that regulators should apply a proportionate, risk-based approach that reflects the actual systemic impact of each AI application.
EVIDENCE
He notes that the regulator should not overreact by labeling AI as a higher-risk area and that such over-classification is recognized as something to avoid [204-206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act advocates proportionate risk classification based on actual systemic impact, warning against blanket high-risk labeling [S25].
MAJOR DISCUSSION POINT
Proportional risk classification for AI systems
Argument 16
Regulators should develop a standardized AI incident‑reporting framework to ensure consistent, timely disclosure across financial institutions
EXPLANATION
Murlidhar emphasizes the need for clear, uniform incident‑reporting mechanisms and manual overrides, suggesting that a standardized framework would enable rapid remediation and maintain customer trust.
EVIDENCE
He describes the existence of incident-reporting mechanisms and manual overrides as essential safeguards and implies that a formal, consistent process across entities would improve oversight [204-206].
MAJOR DISCUSSION POINT
Standardized incident reporting for AI‑driven financial services
Praveen Kamat
5 arguments · 184 words per minute · 874 words · 283 seconds
Argument 1
Sandbox experimentation to test governance in a controlled environment (Praveen Kamat)
EXPLANATION
A dedicated sandbox can allow firms to pilot AI models under regulatory oversight, enabling risk‑based testing and iterative refinement before full deployment.
EVIDENCE
He explains that IFSC, being a clean-slate jurisdiction, can host sandboxes where AI pilots are tested, with risk caps and continuous monitoring, allowing regulators to tailor rules based on observed outcomes [192-199] and later [284-291].
MAJOR DISCUSSION POINT
Sandbox experimentation to test governance in a controlled environment
DISAGREED WITH
Murlidhar Manchala
Argument 2
IFSC as a clean‑slate jurisdiction to experiment with AI governance frameworks (Praveen Kamat)
EXPLANATION
The International Financial Services Centre (IFSC) offers a fresh regulatory environment without legacy constraints, making it suitable for innovative AI governance experiments.
EVIDENCE
He notes that IFSC was set up in 2015, built from scratch, and therefore provides “more leg room” and space to experiment without legacy baggage, positioning it as an ideal lab for AI governance [192-199].
MAJOR DISCUSSION POINT
IFSC as a clean‑slate jurisdiction to experiment with AI governance frameworks
Argument 3
Legal and cross‑currency constraints limit sandbox interoperability between IFSC and domestic regulators
EXPLANATION
Praveen points out that while technical sandbox integration exists, differences such as the prohibition of INR transactions in IFSC create legal barriers that hinder seamless experimentation across jurisdictions.
EVIDENCE
He explains that IFSC operates with 16 foreign currencies and does not permit INR transactions, which creates legal incompatibilities that affect sandbox rollout and limit cross-jurisdictional experimentation [399-402].
MAJOR DISCUSSION POINT
Legal and currency barriers to sandbox interoperability
Argument 4
IFSC’s comprehensive regulatory coverage across finance, capital markets, banking, insurance and pensions enables holistic AI governance.
EXPLANATION
Having a single jurisdiction that oversees multiple financial verticals allows coordinated AI policy, risk‑management standards, and cross‑sector supervision, fostering consistent governance.
EVIDENCE
He lists that IFSC has introduced verticals across finance, capital markets, banking, insurance, pensions and ancillary services, highlighting its breadth of regulatory authority [198-200].
MAJOR DISCUSSION POINT
Broad regulatory scope of IFSC supports integrated AI governance
Argument 5
AI governance in IFSC requires a gestation period; rapid scaling may compromise stability
EXPLANATION
Praveen likens building a financial centre to a long marathon, emphasizing that AI governance frameworks need time to mature before full deployment can be safely achieved.
EVIDENCE
He compares building a financial centre to a “45 kilometre marathon” and stresses the need for a gestation period before reaching critical mass for AI initiatives [196-199].
MAJOR DISCUSSION POINT
Importance of gradual development and gestation for AI governance in IFSC
Vikram Kishore Bhattacharya
4 arguments · 175 words per minute · 694 words · 236 seconds
Argument 1
Generative AI lowers attack barriers but does not change core security principles (Vikram Kishore Bhattacharya)
EXPLANATION
While generative AI makes it easier for malicious actors to craft phishing or credential‑stealing attacks, the fundamental cybersecurity controls remain unchanged.
EVIDENCE
He cites a 2025 report showing generative AI has lowered barriers for threat actors, yet stresses that the same principles (multi-factor authentication, strong passwords, regular updates, and system scanning) still apply [215-223].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Generative AI makes attacks easier, yet fundamental security controls such as MFA and patching remain essential [S34][S19].
MAJOR DISCUSSION POINT
Generative AI lowers attack barriers but does not change core security principles
DISAGREED WITH
Ajay Kumar Chaudhary
Argument 2
Organizations need an active AI‑in‑the‑loop security posture for faster detection and response (Vikram Kishore Bhattacharya)
EXPLANATION
Integrating AI into security operations can accelerate threat detection, automate scanning, and enable rapid decision‑making, but requires appropriate skill development.
EVIDENCE
He describes using AI in the loop to automate scanning, generate reports, and make timely value judgments, emphasizing the need for upskilling and awareness across banks and regulators [226-232].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding AI into security operations accelerates threat detection and response, as recommended in AI-enabled cybersecurity frameworks [S19].
MAJOR DISCUSSION POINT
Organizations need an active AI‑in‑the‑loop security posture for faster detection and response
Argument 3
Adoption of standards, third‑party audits, and upskilling are essential to manage AI‑related threats (Vikram Kishore Bhattacharya)
EXPLANATION
Compliance with recognized standards (e.g., ISO, NIST), independent audits, and continuous training are critical to mitigate AI‑driven cyber risks.
EVIDENCE
He recommends verification through standards like ISO or NIST, third-party audit reports, and extensive upskilling programs to ensure organizations can handle AI-related security challenges [224-233].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Compliance with standards (ISO/NIST) and independent audits, together with workforce upskilling, are highlighted as key to mitigating AI-driven cyber risks [S25][S19].
MAJOR DISCUSSION POINT
Adoption of standards, third‑party audits, and upskilling are essential to manage AI‑related threats
Argument 4
Financial institutions must shift from a passive to an active cybersecurity posture, proactively integrating AI to detect and respond to threats.
EXPLANATION
Rather than merely defending existing perimeters, firms should embed AI tools that continuously monitor, scan and automate response actions, complemented by upskilling programmes to maintain readiness.
EVIDENCE
He stresses that organizations need to become active participants in cybersecurity, using AI-in-the-loop for faster detection, automated scanning, and rapid value judgments, and calls for extensive upskilling and awareness across banks and regulators [225-233].
MAJOR DISCUSSION POINT
Proactive AI‑driven cybersecurity for financial services
Priyanka Jain
4 arguments · 110 words per minute · 1025 words · 555 seconds
Argument 1
India must craft a balanced, home‑grown AI governance model, learning from US, EU, and China (Priyanka Jain)
EXPLANATION
India should develop its own AI governance framework that draws lessons from the strengths and weaknesses of the US, EU, and Chinese approaches, ensuring a tailored, sovereign strategy.
EVIDENCE
She asks the panel whether India should position itself between the US, EU, and China models, highlighting the need for a balanced, home-grown approach [146-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Localising AI regulations to Indian context while drawing lessons from global models is advocated as a best-practice approach [S21][S25][S27].
MAJOR DISCUSSION POINT
India must craft a balanced, home‑grown AI governance model, learning from US, EU, and China
Argument 2
Emphasis on compartmentalization and bounded problem solving to maintain trust (Priyanka Jain)
EXPLANATION
Focusing AI applications on well‑defined, bounded problems and using compartmentalized architectures can reduce systemic risk and preserve user trust.
EVIDENCE
She remarks that compartmentalization is a great way to de-risk AI and that solving bounded problems helps maintain trust while avoiding the pitfalls of an “AI of everything” [236-240].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Compartmentalised AI deployments reduce systemic bias and concentration risk, supporting trust in AI systems [S23][S31].
MAJOR DISCUSSION POINT
Emphasis on compartmentalization and bounded problem solving to maintain trust
Argument 3
AI governance must embed access and inclusion as core design principles to avoid reinforcing existing inequalities.
EXPLANATION
Policies should require representative training data, periodic impact audits, and clear redress mechanisms so that AI‑enabled financial services broaden participation rather than marginalise vulnerable groups.
EVIDENCE
She emphasizes that inclusion cannot be assumed and must be intentionally designed, calling for representativeness in datasets, impact audits, community-level feedback, and institutional redress mechanisms for automated decisions affecting individuals [85-93].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive AI requires representative data, impact audits, and redress mechanisms to prevent structural bias [S23][S38][S1].
MAJOR DISCUSSION POINT
Inclusion‑by‑design in AI governance
Argument 4
Leverage India’s existing digital public infrastructure as a foundation for AI governance
EXPLANATION
Priyanka points out that India’s proven digital public infrastructure—such as UPI and digital identity—offers a solid base onto which AI governance mechanisms can be layered.
EVIDENCE
She references India’s track record of scaling digital public infrastructure that drives inclusion, efficiency and trust, suggesting AI can be embedded on this foundation [14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building AI on top of established digital public platforms (e.g., UPI, digital ID) is recommended to inherit proven inclusion and trust benefits [S5].
MAJOR DISCUSSION POINT
Building AI governance on established digital public infrastructure
Audience
1 argument · 187 words per minute · 555 words · 177 seconds
Argument 1
Call for consent‑backed APIs and regulatory participation for data processors to shape interpretation of rules (Audience)
EXPLANATION
Stakeholders propose a regulatory framework for consent‑backed data‑sharing APIs, allowing data processors to engage with regulators and influence rule interpretation.
EVIDENCE
An audience member suggests developing a consent-backed API for data consumption and seeks regulatory definitions that involve data processors in shaping interpretation of rules [354-360].
MAJOR DISCUSSION POINT
Call for consent‑backed APIs and regulatory participation for data processors to shape interpretation of rules
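A toy sketch of the consent-backed access check the audience member describes is shown below. The consent registry, scopes, and expiry handling are assumptions for illustration only; real consent frameworks (such as account-aggregator-style consent artefacts) involve signed artefacts, purpose limitation, and revocation, none of which are modelled here.

```python
# Minimal sketch of a consent-backed data-access check. Everything here is a
# hypothetical placeholder, not a specification of any regulatory framework.
from datetime import datetime, timezone

# Hypothetical consent registry: (customer_id, data_scope) -> consent expiry.
CONSENTS = {
    ("cust-001", "cash_flow_summary"): datetime(2026, 1, 1, tzinfo=timezone.utc),
}

def fetch_data(customer_id: str, scope: str, now: datetime | None = None) -> dict:
    """Serve data only when an unexpired consent exists for this customer and scope."""
    now = now or datetime.now(timezone.utc)
    expiry = CONSENTS.get((customer_id, scope))
    if expiry is None or expiry < now:
        raise PermissionError("no valid consent on record for this scope")
    return {"customer_id": customer_id, "scope": scope, "data": "..."}  # placeholder

print(fetch_data("cust-001", "cash_flow_summary",
                 now=datetime(2025, 6, 1, tzinfo=timezone.utc)))
```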
Moderator
2 arguments · 16 words per minute · 145 words · 531 seconds
Argument 1
Framing the summit’s focus on AI governance sets the agenda for subsequent discussion
EXPLANATION
The moderator emphasizes that the summit’s overarching theme is to embed AI governance within existing technology regulation, thereby establishing the context for the panel’s contributions.
EVIDENCE
In the opening remarks the moderator thanks participants and states that the discussion will look at AI governance as an embedded layer of existing technology governance, not as a separate lens [1-2]. Later, after the keynote, the moderator thanks the speaker and notes that the insights will set the context for the panel discussion, reinforcing the need to frame the conversation around AI governance [133-136].
MAJOR DISCUSSION POINT
Setting the agenda for AI governance discussion
Argument 2
AI governance should be embedded as an integral layer of existing technology regulation rather than a separate silo
EXPLANATION
The moderator frames the summit’s purpose as integrating AI governance into the current governance structures for technologies, stressing that AI should not be treated as a stand‑alone regulatory domain.
EVIDENCE
In the opening remarks the moderator says the discussion will look at AI governance as an embedded layer within the governance already applied to technologies, not as a separate lens [2].
MAJOR DISCUSSION POINT
Embedding AI governance within existing regulatory frameworks
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Differences
Different Viewpoints
Effectiveness and feasibility of a risk‑based AI regulatory approach
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
Governance pillars: proportionality, fairness, explainability, accountability (Ajay Kumar Chaudhary)
Risk-based approach cannot predict unknown AI risks; may be too stringent or too lax (Sanjeev Sanyal)
Avoid bureaucratic risk-based system; need creative flexible regulation (Sanjeev Sanyal)
Ajay advocates a risk-based, proportional governance framework as a core pillar for AI in finance [50-51][59-63], while Sanjeev argues that risk-based regulation cannot reliably anticipate AI’s emergent risks and may either over-regulate or under-protect, warning against a bureaucratic risk-based system [160-166][238-240].
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-based AI regulation is embedded in sector-specific frameworks such as the Secure Finance Risk-Based AI Policy for banking, which ties sandbox outcomes to tailored rules [S50], and has been endorsed in multistakeholder forums as a balanced approach to AI-related cyber risks [S62].
Purpose and scope of sandbox mechanisms for AI experimentation
Speakers: Praveen Kamat, Murlidhar Manchala
Sandbox experimentation to test governance in a controlled environment (Praveen Kamat)
Sandbox only for compliance breaches; propose expanded sandbox for monitoring and support (Murlidhar Manchala)
Praveen describes the IFSC sandbox as a venue for pilots, risk caps and iterative regulation, emphasizing its experimental role [192-199][284-291], whereas Murlidhar states that the current sandbox is invoked only when a regulated entity suspects a breach and suggests a new sandbox that also provides monitoring and tooling beyond compliance issues [407-411].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulatory sandboxes are employed to test AI solutions for startups and incumbents, with outcomes informing rule-making [S50]; a global consensus on sandbox principles supports responsible innovation across jurisdictions [S51]; the EU AI Act mandates sandbox provisions that differ among member states, reflecting varied sectoral needs [S52]; financial entry barriers for small innovators are noted as a challenge to sandbox participation [S53]; effective sandbox design requires systematic evaluation and stakeholder involvement [S60]; and sandboxes are highlighted as safe spaces for controlled AI deployment [S61].
Characterisation of AI as core financial infrastructure versus an emergent technology
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
AI should be treated as a core financial utility and subject to the same resilience standards as other critical infrastructure (Ajay Kumar Chaudhary)
AI is an emergent technology with behaviours that cannot be fully regulated ex-ante (Sanjeev Sanyal)
Ajay argues that AI is a systemic component of financial infrastructure requiring the same standards of resilience and accountability as any critical utility [44], while Sanjeev emphasizes AI’s emergent, unpredictable nature, stating it cannot be treated like traditional infrastructure and resists full ex-ante regulation [242-245][246-248].
Impact of AI on cybersecurity fundamentals
Speakers: Vikram Kishore Bhattacharya, Ajay Kumar Chaudhary
Generative AI lowers attack barriers but does not change core security principles (Vikram Kishore Bhattacharya)
AI integration with cybersecurity creates new attack vectors and requires anticipatory safeguards (Ajay Kumar Chaudhary)
Vikram maintains that despite AI enabling easier phishing and credential attacks, the underlying security controls such as MFA, strong passwords and regular updates remain unchanged [215-223], whereas Ajay points out that AI introduces amplified cybersecurity risks, including adversarial AI, necessitating proactive safeguards [78-80].
POLICY CONTEXT (KNOWLEDGE BASE)
AI introduces new cyber-threat vectors such as deepfake-based social engineering attacks [S54]; ethical guidelines for AI in cybersecurity stress the need to address these novel risks [S55]; AI dramatically shortens attack development cycles, shifting the threat landscape from months to minutes [S56]; at the same time, AI can reinforce critical infrastructure protection through advanced detection and response capabilities [S58]; and IGF discussions call for specific regulatory measures to manage AI-driven cyber risks [S62].
Unexpected Differences
Different regulatory visions for sandbox utilisation
Speakers: Praveen Kamat, Murlidhar Manchala
Sandbox experimentation to test governance in a controlled environment (Praveen Kamat)
Sandbox only for compliance breaches; propose expanded sandbox for monitoring and support (Murlidhar Manchala)
Both speakers are regulators, yet Praveen envisions the sandbox as a proactive experimental space for AI pilots across jurisdictions [192-199][284-291], while Murlidhar sees it primarily as a remedial tool triggered by suspected breaches and only later suggests a broader monitoring role [407-411]. This contrast in purpose was not anticipated given their shared regulatory background.
POLICY CONTEXT (KNOWLEDGE BASE)
While a broad international consensus underpins sandbox use for AI, national implementations diverge, exemplified by varying EU member-state approaches to the AI Act sandbox requirements [S51][S52]; financial and resource constraints for startups shape regulatory design choices [S53]; successful sandbox programmes depend on stakeholder engagement, risk mitigation, and robust monitoring frameworks [S60]; and positive assessments highlight sandboxes as mechanisms for safe AI experimentation [S61].
Whether AI fundamentally changes cybersecurity versus merely lowering attack barriers
Speakers: Vikram Kishore Bhattacharya, Ajay Kumar Chaudhary
Generative AI lowers attack barriers but does not change core security principles (Vikram Kishore Bhattacharya)
AI integration with cybersecurity creates new attack vectors and requires anticipatory safeguards (Ajay Kumar Chaudhary)
Vikram asserts that despite AI-enabled threats, the foundational security controls remain the same [215-223]; Ajay counters that AI introduces novel adversarial threats, demanding new safeguards [78-80]. The divergence in assessing AI’s impact on security fundamentals was unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence shows AI reduces the time needed to develop sophisticated attacks from months to minutes, indicating a fundamental shift in cyber threat dynamics [S56]; AI-generated deepfakes create entirely new categories of social-engineering attacks, beyond simple barrier reduction [S54]; IGF deliberations argue that AI introduces unique cyber risks that require dedicated regulatory responses, not just lower thresholds for existing threats [S62]; and ethical discourse reinforces the view of AI as a transformative factor in cybersecurity [S55].
Overall Assessment

The panel shows strong consensus on the need for trustworthy AI, inclusion, and interdisciplinary governance, but diverges on the suitability of risk‑based regulation, the classification of AI as infrastructure, the design of sandbox mechanisms, and the extent to which AI reshapes cybersecurity. These disagreements reflect differing views on regulatory flexibility versus predictability and on the balance between innovation and systemic risk.

Moderate disagreement: while participants align on overarching goals, they propose contrasting regulatory tools and conceptual framings, indicating that achieving a unified AI governance model will require negotiation between risk‑based, ex‑ante, and experimental approaches.

Partial Agreements
All speakers share the goal of trustworthy, safe AI deployment, but differ on the mechanisms: Ajay stresses embedded governance throughout the AI lifecycle [46-48][50-54]; Sanjeev calls for ex‑ante accountability and kill‑switches [180-185][238-244]; Murlidhar focuses on guardrails, transparency and incident reporting as tools [204-207][296-302]; Vikram emphasizes standards, audits and skill development [224-233].
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal, Murlidhar Manchala, Vikram Kishore Bhattacharya
Governance must be built-in by design, not an after-thought (Ajay Kumar Chaudhary)
Ex-ante responsibility and "skin-in-the-game" for algorithm creators (Sanjeev Sanyal)
Guardrails should be an enabling instrument, with glass-box transparency and incident reporting (Murlidhar Manchala)
Adoption of standards, third-party audits and upskilling are essential to manage AI-related threats (Vikram Kishore Bhattacharya)
All agree that AI should broaden financial access, yet differ on implementation: Ajay calls for representative training data and impact audits [42-44][91-93]; Priyanka stresses policy design to ensure access and inclusion [85-93]; Sanjeev proposes bounded, compartmentalized AI to avoid systemic bias [238-244].
Speakers: Ajay Kumar Chaudhary, Priyanka Jain, Sanjeev Sanyal
AI can expand financial inclusion but must avoid reinforcing historical biases (Ajay Kumar Chaudhary)
Inclusion must be designed into AI governance, with representativeness and redress (Priyanka Jain)
Compartmentalized AI solving bounded problems can support inclusion while limiting bias (Sanjeev Sanyal)
Takeaways
Key takeaways
AI must be treated as core financial infrastructure and governed by design, not as an after-thought overlay.
Four pillars of embedded AI governance were highlighted: proportionality (risk-based intensity), fairness/non-discrimination, explainability/transparency, and clear accountability.
Continuous monitoring for model drift, bias, and operational concentration risk is essential throughout the AI lifecycle.
Ex-ante "skin-in-the-game": algorithm creators and senior management must be held responsible before failures occur.
Traditional risk-based regulatory models are inadequate for emergent AI; they may be either too stringent or too lax.
A supervisory relief or "safe-harbor" regime is proposed for firms that implement robust, auditable AI controls and transparent redress mechanisms.
Sandbox environments (especially in the IFSC) are seen as practical venues for testing AI models, governance frameworks, and cross-regulatory interoperability.
AI can dramatically expand financial inclusion, but only if training data are representative and impact audits with grievance redress are institutionalised.
Sovereign data ownership and a domestic AI stack (chips, cloud, foundation models) are critical for economic and national security; incentives for data centres and home-grown models were noted.
Generative AI heightens cybersecurity threats but does not overturn fundamental security principles; active AI-in-the-loop defences, standards compliance, and up-skilling are required.
India should craft a balanced, home-grown AI governance model that draws lessons from the US (ex-post), EU (compliance-led) and China (state-led) approaches, emphasizing compartmentalisation and bounded problem solving.
Resolutions and action items
RBI and other regulators to consider a supervisory relief framework that rewards firms with documented AI governance (model inventories, bias testing, continuous monitoring).
Expand the IFSC sandbox to allow interoperable testing across RBI, SEBI, IRDAI, and IFSC for AI-driven financial products.
Develop a consent-backed API standard for data sharing, with stakeholder participation from data processors and regulators.
Promote domestic investment in AI infrastructure (data centres, semiconductor manufacturing, cloud capacity) through policy incentives such as tax holidays.
Introduce periodic, independent AI audit mechanisms (e.g., "chartered AI audit") for high-impact models.
Create clear ex-ante accountability matrices that assign responsibility for AI outcomes to algorithm developers, data owners, and senior management.
Implement a framework for impact audits and redress mechanisms to ensure inclusive AI outcomes.
Unresolved issues
How to operationalise a risk-based regulatory approach for AI when many risks are unknown and emergent.
Legal framework for ownership of AI-generated innovations and copyrighted material (prompt-owner vs. data-owner vs. model-owner).
Specific mechanisms for cross-jurisdictional data handling between the IFSC (foreign-currency) and domestic Indian regulators.
Detailed standards for AI-related cybersecurity incident reporting and real-time response automation.
Exact criteria and thresholds for granting supervisory relief or safe-harbor status to AI-enabled firms.
Procedures for ensuring representative training data and preventing bias in AI models at scale.
Suggested compromises
Adopt a proportional, risk-based governance model that is flexible enough to evolve with AI, avoiding both over-regulation and regulatory vacuum.
Provide supervisory relief (lighter oversight) to firms that demonstrate robust, auditable AI controls while retaining the right to intervene if systemic risk emerges.
Use compartmentalised AI deployments (bounded problem domains) to limit systemic exposure and energy consumption.
Combine ex-ante accountability (clear responsibility assignments) with ex-post penalties for severe failures, balancing prevention and deterrence.
Leverage sandbox environments as a middle ground for innovation and regulation, allowing controlled experimentation before full market rollout.
Thought Provoking Comments
I will use the word ‘Mano’ – humanity – instead of ‘responsible AI’ because it captures moral, ethical, sovereign, inclusive and accountable dimensions in a single word.
Reframes AI governance around a human‑centric value rather than a technical checklist, linking policy, ethics and national identity in one concept.
Set a unifying narrative for the rest of the discussion, prompting other panelists to address how AI can be aligned with broader societal goals rather than isolated compliance.
Speaker: Ajay Kumar Chaudhary
Governance cannot be an overlay applied after innovation has scaled; it must be embedded by design into the AI life‑cycle.
Highlights the necessity of proactive, design‑level controls rather than reactive regulation, a theme that recurs throughout the panel.
Provided a foundational premise that guided subsequent debates on risk‑based approaches, sandbox experimentation, and the need for continuous monitoring.
Speaker: Ajay Kumar Chaudhary
Historically, the Europeans dominated the world by taking technologies invented elsewhere (printing press, gunpowder, mathematics). India must engage now or risk losing AI leadership.
Uses a powerful historical analogy to illustrate the strategic risk of technological complacency, shifting the conversation from technical details to geopolitical stakes.
Prompted panelists to consider national sovereignty, data ownership, and the urgency of building domestic AI capabilities, leading to later remarks on data oil and sovereign AI stacks.
Speaker: Sanjeev Sanyal
A risk‑based regulatory system for AI is fundamentally flawed because the technology is emergent and its risks are unknowable ex‑ante; we need ex‑post accountability and clear pre‑defined responsibility.
Challenges the prevailing regulatory paradigm, arguing that traditional risk assessments cannot capture AI’s dynamic nature.
Shifted the tone from prescriptive risk frameworks to a discussion on compartmentalisation, ‘firewalls’, and assigning skin‑in‑the‑game, influencing later suggestions on auditability and liability.
Speaker: Sanjeev Sanyal
AI should be compartmentalised – run in bounded, well‑defined problems with clear ‘switch‑off’ buttons and Chinese walls – rather than an ‘AI of everything’ which could cause systemic failures.
Introduces the concept of modular AI deployment as a safety mechanism, borrowing from financial market safeguards.
Guided the conversation toward practical governance tools such as firewalls, audit trails, and the need for clear jurisdictional boundaries, echoed later by other speakers.
Speaker: Sanjeev Sanyal
GIFT City, being a clean-slate jurisdiction with its own regulator, can act as a sandbox for AI governance, allowing experimentation without legacy baggage.
Proposes a concrete institutional experiment to test governance models, linking policy to a real‑world testbed.
Opened a new line of discussion about sandbox frameworks, interoperability across regulators, and the practical steps needed to scale AI innovation safely.
Speaker: Praveen Kamat
Regulators should offer calibrated supervisory relief – a ‘safe harbour’ – for firms that embed robust controls, model inventories, bias testing and continuous monitoring.
Suggests an incentive‑based regulatory approach that rewards good governance rather than only penalising failures.
Prompted dialogue on balancing enforcement with encouragement, influencing later remarks on awards, glass‑box transparency, and the role of audits.
Speaker: Murlidhar Manchala
Generative AI lowers the barrier for threat actors but does not fundamentally change cybersecurity principles; we need active participation, standards like ISO/NIST, and AI‑in‑the‑loop for faster response.
Adds a security dimension to the governance conversation, emphasizing that existing cyber hygiene remains vital even as AI tools evolve.
Expanded the scope of the discussion beyond governance to operational resilience, leading to consensus on the need for continuous monitoring and skill development.
Speaker: Vikram Kishore Bhattacharya
We need to start thinking about copyright and ownership of AI‑generated outputs – who owns the innovation: the prompt writer, the data source, or the model creator?
Raises a novel legal challenge that has not been widely addressed in the financial AI context, highlighting future litigation and policy gaps.
Shifted the conversation toward intellectual property considerations, prompting the audience to contemplate regulatory frameworks for AI‑generated content.
Speaker: Sanjeev Sanyal
Data is the new oil; India must secure rights to its sovereign data and build the ‘oil rigs’ (data centres) and ‘refineries’ (AI models) to process it, rather than just hoarding raw data.
Frames data sovereignty in economic terms, linking infrastructure policy to AI capability building.
Reinforced earlier points about sovereign AI stacks, influencing the panel’s emphasis on domestic data centres, tax incentives, and the need for a national AI ecosystem.
Speaker: Sanjeev Sanyal (audience follow‑up)
Overall Assessment

The discussion was driven forward by a handful of pivotal insights that reframed AI governance from a technical checklist to a human‑centric, sovereign, and legally nuanced endeavor. Ajay Kumar Chaudhary’s ‘Mano’ framing and call for embedded governance set the conceptual foundation. Sanjeev Sanyal’s historical analogy and sharp critique of risk‑based regulation introduced a strategic urgency and challenged conventional regulatory thinking, prompting the panel to explore compartmentalisation, clear liability, and the need for ex‑post accountability. Praveen Kamat’s proposal of GIFT City as a sandbox provided a tangible experimental venue, while Murlidhar Manchala’s safe‑harbour suggestion offered a pragmatic incentive model. Vikram Kishore Bhattacharya’s security perspective broadened the scope to operational resilience, and Sanyal’s copyright and data‑sovereignty remarks opened new legal and economic dimensions. Collectively, these comments redirected the conversation from abstract policy to concrete mechanisms (sandboxing, audits, liability frameworks, and infrastructure investment), thereby deepening the analysis and shaping a multidimensional roadmap for AI governance in India’s financial sector.

Follow-up Questions
How can governance be embedded throughout the AI lifecycle in financial services (from design to post‑deployment monitoring) to ensure fairness, transparency, accountability and proportionality?
Embedding governance is essential to preserve trust, resilience and inclusion when AI systems influence credit decisions, fraud detection and other critical financial outcomes.
Speaker: Ajay Kumar Chaudhary
What strategies can mitigate operational concentration risk in AI infrastructure (semiconductor chips, cloud platforms, foundation models) to protect economic sovereignty and financial stability?
Reliance on a few global suppliers creates systemic vulnerability; diversifying supply chains is crucial for national security and stable financial markets.
Speaker: Ajay Kumar Chaudhary
How can AI literacy and governance capability be built at the board and senior‑management level across financial institutions?
Leadership must understand model architecture, validation, vendor dependence and ethical implications to exercise effective oversight and “skin‑in‑the‑game.”
Speaker: Ajay Kumar Chaudhary
What mechanisms (representative training data, periodic impact audits, community feedback) are needed to ensure AI systems are inclusive and do not perpetuate structural bias?
Without intentional design, AI may exclude marginalized groups, undermining the inclusive goals of India’s digital finance agenda.
Speaker: Ajay Kumar Chaudhary
Given the emergent nature of AI, how can a regulatory framework balance risk‑based oversight with ex‑post accountability without stifling innovation?
Traditional risk‑based models may be too rigid for AI’s unknown risks; a flexible, adaptive approach is required.
Speaker: Sanjeev Sanyal
How should AI systems be compartmentalized (e.g., firewalls, “Chinese walls”) to limit systemic spill‑over and energy consumption?
Compartmentalization reduces the chance that a failure in one AI module cascades across the financial system.
Speaker: Sanjeev Sanyal
How can ex‑ante responsibility (“skin‑in‑the‑game”) be assigned among algorithm developers, data providers and end‑users for AI outcomes?
Clear liability incentives encourage careful design and prevent blame‑shifting after failures.
Speaker: Sanjeev Sanyal
What audit and explainability standards (e.g., chartered AI audit) should apply to AI systems that cross predefined risk thresholds?
Regular, independent audits can detect drift, bias or unsafe behavior before systemic damage occurs.
Speaker: Sanjeev Sanyal
Should regulators provide a safe‑harbor or calibrated supervisory relief for firms that implement robust AI controls and experience first‑time lapses?
Rewarding proactive governance encourages firms to adopt best practices without fear of disproportionate penalties.
Speaker: Murlidhar Manchala
Which cybersecurity standards and verification processes (ISO, NIST, third‑party reports) are needed for cloud service providers and banks to secure AI‑enabled financial services?
Generative AI lowers attack barriers; consistent standards are vital to protect the financial ecosystem.
Speaker: Vikram Kishore Bhattacharya
How can regulators and cloud providers develop active participation mechanisms to stay nimble as AI technologies evolve?
A proactive stance is required to adapt quickly to new threats and leverage AI for faster incident response.
Speaker: Vikram Kishore Bhattacharya
How should India position its AI governance strategy amid divergent global models (US innovation‑led, EU compliance‑led, China state‑led) to ensure access, inclusion and competitiveness?
Strategic positioning will affect India’s ability to capture AI benefits while safeguarding its citizens.
Speaker: Sanjeev Sanyal
What reforms are needed in copyright law to address ownership of AI‑generated innovations and data‑derived outputs?
Clarifying intellectual‑property rights is essential for legal certainty and incentivizing AI development.
Speaker: Sanjeev Sanyal
How can India leverage its sovereign data assets for AI model development while ensuring data rights, privacy and processing capabilities?
India’s massive data pool is a strategic asset; proper rights and processing infrastructure are needed to turn it into AI value.
Speaker: Sanjeev Sanyal (responding to audience)
How can the IFSC sandbox be extended to inter‑operate with other regulators (RBI, SEBI, IRDAI) for cross‑jurisdictional AI pilots?
A unified sandbox would enable broader experimentation and harmonised regulatory oversight across financial sectors.
Speaker: Praveen Kamat; Murlidhar Manchala
What regulatory framework could define consent‑backed APIs for data consumption, giving data processors a seat at the table in rule‑making?
Clear consent‑based data sharing standards are needed to balance innovation with privacy and compliance.
Speaker: Audience member (Aditya) and discussed by Praveen Kamat & Murlidhar Manchala
What are the under‑estimated risks of AI in finance (e.g., systemic model drift, concentration, emergent behavior) that need deeper investigation?
Identifying overlooked risks is critical for designing effective safeguards before adverse outcomes materialise.
Speaker: Murlidhar Manchala; Vikram Kishore Bhattacharya

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Towards a Safer South Launching the Global South AI Safety Research Network

Towards a Safer South Launching the Global South AI Safety Research Network

Session at a glanceSummary, keypoints, and speakers overview

Summary

The event marked the launch of the Global South Network for Trustworthy AI, introduced by Dr. Urvashi Aneja at the India AI Impact Summit to address AI deployment challenges in the Global South [8-9]. She highlighted that AI is rapidly being used in critical sectors across the Global South but that low institutional capacity and deep inequities create significant risks, and that the region is under-represented in global safety and governance structures [11-18]. Independent civil-society organisations were presented as uniquely positioned to provide grounded evidence from real-world deployments that can inform global benchmarks and standards [19-22].


The network’s core activities will include building an independent evidence base, conducting contextual real-world assessments, and advancing evaluation science beyond existing benchmarks [29-33]. Specific flagship projects for the coming year were announced: multilingual AI benchmarks with the Collective Intelligence Project, a gender-harm taxonomy with GXD Hub, and work to link evaluation outcomes to public-policy procurement mechanisms, which are seen as a lever for responsible innovation in the Global South [43-52]. Mr. Abhishek Singh emphasized that safe and trusted AI is a universal goal but that current multilingual benchmarks are lacking, noting India’s 22 official languages as an example and praising the New Delhi Frontier AI commitments as a step toward shared data and evaluation tools [66-77][88-92]. He warned that without capacity-building and resource sharing, the Global South will remain excluded from shaping AI safety standards [84-87][98-104].


Ambassador Philip Thigo reinforced the urgency of inclusion, calling the network timely yet late, and proposed regional nodes, multilingual benchmark datasets, an annual red-team exercise, and a Global South AI Safety Report to integrate the network into multilateral processes such as the UN AI governance panel [138-146][170-178][179-182]. Dr. Rachel Sibande argued that safety must be redefined to reflect local cultural, gender, and linguistic contexts, illustrating how mistranslation of a pregnant mother’s warning could miss a critical health signal [216-227]. Ms. Chenai Chair added that gender-biased voice interfaces and the diversity of African languages can exacerbate existing inequalities and even turn benign technologies into surveillance tools [240-269].


Natasha Crampton from Microsoft described the challenge of scaling community-led, multilingual evaluations to thousands of languages and stressed the need for sustainable, ongoing assessment processes [276-284]. Amir Banifatemi pointed out that safety is poorly defined, lacks financial incentives, and suffers from talent and infrastructure gaps, proposing open-source evaluation tools and incident-reporting systems to close feedback loops, especially where latency and regulatory mechanisms are weak [296-311][312-322]. Balaraman Ravindran noted the proliferation of overlapping AI safety initiatives and called for coordinated effort through a single node in the global accountability network to avoid duplication and amplify impact [330-337][338-342].


The speakers agreed that the network will serve as connective tissue between global governance, technology developers, and on-the-ground stakeholders, aiming to make AI trustworthy and inclusive for the Global South [37-39][59-60].


Keypoints


Major discussion points


Urgent need for trustworthy AI in the Global South and the current under-representation of these regions in global safety governance.


The speakers note that AI is rapidly deployed in critical sectors across the Global South, but low institutional capacity and deep inequities create high risks, while the region remains “under-represented in global safety and governance infrastructures” and often lacks its own oversight institutes [11-14][15-18].


The Global South Network for Trustworthy AI as a civil-society-driven platform to generate real-world evidence, improve contextual evaluation, and advocate for inclusive governance.


The network aims to “build an independent evidence base,” conduct “real-world deployment assessment,” and push the “science of evaluations” beyond standard benchmarks, while also “field building” and providing “connective tissue” between global governance and on-the-ground realities [29-33][34-38][43-50].


Key structural challenges identified: multilingual and cultural mismatches, limited access to compute, concentration of benchmark-setting power, and gaps in talent and infrastructure.


Participants highlight the scarcity of “multilingual benchmarks” for the many languages spoken in the Global South, the “access to compute” problem for researchers, the fact that “benchmarks are not neutral” and are often defined by a handful of institutions, and the lack of “talent inclusion” and appropriate “infrastructure” for evaluation [73-78][158-162][165-166][304-311].


Planned flagship projects to address these gaps, including multilingual benchmark development, gender-harm taxonomy, procurement-lever strategies, and sector-specific evaluations (e.g., health information systems).


The network will work with partners on “benchmarks for multilingual AI,” build a “taxonomy of gender harm,” support “procurement” as a lever for responsible innovation, and evaluate “labor market impacts” and “health information systems” in the Global South [43-50][54-58][59-60].


Calls for coordinated, regional, and multilateral structures to amplify impact and avoid duplication of effort.


The Ambassador proposes “regional nodes” and a “Global South AI Safety Report,” while other speakers stress the need to “harmonize” the many emerging initiatives, integrate the network into the UN AI governance process, and create a shared “steering committee” that includes Indian and Kenyan representatives [171-179][184-186][330-342].


Overall purpose / goal of the discussion


The session was convened to launch the Global South Network for Trustworthy AI and to articulate its mission: creating a civil-society-led ecosystem that generates context-specific evidence, builds multilingual and culturally aware evaluation tools, and advocates for the inclusion of Global South perspectives in global AI safety standards and governance frameworks.


Overall tone and its evolution


– The opening remarks are enthusiastic and celebratory, thanking partners and expressing excitement about the launch [4-9][11-14].


– The conversation then shifts to a problem-focused, analytical tone, detailing systemic gaps, risks, and technical challenges [15-22][73-78][158-166].


– As the panel proceeds, the tone becomes collaborative and solution-oriented, highlighting concrete project plans, regional coordination ideas, and commitments from industry and multilateral actors [43-50][171-179][348-349].


– The closing moments retain a hopeful and forward-looking tone, emphasizing rapid action, partnership, and the urgency of turning discussion into tangible outcomes [353-358].


Overall, the discussion moves from celebration of the network’s inception, through a sober assessment of existing deficiencies, to a constructive agenda for collective action.


Speakers

Dr. Urvashi Aneja – Founder and Director of Digital Futures Lab; host and moderator of the session.


Mr. Abhishek Singh – Under-Secretary, Ministry of Electronics and Information Technology, Government of India [S7].


Ambassador Philip Thigo – Special Envoy on Technology, Republic of Kenya [S4].


Mr. Quintin Chou-Lambert – Chief of Office and AI Lead, UN Office for Digital and Emerging Technologies [S16].


Ms. Natasha Crampton – Vice President and Chief Responsible AI Officer, Microsoft [S17].


Dr. Rachel Sibande – Senior Program Officer, AI for Africa, Gates Foundation [S10].


Ms. Chenai Chair – Director, Masakhane African Language Hub [S12].


Dr. Balaraman Ravindran – Professor, IIT Madras; Head, Center of Responsible AI, IIT Madras; member of the UN scientific panel on AI [S1][S2].


Mr. Amir Banifatemi – (Speaker; specific title not stated in the transcript).


Additional speakers:


None identified beyond the list above.


Full session reportComprehensive analysis and detailed insights

The session opened with Dr Urvashi Aneja welcoming participants to the India AI Impact Summit and formally launching the Global South Network for Trustworthy AI. She opened the panel by asking Dr Rachel Sibande where clarity is lacking about safe AI in the Global South [210-214]. Aneja highlighted that AI is being deployed rapidly across health, education, the judiciary, and government in the Global South, creating “immense” opportunities but also “immense” risks because many of these contexts suffer from low institutional capacity, deep societal inequities and low literacy levels [11-14]. She warned that the region is “under-represented in global safety and governance infrastructures” and that many countries lack their own oversight bodies, leaving local concerns at risk of being ignored [15-18].


Aneja then positioned independent civil-society organisations as uniquely suited to fill this gap, arguing that their proximity to real-world deployments enables them to surface risks invisible to laboratory testing [19-21]. The Network’s core mission is to build an independent evidence base, conduct contextual real-world assessments, and advance the science of evaluation because existing benchmarks do not capture all societal risks [29-33]. The network aims to give technology companies designing tools and safety infrastructure, as well as governments and international organisations shaping the global AI-governance architecture, visibility into real-world impacts [70-73]. It will also act as connective tissue between the global governance architecture, the global safety infrastructure, and what is happening on the ground [70-73].


Five flagship projects for the first year were announced:


* development of multilingual AI benchmarks in partnership with the Collective Intelligence Project and CARIA [43-45];


* creation of a taxonomy of gender-related harms with GXD Hub and the Global Centre for AI Governance to improve incident-reporting databases [46-47];


* work on procurement levers, linking evaluation outcomes to public-policy procurement to shape markets for responsible innovation [48-53];


* a labour-market impact study; and


* health-information-system evaluations to test whether large language models meet clinicians’ needs in the Global South [54-58][59-60].


Mr Abhishek Singh reinforced that safe and trustworthy AI is a universal goal, but identified a critical shortfall: most benchmarks are English-centric, ignoring the 22 official languages of India and the linguistic diversity of other Global South nations [73-77]. He praised the New Delhi Frontier AI commitments, which require model developers to share usage data and to publish multilingual performance benchmarks, and asked how compliance can be ensured and capacity built across the region [84-92][98-104]. Singh cautioned that without tools and benchmarks, merely identifying risks is insufficient [71-73].


Ambassador Philip Thigo echoed the urgency, noting that the Global South has been “systematically excluded” from safety conversations and that Kenya is currently the only Global South member of the international network of AI safety institutes [138-141]. He enumerated four structural gaps: limited team capacity, access to compute, linguistic and cultural mismatch, and the non-neutrality of benchmarks, which concentrates power in a few institutions [157-166]. He proposed establishing regional nodes (e.g., an African hub) [160-162], creating multilingual benchmark datasets, organising an annual red-team exercise, and publishing a Global South AI Safety Report to feed into multilateral processes such as the UN AI-governance panel [170-179][180-182].


Dr Rachel Sibande (Gates Foundation) argued that safety must be re-defined to reflect local cultural, gender, religious and linguistic norms. She illustrated the danger of mistranslation with a pregnant mother’s phrase “waters have broken”, which could be rendered as “I have thrown away water” and thus miss a critical health alert [216-227]. She called for community-informed analyses of societal, ethical and distributional risks [216-218][229-232].


Ms Chenai Chair, Director of the Masakhane African Language Hub, added that developers often overlook user experience and gender dynamics, citing a voice-enabled agricultural tool that used a male-sounding voice in a context of gender-based violence, thereby exacerbating existing inequalities [236-247]. She highlighted the vast linguistic diversity of Africa (over 2,000 documented languages, of which Masakhane currently supports only about 50), which leads to mismatches when tools are deployed in local dialects [248-255]. She warned that benign technologies can quickly become surveillance tools when communities are not consulted, giving the example of luggage-tracking devices being misused [256-269].


From the industry side, Natasha Crampton (Microsoft) described the challenge of scaling community-led, multilingual evaluations. She noted that projects like Samishka, which combined civil-society insight with research, must be turned into sustainable, ongoing evaluation pipelines that can operate across thousands of languages and cultural settings [276-284]. She stressed that benchmarks cannot be a one-off activity; they need to be run continuously to capture shifts in model behaviour [281-284].


Amir Banifatemi (ITS Rio) pointed out that safety is poorly defined and rarely costed into financial planning, meaning firms lack incentives to prioritise it [296-311]. He identified gaps in compute access, talent inclusion, and system-wide evaluation tools, arguing that current assessments focus narrowly on model design and ignore the broader ecosystem of APIs, data pipelines and infrastructure [312-322]. He advocated for open-source incident-reporting tools that capture contextual harms and for mechanisms to accelerate feedback loops in the Global South, where institutional latency hampers rapid response [321-322].


Professor Balaraman Ravindran (IIT Madras) observed a proliferation of overlapping AI-safety initiatives-including networks in Africa, China and UN-led capacity-building programmes-creating a risk of duplication [330-342]. He urged the Global South Network to serve as a single node within the broader accountability network, coordinating efforts and harmonising activities to amplify impact [330-337][338-342].


Mr Quintin Chou-Lambert (UN Office for Digital and Emerging Technologies) warned that technical standards alone cannot ensure safety, as a one-size-fits-all approach fails to capture contextual nuances [191-194]. He argued that field-tested, low-resource examples are essential to surface challenges that large-scale models overlook, and that the Network can feed such empirical evidence into the UN Global Dialogue on AI Governance [195-199].


Rapid-fire commitments


– Microsoft pledged to honour the New Delhi Frontier AI commitments by sharing multilingual data and investing $50 billion by the end of this decade in Global South infrastructure to support scalable evaluation [348-349].


– The Gates Foundation committed to institutionalise safety evaluation at the point of deployment, ensuring issues are caught early [355].


– Masakhane announced a benchmarking initiative for African languages to be delivered within the year [356].


– Amir’s labs in Bangalore and San Francisco will release open-source, culturally contextual incident-reporting tools for public use [358].


Across the discussion, participants converged on the need for multilingual, culturally aware benchmarks (Aneja, Singh, Crampton, Thigo, Chenai Chair, Sibande) [29-33][34-38][43-45][73-77][174-175][216-227][236-247]; they agreed that civil-society insight and inclusive talent are essential for surfacing risks (Aneja, Thigo, Chenai Chair, Banifatemi, Sibande) [19-22][138-141][236-247][304-315][216-218]; and they recognised capacity-building, compute access and infrastructure investment as prerequisites (Singh, Thigo, Crampton, Banifatemi, Aneja) [71-73][158-160][276-284][296-311][34-38].


Key points of disagreement emerged around who should define benchmarks. Singh called for multilingual benchmarks to address risks [73-77], while Thigo warned that benchmarks are not neutral and should not be set by a handful of institutions [161-165]; Banifatemi added that evaluations must consider the whole system, not just model performance [319-320]. On incentives, Banifatemi argued that safety is not costed into financial planning, reducing corporate motivation [307-311], whereas Singh stressed that safety should complement, not stifle, innovation [105-108]; Aneja suggested using public procurement as a lever to drive responsible AI [50-53]. Regarding the scope of safety, Singh focused on technical risk identification [68-73], Thigo broadened it to include environmental, misinformation and lifecycle harms[154-156], Banifatemi emphasised system-wide evaluation [319-320], and Sibande called for a culturally grounded definition of harm [216-218].


Take-aways: (i) AI deployment in the Global South offers great promise but also risks amplifying existing social, gender, linguistic and environmental harms; (ii) identifying risks is insufficient without tools, benchmarks and capacity-building; (iii) the Global South is systematically under-represented in AI-safety governance, and the Network aims to provide field-tested evidence and act as a bridge to global policy forums; (iv) English-centric benchmarks must be replaced by multilingual, culturally aware ones; (v) capacity gaps-compute, talent, sustainable evaluation mechanisms-must be addressed; (vi) governance must be de-concentrated, ensuring benchmarks are not dictated by a few institutions and that safety is financially incentivised; (vii) coordination across overlapping initiatives is essential to avoid duplication and maximise impact [29-33][34-38][43-50][174-175][216-227][236-247][276-284][319-322][330-342].


Unresolved issues include: (a) a precise, universally accepted definition of “safety” and “harm” that captures diverse cultural contexts; (b) concrete mechanisms to cost safety into corporate financial planning or impose penalties for unsafe AI; (c) design of ongoing, scalable evaluation frameworks beyond one-off tests; (d) equitable access to high-performance compute for Global South researchers; (e) detailed pathways for the Network to integrate with UN AI-governance processes; (f) strategies to de-concentrate benchmark authority and ensure inclusive risk prioritisation; (g) methods to close the accountability loop so that technical evaluations translate into tangible citizen-level benefits [216-218][307-311][281-284][158-160][170-179][161-165][180-182].


Suggested compromises involve establishing regional nodes to balance rapid activation with local expertise, adopting an open-source, collaborative benchmarking framework that allows multiple institutions to contribute, leveraging the New Delhi Frontier AI commitments as a baseline while expanding multilingual evaluation work, combining top-down UN engagement with bottom-up civil-society evidence generation, and using pilot projects and incremental infrastructure investments (e.g., Microsoft’s $50 bn pledge) as stepping stones toward a sustainable, global evaluation ecosystem [348-349][170-176][S3].


Overall, the launch marked a decisive step toward a coordinated, inclusive AI-safety ecosystem for the Global South, with broad consensus on the need for multilingual, context-sensitive evaluation and capacity-building, alongside notable divergences on benchmark governance, incentive structures and the breadth of safety considerations that will shape the Network’s future trajectory.


Session transcriptComplete transcript of the session
Dr. Urvashi Aneja

Thank you. Thank you. Good evening, everyone. My name is Urvashi Aneja. I am the founder and director of Digital Futures Lab. And I am so excited to see all of you here and to have you all here for the launch of this network. So it’s a real pleasure to welcome you to the launch of the Global South Network for Trustworthy AI here at the India AI Impact Summit. On behalf of Digital Futures Lab and our other founding partners, CeRAI from IIT Madras, the Global Center for AI Governance, ITS Rio, International Innovation Corps, thank you all for being here. And we’re especially grateful to Mr. Abhishek Singh and Ambassador Philip Thigo and Mr.

Quintin Chou and to all our distinguished speakers and guests who are joining us today. Across the Global South, AI systems are being rapidly deployed in critical social sectors such as healthcare, education, judiciary, and in government. And while the opportunities are immense, many of these contexts are also marked by low institutional capacity, deep societal inequities, polarisation, and populations with low levels of literacy. So while the potential is immense, the risks and harms are also immense. And so it’s particularly important that we figure out ways to make AI safe and trustworthy in these contexts to ensure not only that we protect the populations and to ensure that we don’t exacerbate existing harms, but also to ensure that we build the infrastructure for safe and inclusive AI adoption.

Unfortunately, Global South organizations, Global South communities, Global South states remain underrepresented in global safety and governance infrastructures. And many countries in the Global South are actually unlikely to even have in the near term their own safety or oversight institutes. And there’s a real risk, therefore, that the concerns and priorities of these countries, of these communities remain underrepresented in the global safety infrastructure. And precisely those countries that have the most potential or the most opportunity to leverage AI. Independent civil society organizations are uniquely positioned to address this gap. Their proximity to real -world deployment contexts enables them to surface risks that are invisible to lab -based evaluations or testing. The form of grounded evidence that civil society organizations can bring can inform global safety benchmarks, standard -setting processes, and risk assessments, providing corrective signals to technical and regulatory institutions.

The Global South Network for Trustworthy AI works to advance exactly these objectives – to evaluate the real-world impact of AI systems, to build the trust and oversight mechanisms localized to different linguistic, cultural, and infrastructural contexts, and to elevate Global South perspectives in global AI governance forums. It is particularly encouraging that this initiative also aligns closely with the recently announced New Delhi Frontier AI commitments.

The Global South Network for Trustworthy AI brings together some of the leading research institutions from across the Global South. We are joined by a community of organizations from Asia, from Africa, from Latin America, whose names you see displayed behind you. I also want to take this opportunity to highlight some of the key activities that we’re going to be doing as part of the network. I think one of the key things that we want to do as part of the network is to really build an independent evidence base to generate community-informed analysis of the societal, ethical, and distributional risks of AI systems across diverse contexts.

We also want to do real -world deployment assessment to conduct contextual and public evaluations of models and applications across diverse social contexts. We also want to push the field of evaluations, push the science of evaluations, where we say that benchmarks are very important, but benchmarks as they stand today do not necessarily capture all the societal risks that we see in the Global South. So how do we ensure that the evaluation work that we’re doing also captures some of those harms? In some sense, what we want to do with the network is field building. We want to bring together Global South civil society organizations to pool in their collective intelligence, to pool in their capacities, and to advocate together for the representation of Global South concerns on global governance forums.

So what we are trying to do here is field building within the Global South around AI safety and around building that trust infrastructure. And eventually what we hope that all of this amounts to is collective advocacy. We see an important role that the network will play in creating a connective tissue between the global governance architecture, between the global safety infrastructure, and what’s happening on the ground. We hope the network can provide that visibility to real -world impact. to technology companies who are designing tools, who are designing safety infrastructure, as well as to governments and international organizations who are building the architecture of global AI governance. So with that, I want to thank you all. Oh, wait, I have one more thing to share with all of you.

I’m not ready to thank you yet. I also want to showcase some of the projects that we’ll be doing in the coming year. Picking up on yesterday’s commitments, one of the things that we’ll be doing is building benchmarks for multilingual AI. This is with our network partners, the Collective Intelligence Project and CARIA, and we’re really excited to start this work. We’re also going to be doing work on gender and safety. This is with our partners at GXD Hub and the Global Center for AI Governance to build a taxonomy of gender harm so that we can start building a more robust incident reporting database when it comes to gender -related harms and really advance gender safety in digital spaces.

The third piece that we’re going to be working on this year is around procurement. All of the evaluation work that we do, all the benchmarks that we build, all of that has to eventually feed into public policy. And so we hope that some of this work can support procurement. And procurement, we think, is a really important lever for countries in the global south to shape markets for responsible innovation. I think we’ve all heard a lot about the kind of third way of AI governance that India brings to the global governance landscape. And procurement can be an important lever of making that third way a reality and setting the bar for what responsible innovation looks like.

Like I mentioned earlier, we also want to push on the science of evaluation. What does good evaluation look like? What are the kind of methodologies that we need? What are the kind of methodologies that reflect the concerns and the capacities of communities in the global south? So we’re very excited to be doing this work with ITS Rio, who’s also one of the founding partners, and specifically to implement and advance this discussion on evaluations. We’ll be looking at labor market impacts in the global south. And finally, we’re going to be looking at evaluations of health information systems: do the existing generative AI tools and large language models that we see deliver for clinicians? Do they deliver for doctors? What more can they do to support the needs of healthcare professionals in the global south?

So those are the five kind of big flagship projects that we’re going to be launching within the coming year. We’re going to be very busy as you can see we have a lot that we’re going to try and get done and we’re really excited to be on this journey with all of you and would love to engage with all of you post the launch and see how we build this civil society and research infrastructure together. So with that I am delighted to welcome our keynote speakers first and I would like to give the floor to Mr. Abhishek Singh. Sir thank you for your continued support Thank you for the network and for your leadership on the India AI Summit.

Over to you, sir.

Mr. Abhishek Singh

Thank you, Urvashi. And first and foremost, I’d like to congratulate all the team, the network which has brought this together, this Global South Network for Trustworthy AI. A few months back, when we started discussing this concept with Urvashi, with Kalika, with my team, we felt: how do we go about it? Because safe and trusted AI is something that nobody disagrees with. Everybody says that whenever AI innovation is happening, we must ensure that we protect ourselves, we must kind of secure ourselves from the harms that can come from misuse of AI or from the risks that frontier AI poses. So yes, we did have Yoshua Bengio’s report, the scientific panel report, which is part of all the impact summits, the Action Summit and the Bletchley Park Summit, in which it has kind of…

identified the risks that frontier AI models pose. But what we do believe is that just identifying the risk is not sufficient. We need to think of how we address those risks. And for addressing those risks, you need to first have the technical tool, the capacity to identify those risks. What are the benchmarks on which you will evaluate them? Some of which Urvashi identified, like how do various models perform on multilingual benchmarks? Because very often, most models are evaluated on benchmarks which are predominantly in the English language. But if you look at India, a diverse country, we have 22 official languages and multiple other dialects. How do we evaluate how a model performs on various domains in prompts given in those languages?

We don’t have specific linguistic benchmarks. The same applies to many countries of the global south. So we felt that limited expertise exists in some institutions where research is going on, like CeRAI, which is one of them, where Professor Balaraman Ravindran is leading it. There are many labs, of course, whether it’s Microsoft Research or other labs wherein such work is going on. The AI Security Institute in the UK is doing some work in this direction. The OECD has been doing some work. But how do we ensure that we enable access to such resources, such tools, such studies for the larger global majority? So with that, this whole concept of creating a global south network for trustworthy AI came in.

And then we immediately had these conversations with all the key stakeholders, partners. We got a lot of support from almost all stakeholders. And along with that, the conversation for the New Delhi Frontier AI commitments was also going on, which Kalika from my team was leading. And luckily, we were able to announce it, in which all models committed to those two commitments about sharing usage data as also multilingual performance benchmarks. So that was a huge achievement. And I feel that the launch of this Global South Network for Trustworthy AI is a further step in that direction. How do we enable compliance to those commitments? How do we ensure that this data will be shared?

How do we create tools for evaluating models in various languages? How do we build up capacity in all countries of the global south? How do we share resources? How do we share knowledge across? So this is just the beginning, and I feel that, with support from all industry organizations, the frontier AI labs, the research organizations, and governments across the world, this can really, really grow into a resource that can be a global utility. So I compliment all the team which is involved in doing that. The launch of the network is the first step. But how do we action it out? How do we make it functional? How do we ensure that we get necessary support from all stakeholders?

Very often whenever we talk about trusted AI, whenever we talk about safe AI, some people think that we are trying to stifle innovation. The objective is not that. We always say that while the primary objective is to ensure diffusion of AI, primary objective is to ensure that more and more users benefit from the usage of AI. But at the same time, we need to do that in a responsible manner. We need to do it in a safe manner. We do need to do it in a trustworthy manner to limit the harm that can be caused. So this Global South Network for Trustworthy AI which is being launched will work in that direction. It will be an institution that will support not only India but the entire Global South.

And I am sure, with just the presence of all the speakers who are present in this session, the strong commitment that all industry and all countries and all multilateral organizations are showing to this initiative, I am sure this will get further strengthened in the days to come. There is a lot of work that Urvashi and team are taking on, and they are taking it on their own. But we will be there to provide all necessary support from the India AI Mission, and we will work towards ensuring that you get the same level of support from every participating country which is here. So thank you once again and congratulations for this launch, and we look forward to working towards the objectives in the near future.

Thank you.

Dr. Urvashi Aneja

Thank you, sir, for your remarks and most importantly for your support. I think it means a lot to us to be working so closely with the India AI mission and we’re really excited to be able to deliver on this promise. It’s now my honor to invite Ambassador Philip Thigo, the Special Envoy on Technology from the Republic of Kenya, to share his reflections.

Ambassador Philip Thigo

Thank you so much for this opportunity to share my reflections. And I noticed that this is really a women-led network, so again, congratulations, Urvashi and Rachel, for putting this together. I think before we celebrate the launch of the network, we must acknowledge that we are working with the right people.

I think we must also acknowledge the structural problem around the safety conversations and the safety infrastructure of the last three years. I think the global south has always been excluded from this conversation. I say this from a position of strength because Kenya is the only, Kenya I think, we’re the only member of the international network of AI safety institutes.

And so there’s a challenge there. And so I think that model that is not inclusive to a global majority, that in most cases bears the brunt and the impacts of AI, is not acceptable. And so this network, in my sense, is timely but also late. And so there’s almost an urgency that we need to work very closely in how we scale up what this network does. The second part, of course, is, as I mentioned, that a lot of the global majority countries that are there are not. They are the ones that not just bear the brunt of the models, but bear the adverse societal harms of the models. Kenya is one of the countries that uses one of the models.

and from the use cases we see that they use it for the wrong reasons. Emotional support or companionship, it’s not necessarily for anything meaningful or productivity. And so as the world advances, it therefore behoves us that we work with these frontier model companies to ensure that their models are safe beyond secure, but also are more trustworthy. The second part, of course, is that part of model evaluations assumes access. We now know that a lot of my colleagues who are doing model evaluations are doing it from an external point of view. So we need to be very clear that global majority countries, and by this when I say global majority countries, we also have a new global south in AI, because it’s just not the global majority.

We know the global north of artificial intelligence is two countries and a few companies. So we must, beyond this, extend to also include other colleagues, whether it’s from Europe, Western Europe, or Latin America. Safety must also go beyond technology towards socio-technical issues. We look at AI in countries like Kenya from mines to models, and so safety must also include environmental harms, biases, misinformation, disinformation, but also harms to water and the environment, and so we need full lifecycle accountability. It’s good to evaluate the models, but it’s also good to evaluate the footprints of the model quickly. There are four structural gaps that we see, and this is why I love this network. One is, yes, you want global majority folks to evaluate the models, but we have great teaming capacity gaps, so I hope that this network will look at this.

Secondly, I think, is also issues of access to compute. We can’t have global majority researchers trying to evaluate models without necessarily having access to compute to do that. The third part, of course, has been mentioned already: issues around linguistic and cultural mismatch, so we need to do that. The other part, of course, is benchmarking as governance power. Also, benchmarks are not neutral. Sometimes I think I like to be honest, because that’s what evaluation needs to do. And so we need, in most cases, to ensure that only a handful of institutions should not define what risks are measured, what harms are prioritized, and what safe performance means. Governance is about power. And we must deconcentrate that power even if it’s unintentional.

Finally, I think for me, evaluation is also about agency. And we must have a question of agency, a notion of agency around these models, but also including sovereign capability. As we know, a lot of your countries are trying to build sovereign models, but also sovereign capabilities across the stack. What should this network deliver, in my view? And I’ll humbly make these quick suggestions. One, I think, yes, good to have the network, but can we have regional nodes for this? Because Africa… I speak for Africa; Africa is not one country, it’s 54 countries, so expand to have nodes. Secondly, include multilingual benchmark data sets. There could be an interesting annual red teaming exercise. And potentially, why not publish a Global South AI Safety Report with an expansive definition of what safety is.

And I would be remiss if I don’t say how do we fit this into the multilateral process. We already have a global UN scientific panel on AI, and there’s a global dialogue on AI governance. I’m one of the champions for this, so hopefully we will get this in there. Finally, let’s close the accountability loop. How do all this ultimately matter for citizens? We can evaluate all we want, but if they don’t translate

Dr. Urvashi Aneja

Thank you, Ambassador, for highlighting the urgency of this work and also reframing the safety conversation for the Global South. And just to say we are planning to have regional hubs, and we do. And I think the point about how we engage with the multilateral system is very important, and we will have the Indian AC as part of our steering committee, and we hope we can work with the government of Kenya as well. And, of course, we have Professor Ravindran, who is part of the scientific council, so we will be relying on him as well. But thank you. Thank you for your remarks. And with that, I’d like to call our final keynote speaker for the day, who represents the UN Office for Digital and Emerging Technologies.

I’m pleased to invite Mr. Quintin Chou-Lambert, the Chief of Office and AI Lead, to deliver the next keynote address. Thank you.

Mr. Quintin Chou-Lambert

There is less, perhaps, infrastructure or energy connection to go around. So the concept of AI safety becomes less of a, or it kind of edges into this more contextual field, and that’s where these kinds of local perspectives, field-tested examples, can be very helpful to surface what we’re missing. And I’d say the idea of AI standards as technical standards doesn’t solve that issue, because a one-size-fits-all standard will not be contextually sensitive. So moving from this kind of scaling of a very concentrated, highly expensive model across a massive user base to more tailored, small language models suited to context turns the issue of AI safety into a more fuzzy kind of discussion and one which really needs empirical evidence.

And I think the trends in the institutional discussions go from Bletchley Park to Seoul, where there were also around 30 countries signing the declaration, to Paris, where you had 60-plus, and now here, over 100 countries engaging. We now have the United Nations Global Dialogue on AI Governance, which will include the whole 193 member states, informed by analysis from an independent international scientific panel on AI, which will look at the risks and also the opportunities and impacts of AI. And so as the conversation in these summit settings and at the international level has widened to include more countries and more people and covered more of humanity, the focus has, through the open source developments, been allowed to become much more encompassing of other perspectives.

And that’s why, to close and to echo Ambassador Thigo, these kinds of networks play a crucial role in connecting and bringing examples of the challenges that we face, cases of threats from various sources to local people, into discussions so that international discussions do not ignore or omit or discount the perspectives of the vast majority of people on the planet. Thank you very much.

Dr. Urvashi Aneja

Thank you, Mr. Chou, for those remarks. I’d now like to call our panelists onto the stage. Ms. Natasha Crampton, Vice President and Chief Responsible AI Officer at Microsoft. Dr. Rachel Sibande, Senior Program Officer, AI for Africa, at the Gates Foundation. Before you sit, we’re going to take one quick picture. Ms. Chenai Chair. I don’t see you. Oh, there you are. Yes, okay. Director of the Masakhane African Language Hub. Mr. Amir Banifatemi, Chief Responsible AI Officer at Cognizant. And last but certainly not least, Dr. Balaraman Ravindran, Head, Center of Responsible AI at IIT Madras. Yes, and can we get the keynote speakers as well? Thank you. As with all good things in life, we’re short on time.

But so let’s get started. Rachel, I’m going to start with you. Thank you. Where, according to you, do you feel like we still lack clarity on how safe and reliable AI systems are when they’re deployed in real-world contexts in the global south?

Dr. Rachel Sibande

Thank you. So, a couple of things, maybe two, three things. Number one is we need to redefine what is safe and what is harmful as far as AI models or applications are concerned, according to the social and cultural context that they are deployed in. And that means that having models or applications that are great at understanding the data or the patterns to generate content is not enough if they do not understand the social norms, the gender dynamics, the religious beliefs, the political sensitivities, or indeed even the humor, the slang or the tone, particularly now that voice is being used as a key channel for delivery of AI. So we need to redefine safety and harm in the context in which AI models are deployed.

So I think we’re missing that, but hopefully we get there. I think the second piece is around language. It’s not enough for a large language model to have strong translation capabilities. Language in itself is not just about vocabulary. It’s also about the lived meaning, the lived experiences. I come from a beautiful country called Malawi. It’s also called the warm heart of Africa. Now, if you’re deploying a model for pregnant mothers to access advisory messaging there, if the mother says their waters have broken, which clinically is a critical incident that should warrant that mother to be referred to a health facility, but if you translate that from the local language to English, which is where most of these large language models and applications have been benchmarked on, that will literally mean I have thrown away water.

So if the model is not trained to understand that context, then you will miss that flag. And then finally, I wanted to say that we also need to understand the harms that emerge as people use the AI models. Currently, I think much of the benchmarking is done on the content and predefined metrics. So final example, personally, I use my AI companion as my therapist. So it’s the one persona that knows a lot about my personality from all spheres, as a mother, as a career person, my finances, all of that. But at what point can we then be able to track whether I’m substituting my cognizance and cognitive capabilities with that AI model or application, or that I’m becoming overly emotionally dependent?

So I think there are those three areas that we’re missing, and hopefully we can get better at it. Thank you.

Dr. Urvashi Aneja

Thank you, Rachel, and thank you also for those powerful examples, because I think we’ve been saying some of this at almost a theoretical level, but I think those examples really bring home the gaps in terms of where the current safety conversation is. Chenai, from a civil society perspective, what do you feel companies or developers often miss about the safety implications of deploying AI systems in the global south?

Ms. Chenai Chair

That’s one thing they miss, the user experience. So on a more serious note, thank you, Vashi. So this is great to actually piggyback from what you said, and I was like, are we reading the same notes? So I think what really is missed when people are deploying some of these solutions is around the context in which they’re deploying the tool. And this is particularly looking at an example where on the African continent, there is high levels of gender inequality, a very youthful population with young people often unemployed, and also older people forgotten in actually the development of technologies. So I don’t know who we’re developing for, but sometimes we actually don’t consider that diversity and the inequalities that exist.

So you can find that sometimes when these tools are deployed, they actually further exacerbate a situation of inequality. And I’ll give you one example where perhaps an agricultural tool that has a voice system on it to provide farmers or women information on what to plant may actually have a male -sounding voice. And if in that context there’s high issues of gender -based violence or lack of trust, and the community members were not consulted in the design process, what it actually leads to is just exacerbating an already existing situation. And that is an example. That actually did happen when people were deploying Internet solutions for a community. Then secondly, also thinking about who gets left behind in deploying these solutions.

This is where language, as Rachel was mentioning, comes in. So on the African continent, we have over 2,000 languages that have been documented. Masakhane is only working on 50 of those African languages to build up quality data sets. So what you then find is when people are deploying technologies, even if they deploy them in something like Kiswahili, which now has a large number of data sets, people just don’t speak Kiswahili across East Africa. And particularly in Kenya, if you go to Nairobi, the Kiswahili spoken in Nairobi will be Sheng. Then you go to, it’s not even Kiswahili, as I’m being corrected. And then if you go to the coast in Mombasa, it will be completely different. So we have to actually take into account the context and nuance of what is being deployed.

And then lastly, the way in which the technology is actually used: if deployment doesn’t take into account the whole ecosystem of the end user, it can actually result in misuse. And I want to say specifically that there are two forms of misuse here. There are people who unintentionally carry out a problematic, harmful act online based on how they’re interacting with the technology, particularly where the content is in their own language, and we know that content moderation for the global majority is not sufficient, or that the people doing it are underpaid, as we’ve seen in the cases that came out about content moderators in Kenya. Then there’s intentional misuse. This is where we find gendered disinformation and the use of deepfakes to discredit people, particularly around election periods.

And now that AI is openly available and people can just type something and get something back, we are seeing that high level of deployment without thinking about the downstream impact. To close it off, because I’ve been talking about AI as if these harms are still to come: take tracking tags like AirTags. When they were deployed, it was great, I can track my missing bag on a flight. They have now been put in women’s bags or children’s bags by people they do not know, and those people track them. That’s already an act of surveillance that, if people had been consulted, might have been mitigated against. Yes, I do want to know where my bag is, but I don’t want to be tracked unknowingly.

Dr. Urvashi Aneja

Thanks, Chenai, for that, and also for bringing the gender dimension to the table and highlighting how quickly what seems like useful technology can become surveillance technology. I’d like to now bring the industry perspective into this conversation. So, Natasha, maybe I can start with you. As you scale systems globally, what are some of the hardest constraints that you as a company face in ensuring context-sensitive safety?

Ms. Natasha Crampton

Well, thanks for that question, Urvashi, and congratulations to everyone on the establishment of the network. I think it’s a really important step forward. When I think about Microsoft, I think about Microsoft’s scale, and our mission is really to empower every person and every organization in the world to achieve more. And so one of the challenges we face in scaling up our efforts here is how we take the very deep, careful, thoughtful, community-led evaluation work that animated a project like Samishka, which the CAIA organization, the Collective Intelligence Project, and Microsoft Research worked on together, and which developed very context-aware evaluations appropriate for the use case.

And how do we take that thoughtful work and really scale it up? Because we want to do that type of work for thousands of languages and probably millions of different cultural settings. So I think we really need to think about the system by which we are going to build multilingual and multicultural evaluations that we can run broadly. Sometimes when we think about evaluations, we don’t fully appreciate how sustained they need to be: you can’t just run them once before you release a product. You need to run the evaluations on an ongoing basis to understand how there might have been shifts. And so I really think for us we need to think about this as a system.

How are we going to build a sustainable, grounded, community -led system of scalable evaluation?

Dr. Urvashi Aneja

Thanks, Natasha. And I hope in some sense also the network can actually play at least part of that function in building that kind of coherence to the space of evaluation and helping us at least build a shared vocabulary and a shared set of methodologies together as organizations. Amir, what do you think needs to change, whether it’s internally within companies or externally in terms of the ecosystem that we’re operating in, to make such grounded evaluations, the kind that Natasha was talking about, become the standard practice for industry? Should they be the standard practice? And if so, how? How do we get there?

Mr. Amir Banifatemi

Thank you for that question. And first, congratulations. I’m happy to be part of this network and to support it. I think Natasha mentioned part of the foundational questions. From a Cognizant chief responsible AI perspective, we work with a lot of companies and governments on deploying new scenarios, call them systems or applications or anything else. The concept of safety, as was mentioned, is diffuse. It’s not very clear what we’d call safety. So evaluating the underlying element that needs to be changed or addressed is not obvious. When we talk about models, a model is not just one thing that you deploy: it goes into an application, there’s a system, infrastructure, network access, API-connected data access.

All of them are contextually different, as was mentioned before. And one of the problems, you didn’t ask me about the problem, but one of the problems is a lack of imagination. People who are building systems have no awareness of the context in which those situations occur, how they occur, what the causes are, and how likely a solution is to work. Absent that, all this context, of which language and culture are a part, is not captured. And without that, there is very little capability to address it from a regulation or incentive perspective. Safety, on the other side, is not costed into financial systems and so forth.

There is no penalty for not being safe. So as long as there is no constraint, no strong mandate, to treat safety as part of the cost structure, companies will not pay attention, or not enough attention. If it’s not part of the financial planning and the processes, it won’t happen. So there is a disconnect between what we do as enterprises to make sure that systems and platforms are properly built and deployed, and the environments in which they are deployed. At the same time, there is a talent-inclusion gap: the talent building those safety conversations is not the talent that is exposed to those issues.

So that absent voice is also a piece that needs to be addressed, not just from a skilling perspective but also from an integration perspective. And finally, the infrastructure part. Infrastructure is not just systems, models, and data; it is also the tooling and the evaluation. It was mentioned that evaluation has to be done differently, and if you don’t know what harm or safety means, evaluation has to be different. There is probably an opportunity here to come up with a series of evaluation tools built not only for model design but also for system deployment: as we go from pilot to scaling, what issues occur, what examples arise, and what incidents happen. Incident reporting is a huge opportunity here because, nested in the reporting, it will capture some of the hidden elements: control issues, data access, absence of regulation, or anything else.

Finally, there is a latency issue, and you mentioned it, probably correctly. In the handful of countries of the global north you have institutional frameworks, you have the rule of law, you have a very active civil society, and you have legal frameworks that create an accelerated feedback loop on incidents and safety. In most global south countries these mechanisms don’t exist, which delays the feedback loop and compounds the possible harm. So there is probably an opportunity to figure out how we can accelerate the learning capabilities and the speed at which we capture knowledge and data, tied to tools that need to be implemented and deployed, whether open source or free access, and built with the contextual environment and the talent pool, so that the global south has ownership. All these pieces are important, and the network can incentivize the different pieces so that they complete each other and the global south understands better where safety issues are, where harm can happen, and what corrections can be made, at the rhythm that is needed, because rhythms are not exportable, and what we do in one country does not transfer to another.

And finally the network could probably help bring it together.

Dr. Urvashi Aneja

Thank you for laying that out, and for pointing out how all the pieces link to each other, that we can’t go at this at one level alone, and the importance of capacity across all of them. Professor Ravindran, AI deployment is accelerating in the global south, in India and in many other countries as well. But at the same time, so far we haven’t seen as much investment in safety and safety infrastructure. Would you agree? You’re actually asking an academic about investment? Sure, of course there’s not enough money. Why not, and how do we change it?

Dr. Balaraman Ravindran

So I’m going to answer a different question. Sure, perfect, like a true academic. I’m sorry, I’ll connect it back to what you asked. There are a whole lot of initiatives getting announced at the summit, and also things that I discovered while having various conversations: there are multiple networks being launched or already in operation. There is a network in Africa looking at capacity building; there is a network in China, apparently, which none of us seem to have heard about, being launched on AI safety and capacity building; there is our network that is getting launched; and there is the UN initiative on building a network of capacity-building institutes for the global south, which we had a meeting about this morning as well. So there are just too many of these initiatives getting launched.

And we have to figure out a way to coordinate operations among these initiatives as well. I think that would be a great multiplier, instead of everybody going out and saying, okay, let me see what small piece of the pie I can get so that I can do these activities. And if you remember our initial conversations about when we wanted to start this thing, the idea was that this would be one node in the global AISI network. I can’t even say global network of safety institutes anymore, can I? They’re not even safety institutes. So AISIs, whatever AISIs are now: this should be one node in that network which represents the unheard voices, because, as the ambassador was pointing out, except for Kenya.

And of course India, I presume. But we really don’t have safety institutes in the global south that can participate in the dialogue. So that kind of larger collaboration framework is something that we should enable. Even if we go to Gates, how many different networks would Gates want to spend their money on? If we can say that there is one coordinated effort happening, that would be a great way of harmonizing our efforts. I can turn it back to the question. Thank you.

Dr. Urvashi Aneja

No, I mean, I think you raised a really important issue about harmonizing these efforts, and about how this network can play a really important role in the larger AISI network. Luckily, the S remains the same, so we can still go with the acronym, I guess, for the safety network. We’re almost at time, so let’s do one quick rapid-fire round with all the panelists, and maybe, Natasha, I can start with you. What is the one concrete step your institution, Microsoft, could take in the next year to strengthen AI safety in the global south?

Ms. Natasha Crampton

Well, I’m looking forward to making good on the New Delhi Frontier AI commitments that Microsoft made, which will help advance multilingual and multicultural evaluation work, as well as sharing data that helps policy makers understand AI adoption within their countries and make the sorts of choices and policy interventions that bring broader access. So, if I can be sneaky, that counts as one thing. The second thing I’m really excited about is that we’re making large infrastructure investments across the global south, to the tune of 50 billion dollars by the end of this decade. That infrastructure, as Amir and others on the panel have mentioned, is essential to being able to build up this scaled system of sustainable evaluation, so I’m looking forward to those investments too.

Dr. Urvashi Aneja

Thank you.

Dr. Balaraman Ravindran

Is that a fire alarm or something?

Dr. Urvashi Aneja

No, no, no, they’re telling us that we have to wrap up I think.

Dr. Balaraman Ravindran

Okay, great, so wrapping up: we have to get the work going. Talking about it is one thing, but we need to actually start this collaboration and get these research efforts going, and we’d love to reach out to partners across the globe. In fact, I’m part of the other UN network as well, and we have been talking about looking at problems that would necessarily require cross-border collaboration, as opposed to problems that we would solve in our own geography anyway and then just work with somebody else to solve in two geographies. If we can pick problems that necessarily require people across borders to collaborate, I think that will certainly drive this, and it will also put forth the importance of having the network itself: not just information sharing, but problem solving that can be done only across the network.

Dr. Urvashi Aneja

Thank you. Rachel, 30 seconds.

Dr. Rachel Sibande

In 30 seconds: from the foundation side, I think it is to really institutionalize the evaluation of the safety of AI solutions right at deployment, because what we see now is that safety issues mostly emerge post-deployment. Thank you.

Ms. Chenai Chair

From the hub side, we actually do have a benchmarking initiative going on this year, so we will be contributing to the African benchmarking work, and that will be our output and contribution.

Dr. Urvashi Aneja

Amazing, looking forward to that. Thank you, Chenai. And Amir, last but not least.

Mr. Amir Banifatemi

We are already working, with our two labs, one in Bangalore and one in San Francisco, on safety evaluations, mostly on incident reporting, and we have already made that work culturally contextual. So I hope we can be helpful by providing open-source tools for evaluation, disseminating them, and making them accessible to the public and available to all partners.

Dr. Urvashi Aneja

Thank you.

Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Dr Urvashi Aneja formally launched the Global South Network for Trustworthy AI at the India AI Impact Summit”

The knowledge base lists Dr Urvashi Aneja as a participant in the launch of the Global South AI Safety Research Network, confirming the launch event and her role [S3].

Confirmed (medium)

“Independent civil‑society organisations are uniquely suited to surface risks invisible to laboratory testing”

Civil-society organisations are described as able to bridge the gap between citizens and governments by conducting independent assessments and surfacing risks that may not appear in lab settings [S49].

Additional Context (medium)

“The network will act as connective tissue between the global governance architecture, the global safety infrastructure, and what’s happening on the ground”

The knowledge base notes that such networks play a crucial role in connecting local challenges to global discussions and that they bring unique regional expertise to facilitate sharing and capacity-building across the Global South [S16] and [S128].

Additional Context (medium)

“The Global South is under‑represented in global safety and governance infrastructures, with many countries lacking their own oversight bodies”

Discussion in the knowledge base highlights infrastructural barriers and a lack of assurance mechanisms for many Global South countries, underscoring under-representation in safety and governance frameworks [S34].

Confirmed (high)

“Ambassador Philip Thigo is involved in the network and echoed the urgency of addressing AI safety in the Global South”

Ambassador Philip Thigo is listed among the participants in the launch of the Global South AI Safety Research Network, confirming his involvement and support for the initiative [S3].

External Sources (132)
S1
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — -Balaraman Ravindran- Professor at IIT Madras (India), member of the UN scientific panel
S2
Why science metters in global AI governance — -Balaraman Ravindran- Professor at IIT Madras, member of International Independent Scientific Panel
S3
Towards a Safer South Launching the Global South AI Safety Research Network — – Dr. Balaraman Ravindran- Dr. Urvashi Aneja
S4
Responsible AI for Shared Prosperity — -Philip Thigo- His Excellency Ambassador, Special Technology Envoy of the Government of Kenya
S5
Philip Thigo named Kenya’s special envoy for technology — Philip Thigo, the Executive Director for Africa at Thunderbird School of Global Management, has been appointed as the Sp…
S6
https://dig.watch/event/india-ai-impact-summit-2026/toward-collective-action_-roundtable-on-safe-trusted-ai — And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on…
S7
Open Forum #30 High Level Review of AI Governance Including the Discussion — – **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology Abhishek Sing…
S8
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S9
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S10
Towards a Safer South Launching the Global South AI Safety Research Network — – Dr. Rachel Sibande- Ms. Chenai Chair- Ambassador Philip Thigo – Ms. Natasha Crampton- Dr. Rachel Sibande
S11
Published by DiploFoundation (2011) — Malta: 4th Floor, Regional Building Regional Rd. Msida, MSD 2033, Malta Switzerland: Rue de Lausanne 56 CH-1202 Ge…
S12
Towards a Safer South Launching the Global South AI Safety Research Network — -Ms. Chenai Chair- Director of the Masakane African Language Hub
S13
Responsible AI for Shared Prosperity — -Chenai Chair- Director of the Mazakani African Languages Hub -Co-Moderator- Role/title not specified
S14
IGF to GDC- An Equitable Framework for Developing Countries | IGF 2023 Open Forum #46 — Moderator:based intergovernmental international organization dedicated to promoting and supporting the development of th…
S15
Towards a Safer South Launching the Global South AI Safety Research Network — – Dr. Urvashi Aneja- Mr. Quintin Chou-Lambert
S16
https://dig.watch/event/india-ai-impact-summit-2026/towards-a-safer-south-launching-the-global-south-ai-safety-research-network — I’m pleased to invite Mr. Quenchen Chow Lambert, the Chief of Office and AI Lead, to deliver the next keynote. Thank you…
S17
https://dig.watch/event/india-ai-impact-summit-2026/towards-a-safer-south-launching-the-global-south-ai-safety-research-network — Thank you, Mr. Chow, for those remarks. I’d now like to call our panelists onto the stage. Ms. Natasha Crampton, Vice Pr…
S18
Towards a Safer South Launching the Global South AI Safety Research Network — – Mr. Abhishek Singh- Ms. Natasha Crampton- Ms. Chenai Chair – Ms. Natasha Crampton- Dr. Rachel Sibande
S19
Multi-stakeholder Discussion on issues about Generative AI — Natasha Crampton:So, I’m Natasha Crankjian from Microsoft. I’m incredibly optimistic about AI’s potential to help us hav…
S20
Towards a Safer South Launching the Global South AI Safety Research Network — – Dr. Urvashi Aneja- Ambassador Philip Thigo
S21
Towards a Safer South Launching the Global South AI Safety Research Network — – Ambassador Philip Thigo- Mr. Amir Banifatemi
S22
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to …
S23
morning session — Aouad argues that without proper regulations and safeguards, the population could experience negative consequences. Furt…
S24
https://dig.watch/event/india-ai-impact-summit-2026/ai-safety-at-the-global-level-insights-from-digital-ministers-of — Thank you. Certainly, the reason that I continue to be involved with this is because… under Yoshua’s chairmanship of t…
S25
Global AI Policy Framework: International Cooperation and Historical Perspectives — Velasco explains the complementary roles of the UN’s two new AI governance mechanisms, with the scientific panel offerin…
S26
Leveraging the UN system to advance global AI Governance efforts — Gilbert Houngbo highlights the imperative role of the United Nations in spearheading global coordination efforts, thereb…
S27
Main Session 2: The governance of artificial intelligence — Importance of bringing voices from the global south and underrepresented communities to governance dialogues
S28
What is it about AI that we need to regulate? — Ensuring Better Representation of Developing and Least-Developed Countries in Global Digital GovernanceThe question of h…
S29
GC3B: Mainstreaming cyber resilience and development agenda | IGF 2023 Open Forum #72 — One of the main arguments put forward at the conference was the necessity for individuals and nations to be aware of the…
S30
Advancing Scientific AI with Safety Ethics and Responsibility — And I think, I think, So, just in terms of paradigm change that we are seeing and that you mentioned, is that there need…
S31
WS #123 Responsible AI in Security Governance Risks and Innovation — Jingjie He: So I think the inclusive engagement across stakeholders is essential for the effective global governance of …
S32
AI in Africa: Beyond the algorithm — ### The Systematic Exclusion of the Global South
S33
WS #82 A Global South perspective on AI governance — AUDIENCE: Ends up. We cannot hear. Rely on ISO 31,000 is what they see as the kind of framework for risk assessments…
S34
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Develop multilingual evaluations and benchmarks that account for diverse language ecosystems
S35
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S37
Shaping the Future AI Strategies for Jobs and Economic Development — Thank you. and how safety is governed under real constraints, how AI systems actually reach the people and states often …
S38
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S39
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The tone was pragmatic and solution-oriented throughout, with speakers acknowledging both challenges and opportunities i…
S40
Smart Regulation Rightsizing Governance for the AI Revolution — Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that g…
S41
MASTERPLAN FLAGSHIP PROGRAMMES — | Outcomes | Objectives …
S42
MASTERPLAN FLAGSHIP PROGRAMMES — | Outcomes | Objectives …
S43
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S44
WS #479 Gender Mainstreaming in Digital Connectivity Strategies — Ivy Tuffuor Hoetu: Yes, thank you. And picking up from where Dr. landed, it’s true we need the metrics and we need the d…
S45
WS #231 Address Digital Funding Gaps in the Developing World — The conversation concluded with calls for greater coordination among stakeholders to avoid duplication of efforts and ma…
S46
Successes &amp; challenges: cyber capacity building coordination | IGF 2023 — Furthermore, the discussion emphasizes the importance of coordinating with multiple stakeholders or through bilateral in…
S47
Resilient infrastructure for a sustainable world — Collaboration and Partnership Importance Ng explains that UNDRR depends on partnerships due to being a small organizati…
S48
Part 3: ‘Readiness across the spectrum: Countries’ — The EU strategy’s emphasis on the 2030 Digital Agenda aligns closely with the IMI’s Access pillar, providing a strong fo…
S49
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S50
Fireside Conversation: 02 — Timeline expectations, hype, and the AGI narrative
S51
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S52
Advancing Scientific AI with Safety Ethics and Responsibility — High level of consensus with significant implications for AI governance policy. The agreement across speakers from diffe…
S53
WS #162 Overregulation: Balance Policy and Innovation in Technology — Regulation is necessary but should not stifle innovation
S54
WS #283 AI Agents: Ensuring Responsible Deployment — Balance needed between privacy protection and innovation Despite representing different sectors (industry, government, …
S55
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — High level of consensus with significant implications for AI governance policy. The agreement among industry leaders fro…
S56
Main Session | Policy Network on Artificial Intelligence — These key comments shaped the discussion by broadening its scope beyond technical and policy considerations to include e…
S57
Towards a Safer South Launching the Global South AI Safety Research Network — We know in the global north of artificial intelligence is two countries and a few companies. So we must, beyond this, ex…
S58
From Technical Safety to Societal Impact Rethinking AI Governanc — Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate m…
S59
DiploNews – Issue 312 – 15 November 2016 — News headlines are featuring more and more cases of severe cyber incidents, some openly attributed to states and their i…
S60
The impact of big data on geopolitics, negotiations, and diplomacy — At a global level, data is addressed by a wide range of organisations. Within the World Trade Organization (WTO), data f…
S61
Tech Diplomacy: New Impulses for the Geneva Ecosystem? (Science Diplomacy Week) — Mr Jean-Yves Art, Senior Director, Strategic Partnerships, Microsoft: Companies within the tech sector have a responsibi…
S62
https://dig.watch/event/india-ai-impact-summit-2026/towards-a-safer-south-launching-the-global-south-ai-safety-research-network — Finally, I think for me, evaluation is also about agency. And we must have a question of agency, a notion of agency arou…
S63
Building the Next Wave of AI_ Responsible Frameworks &amp; Standards — I think there is a significant role the governments, innovation hubs, academia, and startups have to play in developing …
S64
Who Watches the Watchers Building Trust in AI Governance — Thank you, Greg. And again, congratulations. Stephen was the publisher of the great report. And I think, first of all, I…
S65
Networking Session #74 Digital Innovations Forum- Solutions for the Offline People — Better coordination and collaboration among donors is needed to avoid duplication of efforts and maximize impact.
S66
Opening address of the co-chairs of the AI Governance Dialogue — The speakers demonstrate strong consensus on the fundamental principles of AI governance, including the need for inclusi…
S67
WS #335 Global Perspectives on Network Fees and Net Neutrality — Despite representing different stakeholder groups (regulator, private sector, civil society), these speakers all emphasi…
S68
Can we test for trust? The verification challenge in AI — **Anja Kaspersen** stressed the importance of bringing technical professional organizations into governance conversation…
S69
Panel Discussion Data Sovereignty India AI Impact Summit — “One, of course, is basically the policies need to evolve along with the infrastructure.”[37]. “As far as governments ar…
S70
Beyond universality: the meaningful connectivity imperative | IGF 2023 — However, concerns remain regarding device affordability and availability, the need for inclusivity in content and servic…
S71
AI That Empowers Safety Growth and Social Inclusion in Action — “investors should ask whether there is clear board level responsibility on AI risk whether executive incentives are alig…
S72
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And how do we demonstrate that the risks have been managed well? And that is where the assurance ecosystem that Rebecca …
S73
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — This comment fundamentally redirected the conversation from discussing what rules to impose on companies to how to creat…
S74
Towards a Safer South Launching the Global South AI Safety Research Network — “And while the opportunities are immense, in many of these contexts, many of these contexts are also marked by low insti…
S75
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And this is almost like a test for me of kind of saying. These names of these institutions through this panel. But they …
S76
Developing capacities for bottom-up AI in the Global South: What role for the international community? — ## Areas of Different Emphasis and Debate ## Practical Applications and Examples ## Unresolved Questions and Future Di…
S77
WS #362 Incorporating Human Rights in AI Risk Management — Jhalak Mrignayani Kakkar: Thank you. Thanks, Min. I think there’s a lot of work happening globally on human rights due d…
S78
What is it about AI that we need to regulate? — Ensuring Better Representation of Developing and Least-Developed Countries in Global Digital GovernanceThe question of h…
S79
Shaping the Future AI Strategies for Jobs and Economic Development — Thank you. and how safety is governed under real constraints, how AI systems actually reach the people and states often …
S80
WS #100 Integrating the Global South in Global AI Governance — Overall, the panel emphasized that while challenges remain, there are promising avenues to increase meaningful inclusion…
S82
How Multilingual AI Bridges the Gap to Inclusive Access — “And I think on these three capabilities, we need to jointly increase, and whoever doesn’t have it should be able to eas…
S83
Smart Regulation Rightsizing Governance for the AI Revolution — Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that g…
S84
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Fundamental infrastructure challenges—including limited computing power, inadequate connectivity, and capacity gaps—requ…
S85
WS #462 Bridging the Compute Divide a Global Alliance for AI — This comment deepened the discussion by introducing the concept of compound disadvantages and helped other panelists rec…
S86
Benchmarking countries’ progress globally on closing the gender digital divide ( Women in Digital Transformation) — Data generation is essential to be able to address these gaps, especially gender gaps
S87
Future-Ready Education: Enhancing Accessibility &amp; Building | IGF 2023 — In conclusion, the analysis underscores the need for equitable access to the internet to ensure inclusive and quality di…
S88
MASTERPLAN FLAGSHIP PROGRAMMES — | Outcomes | Objectives …
S89
WS #110 AI Innovation Responsible Development Ethical Imperatives — Guilherme Canela de Souza Godoi: Thank you very much. First and foremost, thank you so much for the invitation to be her…
S90
MASTERPLAN FLAGSHIP PROGRAMMES — | Outcomes | Objectives …
S91
WS #231 Address Digital Funding Gaps in the Developing World — The conversation concluded with calls for greater coordination among stakeholders to avoid duplication of efforts and ma…
S92
Resilient infrastructure for a sustainable world — Collaboration and Partnership Importance Ng explains that UNDRR depends on partnerships due to being a small organizati…
S93
BPF: CYBERSECURITY — By working together strategically, they can pool resources, expertise, and knowledge to better respond to and mitigate c…
S94
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/5/OEWG 2025 — Dominican Republic: Thank you, Chairman. The Dominican Republic reiterates its firm conviction that capacity building …
S95
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — ### Regional and Multilingual Strategies Amrita Choudhury emphasised the need to “keep processes open and inclusive wit…
S96
Opening and introduction — The AU’s commitment to working with Member States in adopting the meeting’s recommendations was reaffirmed, alongside th…
S97
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S98
Launch / Award Event #159 Book Launch Netmundial+10 Statement in the 6 UN Languages — The tone was consistently celebratory, appreciative, and forward-looking throughout the session. Participants expressed …
S99
Scaling Innovation Building a Robust AI Startup Ecosystem — The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with the awards cer…
S101
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S102
WS #225 Bridging the Connectivity Gap for Excluded Communities — The discussion maintained a professional but increasingly urgent tone throughout. It began optimistically with solution-…
S103
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S104
New Technologies and the Impact on Human Rights — This IGF session demonstrated a maturing of debates around technology and human rights, with stakeholders from different…
S105
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — The tone was pragmatic and solution-oriented, with speakers expressing both frustration with past failures and cautious …
S106
Tackling disinformation in electoral context — The tone of the discussion was largely collaborative and solution-oriented, with panelists sharing insights from differe…
S107
Panel 5 – Ensuring Digital Resilience: Linking Submarine Cables to Broader Resilience Goals — The tone was largely collaborative and solution-oriented. Panelists built on each other’s points and offered complementa…
S108
Panel 2 – Anticipating and Mitigating Risks Along the Global Subsea Network  — The discussion maintained a professional, collaborative tone throughout, with participants demonstrating technical exper…
S109
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S110
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S111
Closing Ceremony — The overall tone was positive and forward-looking. Speakers expressed gratitude to the hosts and participants, emphasize…
S112
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — The tone of the discussion was generally optimistic and forward-looking, with speakers emphasizing the need for urgent a…
S113
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S114
Friday Opening Ceremony: Summit of the Future Action Days — The overall tone was inspirational, hopeful and energetic. Speakers aimed to motivate and empower youth attendees while …
S115
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — Tomoyuki Naito:Ladies and gentlemen, good evening. I know this is today’s last session, that’s why not over 100 people c…
S116
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — And we’ll hear some of that in addition to global elements. A lot of that is also having a lot of innovation that will r…
S117
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — “For the next session, we have a fireside chat between Mr. Vivek Kaneja, Executive Director, CDAT, Mr. Nitin Bajaj, Dire…
S118
Global South’s role in AI governance explored at IGF 2024 — The inclusion of the Global South, particularly theMENA region, in AI governance emerged as a key focus in a recentpanel…
S119
Main Session on Artificial Intelligence | IGF 2023 — There is inadequate representation from the Global South in these discussions
S120
Harnessing AI for Child Protection | IGF 2023 — Artificial Intelligence is giving a lot of opportunities in various fields such as education, law, etc.
S121
WS #205 Contextualising Fairness: AI Governance in Asia — Milton Mueller: Can you hear me? Am I on? Okay, thank you very much. Yeah, I am going to, yeah, first issue you a f…
S122
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — The digital transformation increasingly contributes to greenhouse gas (GHG) emissions. For example, generative artificia…
S123
From India to the Global South_ Advancing Social Impact with AI — That itself is offensive because what are we trying to say? So I think those things will get blurred because opportuniti…
S124
Development of Cyber capacities in emerging economies | IGF 2023 Open Forum #6 — Central America is facing significant challenges in the field of cybersecurity. The region is underdeveloped in terms of…
S125
Open Forum #13 Bridging the Digital Divide Focus on the Global South — ICANN co-chair Tripti Sinha emphasized that the divide encompasses participation and inclusiveness beyond mere access, a…
S126
Main Topic 4: Transatlantic rift on Freedom of Expression — Yasur argues that civil society organizations are uniquely positioned to bridge the gap between platforms, governments, …
S127
Global network strengthens AI measurement and evaluation — Leaders around the worldhave committedto strengthening the scientific measurement and evaluation of AI following a recen…
S128
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-cybersecurity-_-india-ai-impact-summit — The network would bring unique expertise and perspectives from different regions of the world. This diversity would only…
S129
WS #98 Towards a global, risk-adaptive AI governance framework — Paloma Villa Mateos: Yeah, thank you. Thank you. Can you listen to me well? It’s okay? Okay, great. Well, thank you. W…
S130
AI Meets Cybersecurity Trust Governance &amp; Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S131
WS #97 Interoperability of AI Governance: Scope and Mechanism — Mauricio Gibson: Thank you. Yeah, I mean, just building on what Chet was saying, I think, and what you were saying, Olg…
S132
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — The role of international organisations, such as the OECD, is highlighted by one speaker in facilitating cooperation on …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Urvashi Aneja
3 arguments · 129 words per minute · 2,383 words · 1,102 seconds
Argument 1
Risk of amplifying existing harms without adequate safeguards (Dr. Urvashi Aneja)
EXPLANATION
Dr. Aneja warns that while AI offers great opportunities in the Global South, the same contexts also feature low institutional capacity and deep inequities. Without proper safeguards, AI could worsen existing harms rather than alleviate them.
EVIDENCE
She notes that AI systems are being rapidly deployed in critical sectors such as healthcare, education, judiciary and government across the Global South, and that these contexts are marked by low institutional capacity, deep societal inequities, popularization and low literacy, which together mean that the risks and harms are immense and could exacerbate existing problems if not addressed [11-14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for proper regulations and safeguards to prevent negative consequences is highlighted in [S23], and systematic exclusion of the Global South from safety governance, which can exacerbate harms, is discussed in [S32].
MAJOR DISCUSSION POINT
Risk of amplifying existing harms without adequate safeguards
AGREED WITH
Mr. Abhishek Singh
Argument 2
Network will build independent evidence, contextual evaluations, and act as a bridge to global governance (Dr. Urvashi Aneja)
EXPLANATION
The Global South Network for Trustworthy AI will generate real‑world evidence, conduct contextual assessments, and connect local insights with global AI governance structures. This aims to ensure that safety standards reflect the linguistic, cultural and infrastructural realities of the Global South.
EVIDENCE
Dr. Aneja describes the network’s purpose to evaluate the real-world impact of AI systems, build trust and oversight mechanisms tailored to different contexts, and elevate Global South perspectives in global AI governance forums [22-23]. She also explains that the network will serve as connective tissue between global safety infrastructure and on-the-ground realities, providing visibility to technology companies, governments and international organisations [37-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN mechanisms that bridge scientific evidence with policy and promote inclusive governance are described in [S25] and [S26], while the importance of bringing Global South voices into AI governance is emphasized in [S27].
MAJOR DISCUSSION POINT
Network will build independent evidence, contextual evaluations, and act as a bridge to global governance
AGREED WITH
Dr. Balaraman Ravindran, Mr. Quintin Chou‑Lambert
Argument 3
Engagement with UN‑led AI governance processes and inclusion of Global South voices are essential (Dr. Urvashi Aneja)
EXPLANATION
Dr. Aneja emphasizes the importance of linking the network’s work with multilateral AI governance mechanisms, including UN‑led dialogues and the Indian AI Council. This ensures that Global South perspectives are represented in global policy making.
EVIDENCE
She thanks the ambassador for highlighting urgency and notes that the network will have regional hubs, will involve the Indian AI Council as part of its steering committee, and will collaborate with the Kenyan government and Professor Ravindran on the scientific council [183-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of UN-led dialogues and the need for inclusive participation of developing countries are outlined in [S25], [S26] and [S27].
MAJOR DISCUSSION POINT
Engagement with UN‑led AI governance processes and inclusion of Global South voices are essential
Mr. Abhishek Singh
4 arguments · 181 words per minute · 892 words · 294 seconds
Argument 1
Identifying risks is not enough; tools and benchmarks are needed to address them (Mr. Abhishek Singh)
EXPLANATION
Mr. Singh argues that merely recognizing AI risks does not solve the problem; concrete technical tools, capacity building and appropriate benchmarks are required to mitigate those risks, especially in multilingual contexts.
EVIDENCE
He references the Yoshua Bengio report and other scientific panel reports that identify frontier AI risks, then stresses the need for technical tools, capacity to identify risks, and benchmarks such as multilingual performance tests, noting the lack of specific linguistic benchmarks for India’s 22 official languages and many other Global South countries [68-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The New Delhi Frontier AI commitments call for strengthened multilingual and contextual evaluations, underscoring the need for concrete tools and benchmarks [S9]; broader discussions on decentralized checks and technical tool development are found in [S30] and [S3].
MAJOR DISCUSSION POINT
Identifying risks is not enough; tools and benchmarks are needed to address them
AGREED WITH
Ambassador Philip Thigo, Ms. Natasha Crampton, Mr. Amir Banifatemi, Dr. Urvashi Aneja
DISAGREED WITH
Ambassador Philip Thigo, Mr. Amir Banifatemi
Argument 2
Network enables compliance with New Delhi Frontier AI commitments and capacity‑building across countries (Mr. Abhishek Singh)
EXPLANATION
The launch of the network is presented as a mechanism to help implement the New Delhi Frontier AI commitments, which include sharing usage data and multilingual benchmark performance, while also building capacity throughout the Global South.
EVIDENCE
He explains that conversations with stakeholders led to the New Delhi Frontier AI commitments, where models agreed to share usage data and multilingual performance benchmarks, and that the network will support compliance, data sharing, tool creation and capacity-building across countries [87-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The announcement of the New Delhi Frontier AI commitments, which include multilingual evaluation requirements, directly supports this claim [S9].
MAJOR DISCUSSION POINT
Network enables compliance with New Delhi Frontier AI commitments and capacity‑building across countries
Argument 3
Current benchmarks are English‑centric; multilingual benchmarks are essential for accurate assessment (Mr. Abhishek Singh)
EXPLANATION
Mr. Singh points out that most AI models are evaluated on English‑only benchmarks, which fails to capture performance in the many languages spoken across the Global South, making multilingual benchmarks a necessity.
EVIDENCE
He notes that most models are evaluated on predominantly English benchmarks and highlights India’s 22 official languages and many dialects, stressing the need for evaluation on prompts in those languages because specific linguistic benchmarks are lacking [73-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The prevalence of English-only benchmarks and the lack of linguistic benchmarks are noted in [S3]; initiatives for multilingual evaluation are described in [S34] and cultural-linguistic diversity considerations in [S35].
MAJOR DISCUSSION POINT
Current benchmarks are English‑centric; multilingual benchmarks are essential for accurate assessment
AGREED WITH
Dr. Urvashi Aneja, Ms. Natasha Crampton, Ambassador Philip Thigo, Ms. Chenai Chair, Dr. Rachel Sibande
Argument 4
Responsible AI diffusion should not be framed as stifling innovation; safety must coexist with broad benefit
EXPLANATION
Singh emphasizes that the goal of trustworthy AI is to ensure that more users benefit from AI while maintaining responsibility, not to hinder technological progress. He stresses balancing diffusion with safety safeguards.
EVIDENCE
He notes that some people mistakenly think trusted AI aims to stifle innovation, but the primary objective is to expand AI benefits responsibly and safely for all users [105-108].
MAJOR DISCUSSION POINT
AI safety should complement, not hinder, innovation
Ambassador Philip Thigo
6 arguments · 196 words per minute · 1,016 words · 309 seconds
Argument 1
Global South is systematically excluded from AI safety governance structures (Ambassador Philip Thigo)
EXPLANATION
The ambassador asserts that AI safety conversations and institutions have historically left out Global South nations, resulting in a governance model that does not reflect the majority of AI users and the harms they experience.
EVIDENCE
He states that the Global South has always been excluded from safety conversations, that Kenya is the only member of the international network of AI safety institutes, and that a model not inclusive of the global majority is unacceptable [138-141].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Systematic exclusion of the Global South from AI safety discussions is documented in [S32]; the need for Global South representation in governance is highlighted in [S27] and [S28].
MAJOR DISCUSSION POINT
Global South is systematically excluded from AI safety governance structures
AGREED WITH
Dr. Urvashi Aneja, Ms. Chenai Chair, Mr. Amir Banifatemi, Dr. Rachel Sibande
Argument 2
Proposes regional nodes, multilingual benchmark datasets, and an annual AI safety report (Ambassador Philip Thigo)
EXPLANATION
He suggests expanding the network with regional nodes across Africa’s 54 countries, creating multilingual benchmark datasets, organizing red‑teaming exercises, and publishing an annual Global South AI Safety Report to broaden participation and accountability.
EVIDENCE
He recommends regional nodes for Africa, the creation of multilingual benchmark datasets, an annual red-teaming exercise, and publishing a Global South AI Safety Report with an expansive definition of safety [170-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for multilingual benchmark datasets and regional capacity building are echoed in [S34]; the importance of cultural and linguistic diversity is discussed in [S35]; considerations of smaller-footprint solutions for resource-constrained settings appear in [S36].
MAJOR DISCUSSION POINT
Proposes regional nodes, multilingual benchmark datasets, and an annual AI safety report
AGREED WITH
Dr. Urvashi Aneja, Mr. Abhishek Singh, Ms. Natasha Crampton, Ms. Chenai Chair, Dr. Rachel Sibande
Argument 3
Benchmark design reflects power; only a few institutions should not dictate risk definitions (Ambassador Philip Thigo)
EXPLANATION
The ambassador argues that benchmarks are not neutral; when a handful of institutions define risk metrics, they concentrate governance power, which can marginalize the Global South.
EVIDENCE
He notes that benchmarks are not neutral, that only a handful of institutions should not define what risks are measured, what harms are prioritized, and what safe performance means, emphasizing that governance is about power and must be de-concentrated [161-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The argument that benchmark authority should not be concentrated in a handful of institutions is made explicit in [S3]; concerns about centralized governance and the need for decentralized checks are raised in [S30].
MAJOR DISCUSSION POINT
Benchmark design reflects power; only a few institutions should not dictate risk definitions
DISAGREED WITH
Mr. Abhishek Singh, Mr. Amir Banifatemi
Argument 4
Limited access to compute resources hampers Global South researchers’ ability to evaluate models (Ambassador Philip Thigo)
EXPLANATION
He highlights that without sufficient compute capacity, researchers in the Global South cannot effectively evaluate AI models, creating a structural disadvantage.
EVIDENCE
He identifies two structural gaps: the lack of red-teaming capacity and the issue of access to compute, stating that global majority researchers cannot evaluate models without such resources [158-160].
MAJOR DISCUSSION POINT
Limited access to compute resources hampers Global South researchers’ ability to evaluate models
AGREED WITH
Mr. Abhishek Singh, Ms. Natasha Crampton, Mr. Amir Banifatemi, Dr. Urvashi Aneja
Argument 5
Benchmarks are not neutral; concentration of benchmark authority concentrates governance power (Ambassador Philip Thigo)
EXPLANATION
Reiterating the earlier point, he stresses that benchmark authority should be decentralized to avoid power imbalances in AI governance.
EVIDENCE
He again states that benchmarks are not neutral and that only a handful of institutions should not define risk metrics, underscoring the need to de-concentrate governance power [161-165].
MAJOR DISCUSSION POINT
Benchmarks are not neutral; concentration of benchmark authority concentrates governance power
Argument 6
AI safety must include socio‑technical and environmental dimensions, such as impacts on water and ecosystems
EXPLANATION
The ambassador argues that safety considerations should go beyond algorithmic performance to cover broader societal and ecological harms, ensuring full lifecycle accountability for AI systems deployed in the Global South.
EVIDENCE
He states that safety must also address environmental harms, biases, misinformation, and specific harms to water and the environment, calling for comprehensive lifecycle accountability [154-156].
MAJOR DISCUSSION POINT
Broaden AI safety to socio‑technical and environmental impacts
Dr. Rachel Sibande
4 arguments · 144 words per minute · 454 words · 188 seconds
Argument 1
Safety must be re‑defined to reflect local cultural, gender, religious, and linguistic norms (Dr. Rachel Sibande)
EXPLANATION
Dr. Sibande argues that safety definitions need to incorporate the social, cultural, gender, religious and linguistic contexts of deployment, because models that ignore these factors can cause harm.
EVIDENCE
She explains that safety and harm must be redefined according to the social-cultural context, noting that models must understand gender dynamics, religious beliefs, political sensitivities, humor, slang and tone, especially as voice interfaces become common [216-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of incorporating cultural, linguistic, and gender dimensions into AI safety is emphasized in [S35]; multilingual evaluation frameworks that respect local contexts are described in [S34].
MAJOR DISCUSSION POINT
Safety must be re‑defined to reflect local cultural, gender, religious, and linguistic norms
AGREED WITH
Dr. Urvashi Aneja, Ambassador Philip Thigo, Ms. Chenai Chair, Mr. Amir Banifatemi
DISAGREED WITH
Mr. Abhishek Singh, Ambassador Philip Thigo, Mr. Amir Banifatemi
Argument 2
Language translation must capture lived meaning; mis‑translations can cause critical safety failures (Dr. Rachel Sibande)
EXPLANATION
She illustrates that literal translation can miss crucial contextual meanings, leading to dangerous misinterpretations in health‑related applications.
EVIDENCE
Using an example from Malawi, she describes how a phrase indicating that a pregnant woman’s water has broken could be mistranslated as “I have thrown away water,” causing a critical safety flag to be missed if the model does not understand the lived meaning [224-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of culturally aware translation for safety-critical applications is highlighted in [S35] and reinforced by multilingual benchmark initiatives in [S34].
MAJOR DISCUSSION POINT
Language translation must capture lived meaning; mis‑translations can cause critical safety failures
AGREED WITH
Dr. Urvashi Aneja, Mr. Abhishek Singh, Ms. Natasha Crampton, Ambassador Philip Thigo, Ms. Chenai Chair
Argument 3
The Gates Foundation will institutionalize safety evaluation at the point of deployment to catch issues early (Dr. Rachel Sibande)
EXPLANATION
She states that the Gates Foundation plans to embed safety evaluation into the deployment stage of AI solutions, ensuring that harms are identified before they spread.
EVIDENCE
She says that from the foundation side they will “really institutionalize the evaluation of safety of AI solutions right at deployment because we see now that safety issues almost emerge post deployment” [355-356].
MAJOR DISCUSSION POINT
The Gates Foundation will institutionalize safety evaluation at the point of deployment to catch issues early
Argument 4
AI companionship can create psychological dependence, raising new safety concerns
EXPLANATION
Rachel highlights that users may become emotionally reliant on AI companions, potentially substituting their own cognitive abilities and decision‑making with the system’s guidance. This form of harm extends beyond technical errors to mental‑health impacts.
EVIDENCE
She describes her personal use of an AI companion as a therapist and questions at what point the user might be overly emotionally dependent, indicating a need to track such psychological effects [230-232].
MAJOR DISCUSSION POINT
Psychological dependence on AI companions as a safety risk
Ms. Chenai Chair
4 arguments, 167 words per minute, 683 words, 244 seconds
Argument 1
Companies often overlook user experience, gender dynamics, and language diversity, leading to unintended harms (Ms. Chenai Chair)
EXPLANATION
Ms. Chair highlights that AI deployments frequently ignore the diverse contexts of users, especially gender and language nuances, which can exacerbate existing inequalities and create new harms.
EVIDENCE
She provides examples such as gender-biased voice interfaces in agricultural tools, the mismatch between Africa's many languages and the limited language support on offer, and misuse scenarios such as AI-enabled trackers placed in bags for covert surveillance, illustrating how lack of contextual design leads to gender-based violence, misinformation, and privacy violations [236-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gap between compliance-focused risk assessment and human-rights-focused evaluation, especially regarding gender and language, is discussed in [S33]; broader calls for cultural and linguistic inclusivity appear in [S35].
MAJOR DISCUSSION POINT
Companies often overlook user experience, gender dynamics, and language diversity, leading to unintended harms
AGREED WITH
Dr. Urvashi Aneja, Ambassador Philip Thigo, Mr. Amir Banifatemi, Dr. Rachel Sibande
Argument 2
Africa’s linguistic diversity (2,000+ languages) is largely ignored in AI deployments (Ms. Chenai Chair)
EXPLANATION
She points out that while Africa has over two thousand documented languages, AI projects typically support only a tiny fraction, resulting in mismatched language support and ineffective tools.
EVIDENCE
She notes that Masakhane works on only 50 African languages out of more than 2,000 documented, and that even widely spoken languages like Kiswahili have regional variations that are not accounted for in deployments [249-255].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The massive linguistic diversity of Africa and its under-representation in AI systems is noted in [S35]; efforts to develop multilingual benchmarks for African languages are described in [S34].
MAJOR DISCUSSION POINT
Africa’s linguistic diversity (2,000+ languages) is largely ignored in AI deployments
AGREED WITH
Dr. Urvashi Aneja, Mr. Abhishek Singh, Ms. Natasha Crampton, Ambassador Philip Thigo, Dr. Rachel Sibande
Argument 3
Masakhane African Language Hub will deliver a benchmarking initiative for African languages this year (Ms. Chenai Chair)
EXPLANATION
The hub commits to producing a benchmark for African languages, addressing the gap identified earlier in the discussion.
EVIDENCE
She states that the hub has a benchmarking initiative underway for the year, which will contribute to African language benchmarking efforts [356].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The development of multilingual benchmark initiatives for African languages is highlighted in [S34].
MAJOR DISCUSSION POINT
Masakhane African Language Hub will deliver a benchmarking initiative for African languages this year
Argument 4
AI technologies can be repurposed for covert surveillance, violating privacy when deployed without community consent
EXPLANATION
She warns that AI‑enabled tracking devices, originally marketed for benign uses, can be inserted into personal belongings and used for surveillance without users’ knowledge. Lack of consultation amplifies privacy risks.
EVIDENCE
She recounts that AI-enabled trackers were placed in women’s and children’s bags, creating an act of surveillance that could have been mitigated if communities had been consulted before deployment [266-269].
MAJOR DISCUSSION POINT
Unconsented AI‑driven surveillance threatens privacy
Mr. Quintin Chou‑Lambert
2 arguments, 0 words per minute, 0 words, 1 second
Argument 1
Provides field‑tested examples to inform standards and connects local realities with international policy (Mr. Quintin Chou‑Lambert)
EXPLANATION
He argues that real‑world, context‑specific evidence from the Global South is essential to shape AI standards and ensure that international policy reflects on‑the‑ground challenges.
EVIDENCE
He explains that low-infrastructure contexts make AI safety a fuzzy discussion that needs empirical evidence, and that networks like this help bring local challenges into global dialogues, preventing them from being ignored [191-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of bringing Global South evidence into UN-level AI governance is discussed in [S27]; bridging local insights with global policy is a core aim of the UN mechanisms described in [S25].
MAJOR DISCUSSION POINT
Provides field‑tested examples to inform standards and connects local realities with international policy
AGREED WITH
Dr. Balaraman Ravindran, Dr. Urvashi Aneja
Argument 2
One‑size‑fits‑all technical standards fail to capture contextual risks; empirical field evidence is crucial (Mr. Quintin Chou‑Lambert)
EXPLANATION
He stresses that universal technical standards cannot address the varied contexts of the Global South, and that field‑tested, empirical data is needed to create appropriate safety measures.
EVIDENCE
He notes that one-size-fits-all standards will not be contextually sensitive and that moving from large, expensive models to small, language-specific models turns safety into a fuzzy discussion that requires empirical evidence [193-195].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critique of centralized benchmark authority and the call for context-sensitive standards are made in [S3]; the need for decentralized oversight is reinforced in [S30].
MAJOR DISCUSSION POINT
One‑size‑fits‑all technical standards fail to capture contextual risks; empirical field evidence is crucial
Dr. Balaraman Ravindran
3 arguments, 172 words per minute, 565 words, 196 seconds
Argument 1
Calls for coordination among overlapping initiatives to avoid duplication and increase impact (Dr. Balaraman Ravindran)
EXPLANATION
He highlights the proliferation of AI safety and capacity‑building networks and urges a coordinated framework to maximise resources and avoid fragmented efforts.
EVIDENCE
He mentions multiple initiatives across Africa, China, and UN-led capacity-building networks, calling for coordination to prevent each actor from working on a small piece of the puzzle and to create a multiplier effect [330-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for coordinated global AI governance and avoiding fragmented efforts is emphasized in [S25] and [S26]; inclusion of Global South perspectives is highlighted in [S27].
MAJOR DISCUSSION POINT
Calls for coordination among overlapping initiatives to avoid duplication and increase impact
AGREED WITH
Dr. Urvashi Aneja, Mr. Quintin Chou‑Lambert
Argument 2
The network will prioritize cross‑border problem solving that cannot be addressed by single‑country efforts (Dr. Balaraman Ravindran)
EXPLANATION
He proposes that the network focus on challenges requiring collaboration across borders, thereby demonstrating the added value of a shared platform beyond national initiatives.
EVIDENCE
He states that the network should target problems that necessarily require cross-border collaboration, rather than issues solvable within a single geography, and that this approach will drive the network’s relevance [353-357].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN-led collaborative frameworks that enable cross-border AI safety work are described in [S25].
MAJOR DISCUSSION POINT
The network will prioritize cross‑border problem solving that cannot be addressed by single‑country efforts
Argument 3
Existing AI safety institutes are scarce in the Global South; AI Centers of Excellence (AC) should serve as safety nodes
EXPLANATION
Ravindran observes that most Global South countries do not have dedicated AI safety institutes, and proposes that AI Centers of Excellence become the functional nodes for safety work and representation in global governance. This reframes the network’s architecture toward existing academic and research hubs.
EVIDENCE
He remarks that the network cannot be called a global safety institute network because such institutes are largely absent; instead, AC institutes should act as nodes that give voice to unheard communities, especially in Kenya and India [335-338].
MAJOR DISCUSSION POINT
Use AI Centers of Excellence as safety nodes in the Global South
Ms. Natasha Crampton
3 arguments, 136 words per minute, 404 words, 177 seconds
Argument 1
Scaling community‑led, multilingual evaluations requires sustainable systems and infrastructure (Ms. Natasha Crampton)
EXPLANATION
Ms. Crampton explains that while community‑led, context‑aware evaluations exist, scaling them to thousands of languages and cultural settings demands a sustainable, ongoing evaluation framework and robust infrastructure.
EVIDENCE
She references the Samishka project as an example of community-led evaluation, and stresses the need to build a system that can run multilingual and multicultural evaluations at scale, continuously, not just as a one-off test [277-284].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sustainable multilingual evaluation infrastructures are discussed in [S34]; the need for culturally aware evaluation at scale is highlighted in [S35].
MAJOR DISCUSSION POINT
Scaling community‑led, multilingual evaluations requires sustainable systems and infrastructure
AGREED WITH
Dr. Urvashi Aneja, Mr. Abhishek Singh, Ambassador Philip Thigo, Ms. Chenai Chair, Dr. Rachel Sibande
Argument 2
Microsoft will honor New Delhi Frontier AI commitments, share multilingual data, and invest $50 bn in Global South infrastructure (Ms. Natasha Crampton)
EXPLANATION
She commits Microsoft to fulfilling the New Delhi Frontier AI pledges, providing multilingual benchmark data, and allocating substantial investment to build digital infrastructure across the Global South.
EVIDENCE
She states that Microsoft will help advance multilingual and multicultural evaluation work, share data to aid policy makers, and that Microsoft is making large infrastructure investments totaling $50 billion by the end of the decade [348-349].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The New Delhi Frontier AI commitments, which include multilingual data sharing, are outlined in [S9]; large-scale infrastructure investment aligns with UN development partnership goals noted in [S26].
MAJOR DISCUSSION POINT
Microsoft will honor New Delhi Frontier AI commitments, share multilingual data, and invest $50 bn in Global South infrastructure
Argument 3
Sustainable, ongoing evaluation mechanisms are needed rather than one‑off tests (Ms. Natasha Crampton)
EXPLANATION
She emphasizes that evaluations must be continuous to capture shifts over time, rather than being performed only once before product release.
EVIDENCE
She notes that evaluations need to be run on an ongoing basis to understand shifts, highlighting that one-off tests are insufficient for sustained safety assurance [281-284].
MAJOR DISCUSSION POINT
Sustainable, ongoing evaluation mechanisms are needed rather than one‑off tests
Mr. Amir Banifatemi
6 arguments, 169 words per minute, 844 words, 299 seconds
Argument 1
Lack of imagination and inclusion of local talent leads to blind spots in safety design (Mr. Amir Banifatemi)
EXPLANATION
He argues that system designers often lack awareness of local contexts and fail to involve talent from the regions affected, resulting in safety assessments that miss cultural, linguistic and contextual risks.
EVIDENCE
He points out that people building systems have no awareness of the contexts in which they operate, that language and culture are not captured, and that talent inclusion is missing, both in terms of skilling and integration, leading to blind spots [304-315].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The omission of local expertise and its impact on safety assessments is discussed in [S33]; cultural and linguistic blind spots are further emphasized in [S35].
MAJOR DISCUSSION POINT
Lack of imagination and inclusion of local talent leads to blind spots in safety design
AGREED WITH
Dr. Urvashi Aneja, Ambassador Philip Thigo, Ms. Chenai Chair, Dr. Rachel Sibande
Argument 2
Safety is not currently costed into financial planning, reducing incentives for firms to invest in it (Mr. Amir Banifatemi)
EXPLANATION
He notes that without financial penalties or budgeting for safety, companies lack motivation to prioritize safety measures in their product development cycles.
EVIDENCE
He explains that safety is not costed into financial systems, there is no penalty for being unsafe, and without a financial mandate, companies will not allocate resources to safety [307-311].
MAJOR DISCUSSION POINT
Safety is not currently costed into financial planning, reducing incentives for firms to invest in it
Argument 3
Absence of penalties for unsafe AI means companies lack financial motivation to prioritize safety (Mr. Amir Banifatemi)
EXPLANATION
He reiterates that without regulatory or financial penalties, firms have little incentive to invest in safety, perpetuating risk.
EVIDENCE
He again stresses that there is no penalty for unsafe AI, so companies lack financial motivation to prioritize safety [307-311].
MAJOR DISCUSSION POINT
Absence of penalties for unsafe AI means companies lack financial motivation to prioritize safety
Argument 4
Open‑source, culturally contextual incident‑reporting tools will be released to broaden evaluation access (Mr. Amir Banifatemi)
EXPLANATION
He announces that his labs in Bangalore and San Francisco are developing open‑source, culturally aware incident‑reporting tools to make evaluation resources publicly available.
EVIDENCE
He says the labs are working on safety evaluations, incident reporting, making tools culturally contextual and open-source, aiming to disseminate them to all partners and the public [358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source multilingual evaluation tools and community-driven incident reporting are highlighted in [S34].
MAJOR DISCUSSION POINT
Open‑source, culturally contextual incident‑reporting tools will be released to broaden evaluation access
Argument 5
Latency and institutional framework gaps in the Global South delay feedback loops, compounding AI‑related harms
EXPLANATION
Amir points out that the Global South lacks the rapid feedback mechanisms present in many Global North countries, leading to slower detection and mitigation of AI safety incidents. This latency, combined with weaker rule‑of‑law institutions, means harms can grow unchecked.
EVIDENCE
He notes that only a handful of Global North countries have robust legal frameworks and active civil societies that create accelerated feedback loops for incident safety, whereas most Global South nations lack these mechanisms, causing delayed responses and amplified harms [321-322].
MAJOR DISCUSSION POINT
Latency and institutional gaps delay AI safety feedback loops
Argument 6
Evaluation tools must address whole system deployment, not just model design
EXPLANATION
Amir argues that current evaluation approaches focus narrowly on model performance, overlooking the broader system context—including APIs, data pipelines, and infrastructure—that influences safety outcomes. He calls for tools that evaluate the entire deployment ecosystem.
EVIDENCE
He suggests creating a series of evaluation tools that are built for system deployment as well as model design, and highlights incident-reporting mechanisms that can capture hidden control issues, data access problems, and regulatory gaps [319-320].
MAJOR DISCUSSION POINT
Need for system‑wide AI safety evaluation tools
Mr. Quintin Chou-Lambert
3 arguments, 135 words per minute, 324 words, 143 seconds
Argument 1
Infrastructure and energy constraints in the Global South shape AI safety requirements
EXPLANATION
He points out that limited infrastructure and unreliable energy supplies in many Global South contexts mean that AI safety measures cannot be designed as if resources were abundant. Safety solutions must be adapted to these material constraints to be effective.
EVIDENCE
He notes that there is “less, perhaps, infrastructure or energy connection to go around” and that because of this, AI safety “edges into this more contextual field” where field-tested examples are needed to surface missing considerations [191-192].
MAJOR DISCUSSION POINT
Infrastructure limitations affect AI safety design
Argument 2
The move from large, expensive models to smaller, language‑specific models makes AI safety a fuzzy, context‑dependent problem
EXPLANATION
He explains that scaling down from massive, costly AI models to tailored, small‑language models changes the safety landscape, requiring new empirical evidence and evaluation methods that account for linguistic and cultural nuances.
EVIDENCE
He describes the transition as “moving from this kind of scaling a small, a very concentrated, highly expensive model across a massive user base to more tailored, small-language models” which turns AI safety into a “more fuzzy discussion” that “needs empirical evidence” [194].
MAJOR DISCUSSION POINT
Shift to small language models creates new safety challenges
Argument 3
The expanding UN Global Dialogue on AI Governance provides a venue to embed field‑tested evidence, but must ensure local threats are not ignored; open‑source developments can help broaden participation
EXPLANATION
He highlights that the United Nations Global Dialogue now includes all 193 member states and is informed by an independent scientific panel, offering an opportunity to integrate real‑world, ground‑level insights. However, he warns that without deliberate inclusion of local perspectives, international discussions may overlook threats faced by communities, and suggests open‑source tools as a way to bring those perspectives into the dialogue.
EVIDENCE
He references the trend of increasing participation from 30 to over 100 countries and the establishment of the UN Global Dialogue on AI Governance involving all member states and an independent scientific panel [195-197]. He then stresses that networks like this are crucial to “connect and bring examples of the challenges that we face” so that “international discussions do not ignore or omit or discount the perspectives of the vast majority of people on the planet” [198-199].
MAJOR DISCUSSION POINT
Leverage UN AI governance platform and open‑source to include local evidence
Agreements
Agreement Points
All participants stress the urgent need for multilingual, culturally‑aware benchmarks and evaluation frameworks to assess AI systems in the Global South.
Speakers: Dr. Urvashi Aneja, Mr. Abhishek Singh, Ms. Natasha Crampton, Ambassador Philip Thigo, Ms. Chenai Chair, Dr. Rachel Sibande
Network will build independent evidence, contextual evaluations, and act as a bridge to global governance (Dr. Urvashi Aneja)
Current benchmarks are English‑centric; multilingual benchmarks are essential for accurate assessment (Mr. Abhishek Singh)
Scaling community‑led, multilingual evaluations requires sustainable systems and infrastructure (Ms. Natasha Crampton)
Proposes regional nodes, multilingual benchmark datasets, and an annual AI safety report (Ambassador Philip Thigo)
Africa’s linguistic diversity (2,000+ languages) is largely ignored in AI deployments (Ms. Chenai Chair)
Language translation must capture lived meaning; mis‑translations can cause critical safety failures (Dr. Rachel Sibande)
Speakers repeatedly highlighted that existing English-only benchmarks miss performance in the many languages spoken across the Global South, and that building multilingual, context-sensitive benchmarks and evaluation tools is essential for trustworthy AI deployment. Dr. Aneja announced multilingual benchmark work [44-45], Singh warned about English-centric tests [73-77], Crampton pledged to advance multilingual evaluation under the New Delhi commitments [348-349], the Ambassador called for multilingual datasets and regional nodes [174-175], Ms. Chair underscored that more than 2,000 African languages remain largely unsupported [249-255], and Dr. Sibande gave a concrete mistranslation example [224-227].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the launch of the Global South AI Safety Research Network, which highlighted multilingual benchmark datasets and regional nodes as priority actions [S57]. It also reflects broader calls for inclusive connectivity and culturally relevant AI evaluation noted in the IGF discussion on meaningful connectivity [S70] and Microsoft’s tech-diplomacy emphasis on multilingual resources [S61].
Broad consensus that local context, civil‑society insight, and inclusive talent are indispensable for identifying and mitigating AI risks.
Speakers: Dr. Urvashi Aneja, Ambassador Philip Thigo, Ms. Chenai Chair, Mr. Amir Banifatemi, Dr. Rachel Sibande
Independent civil society organizations are uniquely positioned to address this gap (Dr. Urvashi Aneja)
Global South is systematically excluded from AI safety governance structures (Ambassador Philip Thigo)
Companies often overlook user experience, gender dynamics, and language diversity, leading to unintended harms (Ms. Chenai Chair)
Lack of imagination and inclusion of local talent leads to blind spots in safety design (Mr. Amir Banifatemi)
Safety must be re‑defined to reflect local cultural, gender, religious, and linguistic norms (Dr. Rachel Sibande)
All speakers agreed that AI safety cannot be designed in a vacuum; it must draw on civil-society, regional expertise, and culturally aware perspectives. Dr. Aneja emphasized civil-society’s proximity to real-world deployments [19-21], the Ambassador warned of systemic exclusion and the need for regional nodes [138-141][170-176], Ms. Chair highlighted missed gender and language nuances [236-247], Amir pointed to the absence of local talent and imagination [304-315], and Dr. Sibande called for redefining safety based on social-cultural context [216-218].
POLICY CONTEXT (KNOWLEDGE BASE)
The inclusive multi-stakeholder approach endorsed by the AI Governance Dialogue co-chairs mirrors this view [S66], and the UN-linked network’s emphasis on socio-technical issues underscores the role of civil-society and local expertise [S57][S58].
All speakers recognize the critical need for capacity‑building, compute resources, and infrastructure investments to enable effective AI safety work in the Global South.
Speakers: Mr. Abhishek Singh, Ambassador Philip Thigo, Ms. Natasha Crampton, Mr. Amir Banifatemi, Dr. Urvashi Aneja
Identifying risks is not enough; tools and benchmarks are needed to address them (Mr. Abhishek Singh)
Limited access to compute resources hampers Global South researchers’ ability to evaluate models (Ambassador Philip Thigo)
Microsoft will honor New Delhi Frontier AI commitments and invest $50 bn in Global South infrastructure (Ms. Natasha Crampton)
Latency and institutional framework gaps in the Global South delay feedback loops, compounding AI‑related harms (Mr. Amir Banifatemi)
Network will act as connective tissue between global governance and on‑the‑ground realities, building capacity (Dr. Urvashi Aneja)
Participants stressed that without technical capacity, compute power, and financial investment, safety initiatives cannot succeed. Singh called for tools and benchmarks [71-73][84-95], the Ambassador highlighted compute gaps [158-160], Crampton announced a $50 bn infrastructure pledge [348-349], Amir described latency and institutional gaps [321-322], and Dr. Aneja positioned the network as a capacity-building bridge [34-36][37-38].
POLICY CONTEXT (KNOWLEDGE BASE)
The EU 2030 Digital Agenda’s Access pillar and the IMI’s infrastructure focus provide a policy backdrop for such capacity-building [S48]; similarly, AI strategy discussions for jobs and economic development highlighted infrastructure and skills gaps as key challenges [S51]. The Global South network also calls for these investments [S57].
Consensus that AI safety should complement, not hinder, innovation and that responsible diffusion of AI must be balanced with safeguards.
Speakers: Mr. Abhishek Singh, Dr. Urvashi Aneja
Responsible AI diffusion should not be framed as stifling innovation (Mr. Abhishek Singh)
Risk of amplifying existing harms without adequate safeguards (Dr. Urvashi Aneja)
Both speakers agreed that AI’s benefits must be realized while ensuring safety, rejecting the notion that safety measures impede progress. Singh explicitly said safety should not stifle innovation [105-108], while Dr. Aneja warned that unchecked AI could amplify harms [11-14].
POLICY CONTEXT (KNOWLEDGE BASE)
This balance is echoed in the Overregulation panel stressing that regulation must not stifle innovation [S53] and the AI Agents session calling for responsible deployment alongside privacy protection [S54]. Industry consensus on safe yet innovative AI standards further supports this view [S55][S73].
All participants see value in coordinated, cross‑border collaboration among the many emerging AI safety initiatives to avoid duplication and increase impact.
Speakers: Dr. Balaraman Ravindran, Dr. Urvashi Aneja, Mr. Quintin Chou‑Lambert
Calls for coordination among overlapping initiatives to avoid duplication and increase impact (Dr. Balaraman Ravindran)
Network will build independent evidence, contextual evaluations, and act as a bridge to global governance (Dr. Urvashi Aneja)
Provides field‑tested examples to inform standards and connects local realities with international policy (Mr. Quintin Chou‑Lambert)
Speakers highlighted the proliferation of AI safety networks and the need for a coordinated framework. Dr. Ravindran listed multiple overlapping initiatives and urged coordination [330-342], Dr. Aneja described the network as connective tissue between global and local actors [37-38], and Mr. Chou‑Lambert emphasized the role of such networks in feeding field evidence into global standards [198-199].
POLICY CONTEXT (KNOWLEDGE BASE)
Coordination to avoid duplication was a key recommendation from the Digital Innovations Forum networking session [S65], and the AI Governance Dialogue highlighted the need for international cooperation and structured expert engagement [S66]. The Global South AI Safety Research Network also stresses cross-border collaboration [S57].
Similar Viewpoints
All three see the network (or their organizations) as a platform to develop and disseminate sustainable, community‑driven evaluation tools that can be scaled globally, emphasizing open‑source and infrastructural support [34-36][277-284][358].
Speakers: Dr. Urvashi Aneja, Ms. Natasha Crampton, Mr. Amir Banifatemi
Network will build independent evidence, contextual evaluations, and act as a bridge to global governance (Dr. Urvashi Aneja)
Scaling community‑led, multilingual evaluations requires sustainable systems and infrastructure (Ms. Natasha Crampton)
Open‑source, culturally contextual incident‑reporting tools will be released to broaden evaluation access (Mr. Amir Banifatemi)
Both highlight structural gaps – the former in governance representation, the latter in technical tooling – that prevent the Global South from effectively managing AI risks [138-141][71-73].
Speakers: Ambassador Philip Thigo, Mr. Abhishek Singh
Global South is systematically excluded from AI safety governance structures (Ambassador Philip Thigo)
Identifying risks is not enough; tools and benchmarks are needed to address them (Mr. Abhishek Singh)
Both point to a systemic blind‑spot in design processes caused by insufficient inclusion of diverse user perspectives and local expertise [236-247][304-315].
Speakers: Ms. Chenai Chair, Mr. Amir Banifatemi
Companies often overlook user experience, gender dynamics, and language diversity, leading to unintended harms (Ms. Chenai Chair)
Lack of imagination and inclusion of local talent leads to blind spots in safety design (Mr. Amir Banifatemi)
Unexpected Consensus
Industry (Microsoft) and diplomatic representatives (Kenyan Ambassador) both advocate for regional nodes and multilingual benchmark datasets.
Speakers: Ms. Natasha Crampton, Ambassador Philip Thigo
Scaling community‑led, multilingual evaluations requires sustainable systems and infrastructure (Ms. Natasha Crampton)
Proposes regional nodes, multilingual benchmark datasets, and an annual AI safety report (Ambassador Philip Thigo)
It is notable that a corporate leader and a government envoy converge on the need for decentralized regional structures and multilingual data resources, indicating cross-sector alignment on capacity-building mechanisms [348-349][174-175].
POLICY CONTEXT (KNOWLEDGE BASE)
Microsoft’s tech-diplomacy statements describe its role in fostering regional AI hubs and multilingual resources [S61], while the Global South AI Safety Research Network explicitly calls for regional nodes and multilingual benchmarks, a position echoed by diplomatic actors [S57].
Both the UN‑linked network lead (Dr. Urvashi Aneja) and the private sector (Microsoft) stress the importance of ongoing, continuous evaluation rather than one‑off testing.
Speakers: Dr. Urvashi Aneja, Ms. Natasha Crampton
Network will build independent evidence, contextual evaluations, and act as a bridge to global governance (Dr. Urvashi Aneja)
Sustainable, ongoing evaluation mechanisms are needed rather than one‑off tests (Ms. Natasha Crampton)
While the UN-focused speaker emphasizes building a continuous evidence base, the Microsoft executive explicitly calls for sustained evaluation processes, showing an unexpected alignment on the need for long-term monitoring [34-36][281-284].
POLICY CONTEXT (KNOWLEDGE BASE)
Continuous verification was highlighted by Anja Kaspersen as essential for trustworthy AI governance [S68], and the Global South network emphasizes ongoing evaluation as a core principle [S57]. Microsoft also advocates for iterative assessment in its responsible AI frameworks [S63].
Overall Assessment

The discussion reveals strong convergence around three core pillars: (1) the creation of multilingual, culturally‑aware benchmarks and evaluation tools; (2) the centrality of local civil‑society insight, inclusive talent, and contextual understanding; and (3) the necessity of capacity‑building, compute access, and financial investment to operationalise safety measures. Participants from academia, civil society, industry, and diplomacy all endorse these themes, indicating a shared vision for a coordinated, inclusive AI safety ecosystem in the Global South.

High consensus – most speakers, across sectors, articulate overlapping priorities, suggesting that future collaborative actions (regional nodes, open‑source tools, coordinated networks) have broad stakeholder buy‑in and are likely to shape policy and implementation agendas.

Differences
Different Viewpoints
Who should define and control AI safety benchmarks
Speakers: Mr. Abhishek Singh, Ambassador Philip Thigo, Mr. Amir Banifatemi
Identifying risks is not enough; tools and benchmarks are needed to address them (Mr. Abhishek Singh)
Benchmark design reflects power; only a few institutions should not dictate risk definitions (Ambassador Philip Thigo)
Evaluation tools must address whole system deployment, not just model design (Mr. Amir Banifatemi)
Singh calls for multilingual benchmarks to evaluate models in many languages [73-77]. The Ambassador warns that benchmarks are not neutral and should not be set by a handful of institutions, emphasizing power concentration [161-165]. Amir adds that current evaluations focus narrowly on models and lack system-wide tools, calling for broader evaluation frameworks [319-320] and noting a lack of imagination about local contexts [304-315]. These positions diverge on who should design benchmarks and how inclusive they must be.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over benchmark governance is reflected in discussions about “who watches the watchers,” where technical professional bodies like IEEE are urged to participate in AI governance and benchmark definition [S68][S64]. Calls for government and multi-stakeholder involvement in standards development also provide context [S63].
How to create incentives for AI safety compliance
Speakers: Mr. Amir Banifatemi, Mr. Abhishek Singh, Dr. Urvashi Aneja, Ms. Natasha Crampton
Safety is not currently costed into financial planning, reducing incentives for firms (Mr. Amir Banifatemi)
Responsible AI diffusion should not be framed as stifling innovation (Mr. Abhishek Singh)
Procurement is a lever for countries in the Global South to shape markets for responsible innovation (Dr. Urvashi Aneja)
Microsoft will invest $50 bn in Global South infrastructure to support evaluation and policy work (Ms. Natasha Crampton)
Amir argues that without financial penalties or budgeting for safety, companies lack motivation to invest in safety [307-311]. Singh stresses that safety must coexist with innovation and should not hinder it [105-108]. Aneja proposes using public procurement as a policy lever to drive safe AI adoption [50-53]. Crampton commits large infrastructure investment to enable evaluation and policy support [348-349]. The speakers agree safety is needed but disagree on whether market-based procurement, regulatory penalties, or private investment should be the primary driver.
POLICY CONTEXT (KNOWLEDGE BASE)
Incentive structures were addressed in the AI safety investment panel, recommending board-level responsibility and alignment of executive compensation with long-term risk mitigation [S71]. Industry-wide standards and systemic conditions that enable responsible behavior rather than punitive regulation further inform this discussion [S55][S73].
Breadth of AI safety considerations (technical vs socio‑technical and environmental)
Speakers: Mr. Abhishek Singh, Ambassador Philip Thigo, Mr. Amir Banifatemi, Dr. Rachel Sibande
Identifying risks is not enough; need technical tools and benchmarks (Mr. Abhishek Singh)
AI safety must also include socio‑technical issues, environmental harms, water, etc. (Ambassador Philip Thigo)
Evaluation tools must address whole system deployment, not just model design (Mr. Amir Banifatemi)
Safety must be re‑defined to reflect local cultural, gender, religious, and linguistic norms (Dr. Rachel Sibande)
Singh focuses on technical risk identification and multilingual benchmarks as primary safety measures [68-73]. The Ambassador expands safety to cover environmental impacts, misinformation, and full lifecycle accountability [154-156]. Amir stresses that safety evaluation should consider the entire deployment ecosystem, not just model performance [319-320]. Rachel argues that safety definitions need to incorporate cultural and linguistic contexts [216-218]. These viewpoints differ on how broadly safety should be defined and which dimensions are essential.
POLICY CONTEXT (KNOWLEDGE BASE)
The Main Session on AI Policy Network expanded the conversation to ethical, environmental, and societal dimensions, urging a holistic view of AI safety [S56]. Virginia’s remarks on multidisciplinary governance further stress the need to go beyond purely technical metrics [S58].
Unexpected Differences
Optimism about the network’s timeliness versus perception of it being late
Speakers: Dr. Urvashi Aneja, Ambassador Philip Thigo
Network will provide visibility to real‑world impact and connect stakeholders (Dr. Urvashi Aneja)
Network is timely but also late; urgency needed to scale up work (Ambassador Philip Thigo)
Aneja highlights the network’s role as connective tissue between global safety infrastructure and on-the-ground realities [37-38], expressing confidence in its impact. The Ambassador, however, characterizes the initiative as “timely but also late” and stresses an urgent need to scale up quickly [142-144], indicating a more cautious view of the network’s readiness.
Role of regulation versus voluntary industry action in driving safety
Speakers: Mr. Abhishek Singh, Mr. Amir Banifatemi
Responsible AI diffusion should not be framed as stifling innovation (Mr. Abhishek Singh)
Safety is not costed into financial planning; lack of penalties reduces firm motivation (Mr. Amir Banifatemi)
Singh argues that safety measures should complement innovation and not be seen as restrictive [105-108]. Amir counters that without regulatory penalties or financial mandates, companies will deprioritize safety altogether [307-311], suggesting a need for stronger enforcement rather than purely voluntary action.
POLICY CONTEXT (KNOWLEDGE BASE)
Panels on overregulation and responsible deployment argue for a balanced approach where regulation sets guardrails but does not impede innovation, complemented by voluntary industry measures [S53][S54]. The EU GPAI Code discussion highlighted systemic conditions that enable companies to act responsibly without heavy-handed rules [S73], and data-sovereignty talks emphasized co-accountability partnerships between governments and industry [S69].
Overall Assessment

The discussion reveals broad consensus on the necessity of a Global South network for trustworthy AI, but significant divergences arise around benchmark governance, incentive structures, and the scope of safety. While speakers align on goals—building evidence, fostering capacity, and ensuring inclusive governance—their preferred pathways (regional nodes, corporate investment, open‑source tools, procurement levers, or regulatory penalties) differ markedly. These disagreements highlight challenges in harmonising technical standards, financing mechanisms, and interdisciplinary safety definitions across diverse stakeholders.

Moderate to high: The core objective is shared, yet the lack of agreement on implementation strategies and the breadth of safety considerations could impede coordinated action unless reconciled. The implications are that without a unified approach to benchmarks, incentives, and scope, the network may face fragmentation, slower adoption of standards, and uneven protection for vulnerable populations.

Partial Agreements
All speakers concur that a coordinated Global South network is essential for trustworthy AI, but they differ on the primary mechanism: Aneja emphasizes evidence generation and governance linkage [22-23]; Singh stresses compliance and capacity‑building [87-95]; the Ambassador calls for regional nodes and reporting structures [170-176]; Natasha focuses on corporate commitments and infrastructure investment [348-349]; Amir proposes open‑source tooling and incident reporting [358].
Speakers: Dr. Urvashi Aneja, Mr. Abhishek Singh, Ambassador Philip Thigo, Ms. Natasha Crampton, Mr. Amir Banifatemi
Network will build independent evidence, contextual evaluations, and act as a bridge to global governance (Dr. Urvashi Aneja)
Network enables compliance with New Delhi Frontier AI commitments and capacity‑building (Mr. Abhishek Singh)
Proposes regional nodes, multilingual benchmark datasets, and an annual AI safety report (Ambassador Philip Thigo)
Microsoft will honor New Delhi Frontier AI commitments, share multilingual data, and invest $50 bn in Global South infrastructure (Ms. Natasha Crampton)
Open‑source, culturally contextual incident‑reporting tools will be released to broaden evaluation access (Mr. Amir Banifatemi)
Takeaways
Key takeaways
AI deployment in the Global South presents huge opportunities but also significant risks of amplifying existing social, gender, linguistic, and environmental harms.
Identifying risks is insufficient; concrete tools, benchmarks, and capacity‑building are needed to evaluate and mitigate those risks.
The Global South is systematically under‑represented in AI safety governance; a dedicated network can provide independent, field‑tested evidence and act as a bridge to global policy forums.
Current evaluation benchmarks are English‑centric and power‑concentrated; multilingual, culturally aware benchmarks are essential for trustworthy AI in diverse contexts.
Capacity gaps, including limited compute resources, talent inclusion, and sustainable evaluation mechanisms, must be addressed to enable effective safety work.
Governance mechanisms must be de‑centralised; benchmarks and standards should not be defined by a handful of institutions, and safety should be financially incentivised.
Collaboration across overlapping initiatives (UN, regional networks, industry, civil society) is critical to avoid duplication and maximise impact.
Resolutions and action items
Launch of the Global South Network for Trustworthy AI as a coordinating platform for civil‑society, research, and policy actors.
Establish regional nodes (e.g., African node) to decentralise activities and increase local relevance.
Develop multilingual benchmark datasets and conduct an annual Global South AI Safety Report.
Align network activities with the New Delhi Frontier AI commitments, including sharing usage data and multilingual performance benchmarks.
Microsoft to honor its Frontier AI commitments, share multilingual data, and invest $50 bn in infrastructure across the Global South by 2030.
The Gates Foundation will institutionalise safety evaluation at the point of deployment to capture issues early.
Masakhane African Language Hub will deliver a benchmarking initiative for African languages within the year.
Open‑source, culturally contextual incident‑reporting tools will be created and made publicly available (led by Amir’s labs).
Coordinate with existing UN AI governance processes (UN Global Dialogue on AI Governance, scientific panel) to ensure Global South voices are included.
Facilitate cross‑border problem‑solving projects that require collaboration beyond single‑country efforts.
Unresolved issues
Precise definition of ‘safety’ and ‘harm’ that reflects varied cultural, gender, religious, and linguistic contexts remains open.
Mechanisms to financially cost safety into corporate planning and to impose penalties for unsafe AI have not been established.
Sustainable, ongoing evaluation frameworks (beyond one‑off tests) need concrete design and funding models.
How to ensure equitable access to compute resources for Global South researchers is still undetermined.
Details on how the network will integrate with and influence UN‑led AI governance structures are pending.
Strategies for de‑concentrating benchmark authority and preventing power imbalances in standard‑setting are not fully resolved.
Methods to close the accountability loop so that citizen‑level impacts are directly addressed were discussed but not finalized.
Suggested compromises
Create regional nodes to balance the need for rapid network activation with the requirement for local contextual expertise.
Adopt a shared, open‑source benchmarking framework that allows multiple institutions to contribute, mitigating concentration of power.
Leverage existing commitments (New Delhi Frontier AI) as a common baseline while expanding them through collaborative, multilingual evaluation work.
Combine top‑down UN engagement with bottom‑up civil‑society evidence generation to satisfy both global governance and local relevance.
Use pilot projects and incremental infrastructure investments (e.g., Microsoft’s $50 bn) as stepping stones toward broader, sustainable evaluation systems.
Thought Provoking Comments
Across the Global South, AI systems are being rapidly deployed in critical social sectors … while the potential is immense, the risks and harms are also immense. It is particularly important that we figure out ways to make AI safe and trustworthy in these contexts to ensure we protect populations and build infrastructure for safe and inclusive AI adoption.
Sets the foundational problem statement, highlighting the paradox of high opportunity versus low institutional capacity, and frames the need for a dedicated network.
Established the urgency of the discussion, prompting subsequent speakers to propose concrete mechanisms (benchmarks, regional hubs, civil‑society involvement) to address the identified gap.
Speaker: Dr. Urvashi Aneja
Identifying the risk is not sufficient. We need to think of how do we address those risks. For that we need technical tools, benchmarks, especially multilingual benchmarks, because most models are evaluated only in English.
Moves the conversation from risk identification to actionable solutions, emphasizing multilingual evaluation as a concrete technical need.
Shifted the dialogue toward practical steps (benchmarks, capacity building) and reinforced the network’s relevance; later speakers referenced multilingual benchmarks as a priority.
Speaker: Mr. Abhishek Singh
We are the only member of the international network of AI safety institutes from the Global South. The model that is not inclusive to a global majority is not acceptable. There are four structural gaps: teaming capacity, access to compute, linguistic and cultural mismatch, and the non‑neutrality of benchmarks.
Highlights systemic inequities and enumerates specific structural gaps, challenging the audience to consider power dynamics and resource asymmetries.
Created a turning point by broadening the scope from technical benchmarks to governance, power, and agency; later participants (e.g., Amir, Rachel) expanded on cultural safety and sovereign capability.
Speaker: Ambassador Philip Thigo
We need to redefine what is safe and what is harmful according to the social‑cultural context … language is not just vocabulary, it’s lived meaning. Example: a pregnant mother saying ‘waters have broken’ could be mistranslated and the model would miss a critical health flag.
Provides vivid, real‑world examples that illustrate how current evaluation metrics miss contextual harms, especially in health and language nuances.
Deepened the conversation about the limits of existing benchmarks and spurred others (Chenai Chair, Natasha) to discuss user experience, gendered voice, and the need for context‑aware evaluation.
Speaker: Dr. Rachel Sibande
What often gets missed is the user experience and the diversity of the community. A voice‑enabled agricultural tool with a male‑sounding voice can exacerbate gender‑based violence if the community wasn’t consulted.
Links technical design choices (voice gender) to social harms, illustrating unintended consequences of poorly contextualized AI deployments.
Shifted the tone toward concrete design pitfalls and reinforced the call for participatory design; prompted further discussion on surveillance and misuse.
Speaker: Ms. Chenai Chair
The challenge is scaling community‑led, context‑aware evaluations sustainably. You can’t do a one‑off test; you need an ongoing system that can run at scale across thousands of languages and cultural settings.
Identifies the scalability and sustainability problem of evaluation, moving the conversation from theory to operational feasibility.
Guided the panel toward discussing infrastructure investments and the need for systematic, repeatable evaluation pipelines; later echoed in Amir’s remarks about tooling and incident reporting.
Speaker: Ms. Natasha Crampton
Safety is not costed into financial systems; there is no penalty for being unsafe. Without financial incentives or regulatory mandates, companies will not prioritize safety.
Points out a fundamental economic barrier to safety, challenging the assumption that good intentions alone will drive responsible AI.
Introduced a new dimension—economic incentives—into the debate, prompting later suggestions about integrating safety into budgeting and policy (e.g., procurement lever mentioned by Urvashi).
Speaker: Mr. Amir Banifatemi
There are too many parallel initiatives; we need coordination and a harmonized framework so that efforts are not duplicated and resources can be pooled.
Raises the meta‑issue of ecosystem fragmentation, urging strategic alignment across networks and funders.
Steered the conversation toward collaboration mechanisms, influencing the rapid‑fire round where participants mentioned concrete joint actions (benchmarking, incident reporting, UN coordination).
Speaker: Dr. Balaraman Ravindran
AI standards as technical standards don’t solve the issue because a one‑size‑fits‑all standard will not be contextually sensitive. We need empirical evidence from low‑resource, field‑tested examples.
Challenges the reliance on universal technical standards and underscores the necessity of context‑specific evidence.
Reinforced earlier points about cultural and linguistic mismatch, supporting the call for regional nodes and localized benchmarks.
Speaker: Mr. Quintin Chou‑Lambert
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from a high‑level problem statement to concrete, multidimensional solutions. Dr. Aneja’s opening framed the urgency, while Mr. Singh introduced actionable benchmarks. Ambassador Thigo’s enumeration of structural gaps broadened the lens to include power and resource inequities, prompting participants to surface real‑world examples (Rachel, Chenai) that illustrated cultural and gendered harms. Technical scalability concerns (Natasha) and economic incentives (Amir) added layers of operational complexity, and Dr. Ravindran’s call for coordination highlighted the risk of fragmented efforts. Together, these comments redirected the dialogue toward actionable, context‑aware, and collaborative pathways, culminating in a rapid‑fire round where each participant committed to concrete steps. The key comments thus acted as turning points that deepened analysis, shifted perspectives, and forged a shared agenda for the Global South Network.

Follow-up Questions
How can we ensure that evaluation work captures the societal, ethical, and distributional harms specific to Global South contexts?
She highlighted the need for evaluations to go beyond technical metrics and reflect real‑world risks in low‑capacity, inequitable settings.
Speaker: Dr. Urvashi Aneja
How can we enable compliance with the New Delhi Frontier AI commitments, particularly regarding data sharing and multilingual performance benchmarks?
He asked how to operationalize the commitments made by AI developers to share usage data and benchmark multilingual performance.
Speaker: Mr. Abhishek Singh
What tools and processes are needed to evaluate AI models in diverse languages and build capacity across Global South countries?
He emphasized the lack of linguistic benchmarks and the need for capacity‑building to assess models in many local languages.
Speaker: Mr. Abhishek Singh
What steps are required to make the Global South Network for Trustworthy AI functional and to secure necessary support from all stakeholders?
He raised concerns about moving from launch to actionable, sustainable operations with stakeholder buy‑in.
Speaker: Mr. Abhishek Singh
How can we prevent benchmarks from being neutral or dominated by a handful of institutions, ensuring diverse risk priorities and deconcentrating power?
He warned that benchmark design can embed power imbalances and called for broader, inclusive definition of risks.
Speaker: Ambassador Philip Thigo
Should the network establish regional nodes (e.g., across Africa) and how might that be organized?
He suggested creating sub‑regional hubs to better address the diversity of contexts within the Global South.
Speaker: Ambassador Philip Thigo
How can multilingual benchmark datasets be developed and an annual red‑team exercise be instituted for the Global South?
He proposed concrete mechanisms—datasets and red‑teamings—to continuously test models in local languages.
Speaker: Ambassador Philip Thigo
Can a Global South AI Safety Report be published that adopts an expansive definition of safety?
He recommended producing a regular report to synthesize findings and set a broader safety agenda.
Speaker: Ambassador Philip Thigo
How should the network’s work be integrated into multilateral processes such as the UN AI governance panels?
He asked how the network can feed its evidence into existing global governance structures.
Speaker: Ambassador Philip Thigo
How do we close the accountability loop so that evaluations translate into tangible benefits for citizens?
He highlighted the risk that technical assessments may not reach end‑users without clear pathways to impact.
Speaker: Ambassador Philip Thigo
What sustainable, community‑led system can be built to scale multilingual and multicultural evaluations globally?
She identified the challenge of turning deep, context‑aware pilots into a repeatable, large‑scale evaluation infrastructure.
Speaker: Ms. Natasha Crampton
What internal and external ecosystem changes are needed for grounded evaluations to become standard practice in industry?
He asked how companies and the broader ecosystem must evolve for context‑sensitive safety assessments to be routine.
Speaker: Mr. Amir Banifatemi
How can safety be incorporated into financial planning and create penalties or incentives for unsafe AI?
He noted that without financial stakes, companies lack motivation to prioritize safety.
Speaker: Mr. Amir Banifatemi
How can talent inclusion be improved so that diverse voices are part of safety conversations and tool development?
He pointed out the current lack of representation of people who understand local contexts in safety work.
Speaker: Mr. Amir Banifatemi
How can compute access gaps for Global South researchers evaluating models be addressed?
He identified limited access to high‑performance compute as a structural barrier to evaluation.
Speaker: Ambassador Philip Thigo
How can the many AI safety capacity‑building initiatives and networks be coordinated and harmonized globally?
He observed a proliferation of overlapping efforts and called for a coordinated framework.
Speaker: Prof. Balaraman Ravindran
What cross‑border problems require collaboration across geographies, and how can the network prioritize them?
He suggested focusing on issues that cannot be solved within a single country to drive genuine collaboration.
Speaker: Prof. Balaraman Ravindran
How can learning capabilities be accelerated, incident reporting be improved, and open‑source safety evaluation tools be disseminated?
He highlighted opportunities to build tooling and reporting mechanisms that capture contextual harms quickly.
Speaker: Mr. Amir Banifatemi
How can the evaluation of AI safety be institutionalized at the point of deployment rather than after harms emerge?
She emphasized the need for safety checks to be built into deployment workflows.
Speaker: Dr. Rachel Sibande
How can African language benchmarking initiatives be expanded beyond the current limited set of languages?
She noted that Masakhane covers only ~50 of 2,000 documented African languages, leaving many unserved.
Speaker: Ms. Chenai Chair
How can we prevent AI tools from becoming surveillance technologies without informed consent?
She gave examples of tracking devices being misused, underscoring the need for consent‑driven design.
Speaker: Ms. Chenai Chair
How can the environmental footprints of AI models be evaluated and accounted for throughout their lifecycle?
He called for full‑lifecycle accountability, including water and environmental impacts, in safety assessments.
Speaker: Ambassador Philip Thigo

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Transforming Health Systems with AI From Lab to Last Mile

Transforming Health Systems with AI From Lab to Last Mile

Session at a glanceSummary, keypoints, and speakers overview

Summary

The session opened with Vikalp Sahni presenting an AI-driven, end-to-end health-care platform that aims to eliminate information fragmentation, simplify history collection, and free doctors from administrative tasks [6-9]. Leveraging in-house AI, the system creates a digital identity (ABHA), aggregates patient-generated records into a personal health record, and uses language-aware prompts to summarize conditions and schedule appointments, as illustrated by the Neeti use-case [14-45]. During a clinic visit, the AI-enhanced EMR provides real-time transcription (EkaScribe), alerts clinicians to drug allergies, and automatically generates multilingual discharge notes that are synced back to the patient’s PHR [46-70]. Sahni acknowledged remaining challenges such as multilingual scaling, data verification, and model evaluation at large scale [73-76].
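
To make one step of this workflow concrete, the sketch below shows the general shape of a drug-allergy alert inside an AI-assisted EMR: prescribed drugs are checked against a patient's recorded allergies before the note is finalized. It is a purely illustrative example under assumed names (`check_prescription`, `Alert`, and the sample drug and allergy lists are hypothetical), not EkaCare's actual implementation.

```python
# Hypothetical sketch of a drug-allergy alert in an AI-assisted EMR.
# Names and data structures are illustrative only; they do not reflect
# EkaCare's actual system.
from dataclasses import dataclass

@dataclass
class Alert:
    drug: str
    message: str

def check_prescription(prescribed_drugs: list[str],
                       patient_allergies: list[str]) -> list[Alert]:
    """Return an alert for every prescribed drug matching a recorded allergy."""
    allergies = {a.strip().lower() for a in patient_allergies}
    alerts = []
    for drug in prescribed_drugs:
        if drug.strip().lower() in allergies:
            alerts.append(Alert(
                drug=drug,
                message=f"Patient has a recorded allergy to {drug}; confirm before prescribing."
            ))
    return alerts

# Example: the clinician is warned before the discharge note is generated.
for alert in check_prescription(["Amoxicillin", "Ibuprofen"], ["amoxicillin"]):
    print(alert.message)
```

A production system would match on drug classes and coded terminologies rather than exact strings, but the escalation point, flagging the conflict before the note reaches the patient, is the part the session emphasized.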


Sindura Ganapathi then highlighted the personal relevance of the problem, noting the burden of paperwork for caregivers and asking, half in jest, whether veterinary care counts as well [78-88]. A panel of regulators, funders and researchers – including Dr. Richard Rukwata, Prof. Charlotte Watts, Dr. Monika Sharma and Dr. Trevor Mundel – was introduced to discuss AI’s role in health systems [91-118]. Watts emphasized the shift from hype to substantive, global conversations about integrating AI responsibly into low- and middle-income health systems and the need for rigorous real-world evidence [127-132][213-218]. Mundel stressed that technology accounts for only about ten percent of AI success, with the remaining effort focused on ecosystems and defining human-in-the-loop roles [137-141].


Rukwata described the regulator’s dilemma of accelerating innovation while ensuring safety, and outlined collaborations with the Gates Foundation to create neutral, AI-enabled drug-approval applications [159-176]. When asked about data privacy, Sahni explained adherence to HIPAA, India’s DPDP Act, and the use of end-to-end encryption, while Watts added that funded evaluations will enforce strict anonymity and ethical clearances [271-280][287-292]. Mundel further noted emerging privacy-preserving techniques such as federated learning, citing a grant that used local data to improve an ultrasound diagnostic model without central data sharing [294-301]. Sharma highlighted the funders’ commitment to shared standards, reducing fragmentation, and providing a single evaluation framework to ease researchers’ workload [240-255].


Responding to a query on AI agents for high-anxiety maternal care, Sahni advocated a multi-agent architecture with a grounding agent and continuous human oversight to ensure safety and contextual relevance [336-344]. The session concluded with calls for next-year milestones: a transparent, error-free patient-facing agent (Mundel), operational partners demonstrating real-world impact (Watts), tighter regulator-industry collaboration (Rukwata), and preserving the clinician’s final decision (Sharma) [354-366].


Keypoints


Major discussion points


EkaCare’s end-to-end AI-driven care platform – Vikalp outlined a solution that tackles three core problems: fragmented health information, cumbersome patient history collection, and doctors spending too much time on documentation instead of care. He demonstrated how a digitally-savvy patient (Neeti) creates an ABHA ID, uploads records, interacts with a multilingual AI assistant, schedules an appointment, and how the doctor’s EMR is auto-populated, with AI alerts for drug allergies and automatic translation of notes into the patient’s language [6-9][14-20][30-38][45-51][54-62][66-70].


Technical and operational challenges of scaling AI in health – The presenters acknowledged hurdles such as multilingual support, data verifiability, model evaluation, and the need for robust governance. Vikalp noted “challenges… how to build these things at scale for multiple languages… who is evaluating these capabilities” [73-77]; later, concerns about hype, regulatory pressure, and the balance between speed and safety were raised [144-149][151-158][159-180].


Regulators and funders navigating speed vs. safety – Panelists (Richard, Charlotte, Trevor, Monika) discussed the tension between accelerating innovation and ensuring patient safety, the role of regulators as the “last person to be blamed” [159-168]; funding bodies emphasized the need for rigorous real-world evidence, cost-effectiveness, and coordinated standards to avoid fragmented expectations [213-229][240-247].


Human-in-the-loop, multi-agent architecture and privacy – Trevor stressed that technology is only ~10 % of AI success, with ecosystems and people being crucial [137-141]; Vikalp described adherence to HIPAA/DPDP, encryption, and certification for data privacy [271-280]; later, a multi-agent design with grounding agents and continuous medical oversight was advocated to keep AI safe in high-anxiety contexts like maternal health [336-344][294-301].


Future aspirations for the AI health community – Participants expressed what they hope to see at the next summit: transparent, explainable patient-facing agents; deeper operational evaluations with funder-partner collaborations; stronger industry-regulator cooperation; and maintaining the human clinician’s final authority [354-359][361-366][363-364].


Overall purpose / goal of the discussion


The session aimed to showcase a concrete AI health-care solution (EkaCare), surface the technical, regulatory, and ethical challenges of deploying AI at scale, and gather perspectives from regulators, funders, and practitioners to shape collaborative, evidence-based pathways for integrating AI into global health systems.


Overall tone and its evolution


– The conversation opened with an informative and demonstrative tone as Vikalp walked through the patient-centric AI workflow.


– It shifted to a collaborative and reflective mood when panelists shared personal anecdotes and acknowledged both hype and genuine concerns.


– A serious, problem-solving tone emerged around regulatory pressures, data privacy, and the need for rigorous evaluation.


– The closing segment turned optimistic and forward-looking, with participants expressing hopes for transparent agents, coordinated funding, and concrete outcomes at the next summit.


Thus, the tone moved from demonstration → reflection → concern → optimism, mirroring the progression from presenting a solution to discussing its broader ecosystem implications.


Speakers

Charlotte Watts – Areas of expertise: public health, HIV, gender-based violence, epidemiology, mathematics; Role/Title: Executive Director of Solutions, Wellcome Trust; former UK government official and G20 participant [S1]


Participant – Areas of expertise: not specified; Role/Title: not specified (generic audience member)


Monika Sharma – Areas of expertise: biomedical research, science innovation, health-sector funding; Role/Title: Dr. Monika Sharma, Lead, Novo Nordisk Foundation (India) [S6]


Vikalp Sahni – Areas of expertise: AI in healthcare, digital health platforms, EMR integration; Role/Title: Founder/Representative, EkaCare (AI for Bharat’s Health) [S7][S8]


Richard Rukwata – Areas of expertise: pharmaceutical regulation, regulatory harmonization in Africa; Role/Title: Dr. Richard Rukwata, Director General, Medicines Control Authority of Zimbabwe; chief regulator [S9]


Sindura Ganapathi – Areas of expertise: veterinary medicine, regulatory affairs, conference moderation; Role/Title: Moderator/Host of the session; involved in G20 from the India side [S11]


Trevor Mundel – Areas of expertise: pharmaceutical development, global health, health-innovation funding; Role/Title: Dr. Trevor Mundel, Rhodes Scholar, former medical doctor and PhD in mathematics, senior leader in global health and innovation funding [S12]


Additional speakers:


(none identified beyond the list above)


Full session report: Comprehensive analysis and detailed insights

1. Introduction & three core challenges – Vikalp Sahni opened by asking the audience if anyone had never visited a doctor, quickly showing that virtually everyone has experience with medical care [1-3]. He then identified three persistent problems in health delivery: (i) fragmented information from appointment-booking to vitals collection, (ii) difficulty for patients to convey a complete medical history, and (iii) excessive clinician time spent on documentation rather than patient interaction [7-9]. Sahni positioned EkaCare’s end-to-end AI-driven platform as a solution that uses in-house artificial intelligence to address all three challenges [10-12].


2. Neeti patient journey – The patient-facing workflow was illustrated through “Neeti”, a 65-year-old digitally-savvy woman with diabetes [13-16]. She first creates an ABHA (Ayushman Bharat Health Account) digital identity [17-18] and uploads photographs of her legacy records into a Personal Health Record (PHR) app, where AI extracts and digitises her history [19-20]. Neeti then asks a multilingual AI assistant to “summarise my health”, receiving a concise overview of her conditions [21-25]. When she reports a fever and a foot wound in her native language, the AI asks targeted follow-up questions (e.g., wound location, swelling, odour) and presents language-appropriate prompts that simplify interaction for a senior user [26-34]. After gathering contact details, the system recognises the case as urgent, suggests available doctors on a specific date, and creates an appointment once Neeti selects a provider [35-45].


3. Doctor’s EMR interaction – At the clinic, the physician views an AI-enhanced electronic medical record that already displays Neeti’s past history and current complaints [46-51]. By activating the audio-based “EkaScribe”, the conversation is transcribed in real time, producing verifiable notes that are automatically copied into the EMR [54-58]. The AI detects Neeti’s allergy to amoxicillin, raises an alert, and the clinician promptly switches the prescription to clindamycin [59-65]. All notes are rendered in the patient’s local language and, with a single click, synchronised back to Neeti’s PHR, creating a new node for future consultations [66-70]. Sahni highlighted that this workflow consolidates fragmented data, provides safety checks, and delivers multilingual documentation [71-72].


4. Scalability & evaluation challenges – Sahni acknowledged major hurdles to large-scale deployment: supporting dozens of Indian languages, ensuring model verifiability, and defining who will evaluate performance at scale [73-77].


5. Sindura Ganapathi’s opening remarks – Sindura shifted the tone by first asking whether a veterinary doctor counted as a “doctor” [78-80] and noting the untapped business potential in pet-care [82-84]. She then shared her personal experience as a caregiver for a mother with multiple chronic conditions, confirming that the interfaces described by Vikalp mirrored real-world frustrations with paperwork [85-88]. She further described reading the blog post by the CEO of Anthropic, initially feeling “bleak” about the state of AI in health, but becoming “energised” by the hustle of the summit, and invited participants to share how the last two to three days had made them feel [115-124].


6. Panel introduction – The discussion moved to a panel of regulators, funders and researchers: Dr Richard Rukwata (Zimbabwe Medicines Control Authority), Prof Charlotte Watts (Wellcome Trust), Dr Monika Sharma (Novo Nordisk Foundation) and Dr Trevor Mundel (pharmaceutical and global-health veteran) [91-118].


7. Panel discussion highlights


Charlotte Watts used the early-day energy of the summit to call for a shift from hype to substantive, global conversations about AI in health, especially in low- and middle-income countries [127-132]. She stressed the need for rigorous real-world evidence (randomised controlled trials, cost-effectiveness analyses, and system-integration assessments) before AI can be scaled [213-229].


Trevor Mundel reiterated that technology alone accounts for only about ten percent of AI success; the remaining effort lies in people, workflows and ecosystem design, and in defining the human-in-the-loop role [137-141]. He later advocated a multi-agent architecture with a grounding agent and continuous medical oversight for high-anxiety maternal and infant care [336-345].


Monika Sharma contributed a personal anecdote about her 6½-year-old child’s view of AI, illustrating how early perceptions shape expectations [140-148]. She also argued for shared standards among funders to reduce fragmentation, avoid duplication, and ensure that AI investments translate into real-world impact [240-247]; she reminded the audience that the clinician’s final decision must remain central [366].


Richard Rukwata described the “dual pressure” of accelerating innovation while remaining the ultimate point of accountability when things go wrong [159-168]. He referred to a podcast where this tension was discussed and cited a collaboration with the Gates Foundation to develop AI-enabled screening tools for marketing authorisations, aiming to create neutral applications that speed review without compromising safety [170-176].


8. Q&A – Data privacy – A participant asked for policy-level guidance on data privacy [265-267]. Vikalp responded that EkaCare follows established frameworks such as HIPAA, India’s DPDP Act and NHA guidelines, pursues relevant certifications, and employs end-to-end encryption [271-280]. Prof Watts added that funded evaluations will enforce strict anonymity, ethical clearances and privacy safeguards [287-292]. Dr Mundel introduced federated learning as a promising technique that keeps raw data local while still improving models, citing a grant-funded ultrasound diagnostic system that used federated contributions without central data sharing [294-301].


9. Q&A – TB geospatial decision-support – A participant queried the operational use of geospatial AI for active-case-finding and diagnostic-network optimisation for tuberculosis [300-306]. Charlotte Watts responded that geospatial AI can help identify hotspots, optimise resource allocation and must be evaluated for cost-effectiveness and integration with primary-care pathways [307-315]. Trevor Mundel added that funding constraints and the need for robust validation mean such tools should be piloted in partnership with national programmes before wider rollout [316-324].


10. Q&A – Maternal & infant care agents – When asked about AI agents for high-anxiety maternal and infant care, Vikalp reiterated the importance of a grounding agent and a dedicated medical team to keep the system within safe boundaries [336-345]. He noted that a single-prompt agent can narrow the worldview, whereas a collaborative multi-agent design mitigates risk, especially when mental-health considerations are involved [340-344].


11. Closing wishes for the next AI Summit (Geneva) – Each panelist offered a brief “next-year wish”:


Trevor Mundel: a next-generation, fully transparent patient-facing agent that never makes contraindication errors and inspires complete confidence [354-359].


Charlotte Watts: funded partner organisations presenting operational learnings, moving the dialogue from hype to honest assessments of what works and what does not [361-366].


Richard Rukwata: deeper collaboration between industry and regulators to turn the latter from perceived bottlenecks into partners for safe, effective medicines [363-364].


Monika Sharma: unified evaluation standards to reduce fragmentation and ease researchers’ workload [240-247].


12. Core themes & remaining gaps – The session converged on four core themes: (1) AI must be built around a human-in-the-loop, ecosystem-centric design; (2) rigorous, real-world evidence and cost-effectiveness analyses are prerequisites for scaling, especially in low-resource settings; (3) unwavering commitment to data privacy through legal compliance, technical safeguards and emerging techniques such as federated learning; and (4) collaborative frameworks that align regulators, industry and funders to balance speed with safety. Unresolved issues include concrete policy guidance for privacy-by-design, scalable multilingual model verification, prospective evaluation of geospatial AI for tuberculosis case finding, and detailed specifications for reassuring maternal-health agents [73-77][137-141][213-229][271-280][240-247].


Session transcript: Complete transcript of the session
Vikalp Sahni

All of us here, we would have visited doctors at some point in time or have been sick. Anyone who has never visited a doctor, please raise your hand. So practically everyone. So let’s imagine: how was your experience when you visited a doctor? How do you express your symptoms? How does the doctor interact with you, and how does the interaction happen with the medical systems where EMRs come in? What we are trying to show today, and what we’ve built at EkaCare, is an end-to-end solution that solves three key challenges that we face today. One is the fragmentation of information in care delivery, be it right from taking an appointment to taking vitals. The second is how easily and comfortably you can tell about your history, rather than fumbling through lots of files, and how easily it can be collected and collated.

And the last but not the least. we would want doctors to spend time with us and not with machines writing about prescriptions, rather talking to us, counseling us, connecting with us. So the solution that we have built solves for all these three challenges. Obviously, thanks to the advancement in AI, we have been able to do a lot of this due to the capabilities that we have built in -house. So I’m going to narrate a story. This story is of Neeti. She’s a 65 -year -old female, has diabetes, and she wants to now see how she can do the whole end -to -end care delivery. To start off, Neeti is quite digital savvy. She actually has created her ABHA address.

ABHA is the digital identity that India government provides. This digital identity allowed her to collect a lot of her medical records. records into the app, which is her PHR or patient health record app. She has also taken many photographs so that the AI can read through these photographs and collects her medical history in a digital format so that it can be summarized. Now what happens is Neeti wants to talk to an AI, which is a med assist or an assistant for Neeti. She goes ahead, she just picks up a prompt, says summarize my health. What is happening now is all of Neeti’s health is getting summarized. You would know, Neeti would know these are the kind of things that has come up from the medical records.

Also, there is a prompt that Neeti would get, which is very, very relevant to the kind of things that Neeti is supposed to talk about. But today Neeti came for a very different purpose. And now in a local language, she’s talking to the bot. And she’s talking to Neeti. And Neeti is actually telling that In English, she’s expressing that she has fever and there is a wound in her foot. What AI would start doing now is try to understand more about this specific condition. Where is the wound? Is it swelled? Are there any kind of smell that is coming in? And all of this is happening in the local language that Neeti understands. More importantly, it is not letting Neeti to only type or talk.

There are these prompts that are coming in that will ease off the interaction of a 65 -year -old female. After collecting more information, such as mobile number, the AI would identify that this is an important case and this needs a doctor’s intervention. But which doctor’s intervention? With which clinic? On which day? All of this information will now get collected. This will be displayed. So in this case, Neeti is being told that there is an availability of these two doctors on 14th of February. But she can always say that, okay, I want to do it in a different day. Pick up the doctor. As soon as she picks up the doctor, the appointments gets created. Neeti can actually do all of this by typing or by acting on the prompts as well.

So this is how all the information that Neeti wanted to share with the doctor gets collected, gets summarized, and now appointment is created. The next story goes to when Neeti visits the doctor’s clinic. And when Neeti visits the doctor’s clinic, this is the doctor’s view where a doctor is looking at a classical EMR screen. but how this EMR screen is fitted with these AI utilities that can help a doctor to get the better outcome is what we want to demonstrate. If you see all the current EMR and the current prescription for Neeti is completely empty. There is nothing there. Doctor is looking at the past history of Neeti as well as what are the current ailments and current issues that has been listed.

AI also ensured that it not only figures out the important information for patient, but here a doctor is also able to understand and get to know more about Neeti that there is an uncontrolled diabetes. So this is the kind of person that he’s dealing with. But more importantly, it would be very hard for a doctor to start filling all of these information. During the consultation, doctor just starts the audio -based EkaScribe, which is now doing the interaction between doctor and the patient, recording the interaction between doctor and the patient. These interactions gets converted into medical notes and these medical notes are verifiable medical notes that doctors would see. Again, this entire thing has come out just by the interaction between the doctor and the patient.

Doctor has to just do copy to EMR pad. As soon as the copy to EMR pad happens, this entire information gets filled, whatever has been discussed, all the medication that doctors wanted to do. But here we go and see that during the consultation, the doctor prescribed amoxicillin. But the patient’s medical history said that he or she is allergic to amoxicillin. The capable AI-based EMR is now alerting that the patient is allergic to it. Without actually going deeper, a doctor can very easily go ahead now and change this medication to provide for a better outcome as well as to reduce the medical errors.

So it’s changed from amoxicillin to clindamycin. As it changed, the prompt also changed. If you look at the information, all filled, the PDF view of the patient will have the entire medications, everything created in the local language. There is a translation of all the remarks, advices, everything in the language that patient understands. And at the click of a button, this information goes and sits into the patient’s PHR app, creating another node into her medical system. That can be used for the further consultation and any kind of other ailments. So that’s what is the power of AI and the utilities that we are seeing today. The care process right from being fragmented to being consolidated, understanding the patient’s entire medical history to making sure that the doctor’s time is saved while he’s seeing more patients and more medical data is captured.

Today, all of that is possible. But yes, there are challenges. How to build these things at scale for multiple languages, how to generate the data so that your models are verifiable at that large scale. Who is evaluating these capabilities that are being built? All of these are challenges that we as developers face. And I’m looking forward to building more and working more in this domain.

Sindura Ganapathi

I’ll ask you to take a seat. When you said, is there anyone who has not visited a doctor, instinctively I was asking, does veterinary doctor count? Because I’m a veterinarian by background. And then it’s only a half joke, actually. In the pet care industry, there is real value and business to be made there. So just a thought. And on a more serious note, you could change the name of the lady and adjust age, et cetera. That could be my mother. And I deal with this personally as a caregiver, has all these conditions, deal with so many papers. And every interface you mentioned is a leaf out of my personal life. So thank you for thinking about building a solution here.

I will invite my panelists one by one; please join us on the stage. First, Dr. Richard Rukwata. He is the chief regulator; he is the director general of the Medicines Control Authority of Zimbabwe. I have very high regard for regulators because I have been working on our regulatory agency and its streamlining, and I can see how difficult a job that is. And the fact that you have seen this through for ML3 recognition, that’s a wonderful accomplishment. Congratulations. Congratulations on that. Not an easy job. And also, you are involved in the regulatory harmonization work of Africa, and there are a lot of interesting thoughts you will hopefully be able to share. Next, I would like to invite Professor Charlotte Watts.

Last we saw was in G20. Hopefully, it brings back memories. Yes. Happy ones. I’d like to keep it that way. She has had an extensive career in health care, HIV, gender-based violence, epidemiology, mathematics, and deep experience working in the government, the UK government, which was the capacity in which she came for the G20 meetings, which I was involved in from the India side. So it’s a pleasure to have you back, Charlotte. And now she’s working at the Wellcome Trust as Executive Director of Solutions. I would love to hear more about how you are thinking about these things. And next I would like to invite Dr. Monika Sharma. I happened to meet her just now, and she is the lead for the Novo Nordisk Foundation in India.

And welcome. And… And her background is also in this both biomedical field, science innovation field, but also has extensive experience working in putting together funding programs, whether it is Newton Fund, whether it is IRTG, Germany’s International Research Training Groups, or India’s BioPharma Mission Program. So all of these, I’m sure, will come in very handy in your current role and would love to hear from you on thoughts related to the topic today. And last but not least, my dear friend and mentor, Dr. Trevor Mundell. I should say Dr. Dr. Trevor Mundell. He is both a – he has an unusual background. People who work with him smile when I say unusual. And Trevor, he did medical degree and then he figured he wanted a Ph .D.

in mathematics. So he is a Rhodes Scholar and has extensive experience in the pharmaceutical industry, from early research to development, and decade-plus experience in global health. With that, we will get started. First, to begin with, I think hopefully you all have mics. For me personally, coming here after having read the blog that went out very famously by the CEO of Anthropic, I came in with a very bleak feeling, to be very honest. It’s kind of depressing: what are we creating? But I have to say the last two, three days have been energizing, seeing all the chaos in terms of interactions, people talking to each other, hustle, just hustle, and people excited about the product they are building. It brought me memories of the vegetable market where I grew up, where people are, like, life is there, right? People are trying to sell something, people are trying to buy something, people are talking. And the reason I talk about that as a happy thing is it’s nice to see so many human beings; that’s what came to my mind in the backdrop of that blog. So I would just love to hear from you: what was your feeling as human beings seeing all this? Anything that you want to particularly share from the last two, three days? You have been here.

You saw all of this. What did that make you feel? Because I think going forward, this feeling of human beings, I think, will have a currency of its own. Anybody wants to volunteer and say something? An open ended question.

Charlotte Watts

Yes, I’m happy to jump in. So I just got here actually yesterday. So I actually missed, I think, the early start of the week, which I heard was fantastic because you had the youth here. As well as, you know, older people who’ve been in the global health or the global sort of sphere or in the AI world for longer. So that that mix and the drive of the kind of energy, I think, is what I was hearing people tell me about the start of the week. But now I’ve just been here. Yeah. Sort of last night and today. And for me, what I feel quite reassured about, I wonder if it’s so, you know, the change.

is so profound, and so I suppose I was sort of wary because there’s so much hype, and then clearly the risks are being articulated. But what I feel reassured about, in going to a number of sessions, is that actually we’re starting to have the more meaningful conversations about what this really means, getting beyond either the hyper-sell or the hyper-fear to how do we navigate this space, and also how do we navigate this as a global community, because this is not something that’s one country’s problem to fix. So actually I’m feeling that, you know, this is a really important conference, and we’re starting to get into the nitty-gritty of how on earth we move forward in the best way.

Sindura Ganapathi

Anybody else want to share? Trevor and then Monica.

Trevor Mundel

Well, Sundara, you know, what I’ve heard frequently in this meeting, and I hear it quite often in the AI application space, is that technology is just 10 % of the exercise in applications of AI. And the rest is really around people and ecosystems. And as soon as people say that, they then go back to talk about technology. So, I am interested in how we do more than just pay lip service to this notion that we really need to think about the ecosystem and the people involved, probably more than the technology itself. And defining the actual role for humans in the loop is going to be, I think, you know, as important as any of the technological advances.

Monika Sharma

So, Sundara, I don’t have an experience from the summit as such, because I’ve just arrived here. But I want to share a very relevant experience from this morning. So while I was coming here, I have a six-and-a-half-year-old who just saw AI on my, you know, computer, and he said, where are you going? I said, yeah, I have a meeting to attend. He said, AI? And I was like, oh, he’s able to see it. I said, so you know what this is? He said, yeah, it’s artificial intelligence. And I said, what else do you know about it? He said, yeah, soon there are going to be robots, robots doing everything for us. And I was like, no, but still you would need me. And I found, like, oh my god, that’s not a good start of a conversation; like, everybody is influenced by this. So thank you so much for bringing that human back to this summit. Yeah, that’s what I thought I’ll add, you know, a conversation from my household this morning. Thank you.

Sindura Ganapathi

Yeah, no. Charlotte, I hope you’re right that there is a lot of hype there; now I’m praying for hype. After reading it, how many of you have read what I referred to, the blog by Dario Amodei, the CEO of Anthropic? Okay. Okay. Few hands. I am not even sure whether I want to urge you to go read it because it really makes you think. And there were some people who are in the field. They said, I am choosing not to read it because I don’t want to know. No. So it’s a good thing to hear this, that this human in the loop and the way we responsibly develop, because that’s the theme we want to explore, especially in the context of health.

That, I think, is a good segue, Dr. Richard, I want to ask you, start with you. Job of a regulator, I said, is hard. The reason I experienced it firsthand, having now very close. We work with our regulatory system, et cetera, where you have two extreme pressures on a regulator and the one it needs to move fast. It needs to be less like everybody wants it to be. regulation and you want to speed up innovation and any every day gets counted and you are held to the metric. That’s one extreme. The other extreme is, boy, if anything goes wrong, who is the first person? Who approved this? Who allowed it to come out? So, these are two extreme things and usually in a slower cycle, you are able to have some time.

So, how are you thinking about it in the age of both the eye, but in general, in reconciling these two extremes of demands put on

Richard Rukwata

Yes. Thank you for that insight. I have to think on my feet here, but you’re quite right. It’s a matter of industry wanting more results from the regulator for their investment, and also wanting to retain, or rather wanting the regulator to retain, responsibility when things go wrong. I remember watching a very interesting podcast, I think it was called Moonshot, and in this episode they were saying, well, if all the jobs are taken by AI, regulatory jobs will be the last to remain because people always have, people should always have somebody to blame, right? We can’t say, oh, you know, somebody was harmed? No, AI did it. No, that would never work. So worst case scenario, I’ll be the last person there so that they can hang me when something goes wrong.

At least I have that job security to think about. But really, with respect to what is happening as far as industry’s expectations are concerned, we see a lot of potential in AI. We’re currently working, with a grant from the Gates Foundation, on an application for screening applications for marketing authorizations. I think those in our industry, the pharma industry, know that this is the biggest source of angst amongst industrialists, that regulators take too long, and we are seen as an impediment to progress, actually. So we also blame industry. We’re saying, well, you know, you submit, you know, incomplete applications and then blame it on us. So we’re hoping that with technology we’ll have, you know, applications in the near future that can work for both sides of the fence, right?

Neutral applications that don’t necessarily speak to one side, but they enable all of us to at least reach a common position very quickly. This is the beautiful thing about computers, right? They don’t feel any type of way. They don’t feel any type of way about you. They don’t necessarily like you. They don’t dislike you. so we’re hoping that this will allow us to do I was just saying not yet so we’re hoping that as we work more towards the development of these tools we’ll be able to see more traction from industry so that we become a more efficient part of the supply chain from development to market and not to be seen as the barrier to entry in this field.

Thank you.

Sindura Ganapathi

That’s very helpful and also there is both a challenge for a regulator when this AI speeds up the cycle of innovation brings new complexities but also itself a very good tool in either summarizing a complex application or building models that allows a few people to actually have the same capability as a well developed pharma so be on the same page so lots of interesting possibilities here which in India we’re also thinking about. along those lines all three of you are coming from one type of shared commonality which is funding innovation and as a funder of innovation you are also in not too dissimilar way are trying to balance promoting innovation while upholding safety and minimizing risk etc so i would like to hear from each of you because each of you are different kind of funders how you are thinking about balancing these two in the funding programs and scouring innovation and speeding that up you can go in any order you can thumb wrestle

Charlotte Watts

Trevor’s pointing to me, but I went first last time, but now I can go. Um, I mean, we fund, so you know, a range of innovations with the ambition of improving and saving lives.

Increasingly, we are funding innovation

Trevor Mundel

you know on the acceleration front we look at it you know on the acceleration front we look at it in that every month we don’t have the next generation malaria vaccine. You know, and certainly every year we’re seeing hundreds of thousands of deaths in young children. Every year we don’t have the enhanced personal coaching in education. We see a generation that is losing opportunities. So we feel a tremendous pressure, I know, from the funder side in terms of how do we speed the availability, the access to a tool which looks like it might be a solution to some of those vexing problems. But I think that it really behoves us here to think about completely focusing on fast might be slow.

And we have to have this moment of reflection because what could derail the good application of AI? You know, you think about it in the health area, which is so sensitive, the few errors, like on the regulatory front, relatively few errors that could occur, the, you know, unfortunate outcome for a patient. which can be attributed to a system which was probably misused by, you know, the people who are using it maybe, but nevertheless will be attributed to AI. And that leads to a tremendous deceleration and things not moving ahead. We take the lesson of the self -driving vehicles, you know, where they may be incredibly good drivers and better than the average human at driving, but one fatal accident puts that whole enterprise at risk.

So I think from the funder’s perspective, we need to have a situation where maybe taking a little bit of a reflective and a slower approach might be fast.

Sindura Ganapathi

Monika?

Monika Sharma

So I represent the Novo Nordisk Foundation and we support health, people and planet both. So at this point, sitting with funders, global funders, with yourself, I think it sends a strong message of how important AI is at the moment with respect to health. So while we as funders are trying to address different parts of the ecosystem addressing health, having AI bringing evidence to this really matters. So I think I really feel that having a joint approach towards it is kind of strengthening the whole ecosystem of AI.

Sindura Ganapathi

QR code allows you to look at it and all the details I believe are there. I have not tried it. I would very quickly like to hear from any of you or all of you. What are you trying to what are you hoping from this? And after this, you know, usually panels, I find panels very boring, by the way. So and as a person sitting there or as a person trying to sit here and trying to give Gyan in two minutes. So I would love to make it more interactive. So get your questions and there is still time. So right after this, hopefully I would like to see you interacting and sharing your thoughts and sharing questions.

I’ll be coming to you. So who wants to say your hope from this call?

Charlotte Watts

Yeah, so we’re really excited about this announcement today. You’ll see it’s the big health research and innovation foundations coming together. to jointly support what is a major initiative. And essentially, what we want to do here is say, how do we generate real -world evidence on what does it really mean and are we really seeing real -world health impacts once we start to integrate AI into different health systems? So we have lots of exciting opportunities that are showing the efficacy of particular application, but what this call really wants to support is rigorous evaluations of where AI systems are integrated into clinical decision -making. Our focus is on low – and middle -income countries. We are interested in really asking a range of questions.

What does it mean for the health system? Are these new initiatives actually operable? Can it be integrated into what often is quite a big bureaucracy of a health system? what are the costs associated with that? Are these interventions actually cost effective? In the end, ministries of health have to make decisions based on affordability. So how do we learn more about the costs of this transition? And what are the things we didn’t expect? Right. And what we see, you know, if we look at the evidence base, we’ve got a lot of exciting evidence of interventions that show promise. We’ve only got a relative handful of rigorous randomized controlled trials that are actually assessing interventions when they’re implemented.

So there’s a massive gap there. And then we’re also now starting to see in different contexts anecdotal evidence of where AI has been integrated, but it’s actually butted against the system. And actually that opportunity isn’t realizing and sort of is showing it’s easier said than done. So basically this investment is to try and address that evidence gap. And I just want to call out that Jay Powell is here and APHRC. who are key partners on this in supporting the implementation and for APHRC, the contextualization of the work that we hope to be supporting in Africa.

Sindura Ganapathi

Wonderful. Thank you. Anything else, Trevor, you want to add?

Trevor Mundel

Well, I just want to say thanks to our partners that welcomed the Novo Nordisk Foundation on this initial effort. I hope it’s the start of even more in the future, because the global health world has been plagued by this lack of primary data. You know, us and others have funded a lot of modeling and simulation around global health problems. But you cannot transcend the lack of primary data at the end of the day. And AI is too important for that to be the constraint that impedes implementation at the end of the day.

Monika Sharma

I thought maybe. It’ll be good to also add that how as we fund this together. envision this as a commitment towards shared standards. So while we’ll be working together as part of this call, we are saying that the real world evaluation is not optional. It is the foundation. And by aligning together, we are kind of defining what good looks like so that we reduce the burden on countries and developers who would otherwise face a patch of I would say patchwork of expectations. And secondly, I would say that by joining hands, we are reducing fragmentation in a rapidly evolving field. And now that we are coordinated, we are getting away with the risk of duplication, the quality that we want to see in the applications or in the products.

And we make sure that the investments that we do are getting into the real world. I mean, they do create an impact because of the coordination that is part of this whole process. And I also say that when we sit together, it adds the seriousness. to the ecosystem that what we are doing is not a side experiment. This is something that we are creating as infrastructure for a long -term process that I would say governments have been asking for it. And the best part as a researcher, I would say, is that we don’t, like the researchers don’t have to navigate three different timelines. Okay, yeah, that’s great for that. And no three different criterias. We just have not like one agreed aligned criteria.

And I would say that no three different deadlines, no timeline. So it makes really life easy as a researcher, I would say. I hope you get some interesting calls from it.

Sindura Ganapathi

So if we have questions, is there a mic going around? I hope there is. If not, I’ll give you mine so I don’t have to answer your questions. And please, there is one hand up there. Okay, please direct your question. Including to Vikalp, if you have questions. Yep. Let’s start with the gentleman at the back and then you’re up next.

Participant

Thank you, folks. Very interesting. My questions around data privacy and data privacy by design. And the lady mentioned three different parameters. Could you elaborate more on how data privacy can be incorporated, at least at a policy level?

Sindura Ganapathi

Anyone, anyone wants to take that question, at least in the context of this call, I guess, or in general? Yeah. How are you handling this? Yeah.

Vikalp Sahni

So I think health data is quite sensitive and I mean, more sensitive data rather when it comes to country, when it comes to individual, when it comes to even places such as police, military, et cetera. So. So it’s a pretty valid question. Some of the things that we as an organization try to follow is the general guidelines that has been provided by the competent authorities, such as be it HIPAA on the healthcare data or DPDP, which is the Act for Data Privacy in India. And more importantly, if we look at the data exchanges, such as NHA in India have also created clear guidelines. I think following those guidelines and getting yourself tested against those guidelines are fundamentally important.

And it has become so sensitive that today a lot of our customers do ask us whether you have continuous, applicable certificates from these privacy authorities as well as these privacy-based frameworks. So that’s how we solve for it. And I think it’s a good thing. In health, it is fundamentally critical. And the technology, how it is growing, I think there are multiple other ways as well, like end-to-end encryption and so on and so forth, where we can use it to keep things private.

Sindura Ganapathi

So there are two aspects to it. One is technological, and another is policy. There are other sessions entirely focused on people who are working on it. So I wouldn’t put you in the shoes to answer that. But on the technological front, both Charlotte, if you want to address, or Trevor, on what are some of the things, model learning without data being exchanged, or synthetic data, so many aspects of it which have been at the forefront. And Charlotte, whatever you want to add.

Charlotte Watts

I mean, I just… I just wanted to say, in terms of the evaluation… that we want to support through this funding. We’re very much expecting clearly an anonymity of, you know, basically for those evaluations to adhere to high quality research standards. So the kind of bars and checks and controls that you’d expect if you’re doing any sort of research study on health and the sort of ethical guidance and clearance procedures that you need to adhere to. So for us, that’s just an important part of any aspect of research that we support and that we’ll be supporting in this initiative. And that includes issues of privacy and other things.

Sindura Ganapathi

Do you want to say anything about the technological emergence of any new technology that has been helping with preserving data privacy, but not the innovative learnings and improvements of the models?

Trevor Mundel

Yeah, Sundar, you know, so I think that for us, there’s no compromise on patient data privacy from the clinical trial, as Charlotte has mentioned over here. But AI does raise a lot of other issues that go almost beyond that. So, for instance, you know, the various models of federated learning that people have introduced, where you can have locally private data but you contribute to the evolution of a model, which improves because it has access to a very diverse data source. Now, has that actually been regulated? We had an example of one of our grantees who produced a very good system for using ultrasound to diagnose certain chest diseases, and it was based on a federated contribution from different groups that kept their own data local and private, but they contributed to the model.

And, you know, that hasn’t really been tested, and all of the policies around whether that is a disclosure which is acceptable now in the age of AI, I think it’s something that we may want to look at. So, I think it’s something that we encourage, with the right framework.

Sindura Ganapathi

Thank you. Do you have the mic? Okay. Then if you have another mic, you can take it to the gentleman, madam, and then after you.

Participant

My question is to Professor Watts. You mentioned about clinical decision support. So the context from an Indian healthcare setting, as you’re well aware, is majority of our health is run at the front line. So there’s also an element of operational decision support as such. So there’s a bunch of geospatial AI models that we are working with Google for geospatial inferencing in the tuberculosis space, mostly active case finding and then diagnostic network optimization. So my question is, from an evidence perspective, we obviously are doing some retrospective analysis, and we plan to follow it up with a prospective analysis as such, although it’s a single user. So I’m wondering if you have any thoughts on that. I think it’s a great question.

But would this be of interest, and what is your level of inclination to operational decision support? Because I’m a physician myself, I’m a medical informaticist as a PhD. I can tell you one thing for sure: the patients who come into the system, they’re for the most part taken care of, but then there are all the silent patients who are out there undetected in the community. So what’s your inclination indeed in this research grant for such solutions?

Charlotte Watts

It’s a wonderful question because essentially I come from public health. So our interest, I think our collective interest, is actually how do we in particular focus our evaluations and generate evidence where there’s the greatest opportunity to improve health and to strengthen systems. And some of that might actually be other opportunities to really help to improve health and to strengthen systems, such as outreach and improved care for the underserved. And so we’re not going to say, you know, this works and this fits and this isn’t fit; but ultimately we are interested in how does that integrate within the system. In the call we mentioned the importance of looking at interventions at the primary care level, not only at tertiary care. And I think the things that will resonate in our interest is really: are there areas where actually the opportunity is big enough that it merits those assessments, to say is this really translating into tangible health impacts, and is the return on that actually affordable, and is it something that could be scaled? So that issue of, you know, how does it connect with the system is an important part of the question as well that we’re interested in.

Trevor Mundel

Now I do think it’s a very important question because you’re probably all aware of the constraints that we face now in the global health space in terms of funding. some of the exciting new technologies that are coming along, whether it be at the level of the Global Fund or of Gavi, who both have not met quite the standard that we would like to in their replenishments. So there’s just a reduced amount of funding available for those critical commodities that could be life -changing. And when we get a TB vaccine, which we hope we might have in, say, three years, how are we going to afford to actually put that out to the people who need it?

So it’s exactly the kind of targeting that you’re talking about in terms of risk targeting that can make all the difference in terms of taking now the lesser amount that we can afford, but putting it to where the need is the greatest. And that matching, which the AI systems and that geospatial targeting that you’re talking about, is exactly the solution that we need to promote and understand how it works.

Sindura Ganapathi

So, person who has a mic, and then you can hand it to the person after you ask the question.

Participant

It has been like a great session. So how do we go about building AI agents that are not only intelligent, but reassuring in very high anxiety environments like maternal and infant care? How do we go about that? I would love to hear your thoughts because we’re building something on the same.

Sindura Ganapathi

When you say high anxiety, just so that.

Participant

High anxiety for maternal and infant care, because even myself, as a new mother, I feel that there are a lot of open areas where the mother doesn’t know what to do. Right. And it’s an open field. And the pediatrician, gynaec and mother support system is very low when you go down to tier two and tier three cities. How do we go about building that? I would love to get some thoughts.

Sindura Ganapathi

Take it.

Vikalp Sahni

So I think one of the things that we have done while we build a lot of these agentic pipelines for doctors, for users: having a human in the loop while the development is happening is extremely, extremely important. And that’s what Trevor also mentioned, because today, how this can go and where it can lead is not something that you can fully control. And so there are these systems that are specifically designed where anonymous, de-identified conversations are practically being distilled to see if the agents are working together in tandem. The second thing, and that’s more technical, that we have sort of figured out is the models are quite capable. But when you are running them with a single goal, or a single agent with a single prompt, that practically, at times, narrows down the whole worldview.

But if you are running multiple agents collaborating together, where there is a grounding agent whose job is to make sure that the other agent is not sort of going beyond what the boundaries are, I think that is fundamental in healthcare. If it is just a single agent with a single prompt, that’s what we should avoid, because it’s quite a deep workflow, especially if we look at maternal health and things where mental health comes into being. It’s fundamentally important that we follow some good technical principles of creating a multi-agent architecture, but more importantly, have a human in the loop. Because we, as a company, haven’t been able to find a way to get out of it.

That’s why we have like a strong 10 member medical team, which is also growing where these are doctors working with the technology.

Sindura Ganapathi

Thank you. And unfortunately, I have been told we are out of time, but speakers will be available. If you can please come up to them. And one very quick thing, just before we go, anything you want to share, what you would like to see next year when we come back to AI Summit? I just heard that it is being hosted in Geneva. So we are all showing up there. We have all these aspirations. What would it look like when we show up there to say, OK, this year we did something together? Anything that comes to your mind?

Trevor Mundel

You know, I’d love to see the next iteration of Vikalp’s patient-facing agent, and that would be an agent that would be able to guide you in your health pathway and would be completely transparent. And that I would actually understand why it made its decisions. And I would have 100% confidence that, in that anxiety-provoking situation, it never made an error related to guidance, drug contraindications. It was always correct in those things. And I wouldn’t have to be concerned about that. That’s the next iteration that I’d love to see next year.

Sindura Ganapathi

Next year, maybe.

Charlotte Watts

And what I would like to have next year is instead of all of us as funders sitting up here, I would like to see some of the partners that we’re funding who are doing work to really understand what this looks like operationally and to have really honest conversations about what’s working and what’s not working. And so we’re moving away from the hype to really actually starting to get into the nitty -gritty of what this could be and can be.

Richard Rukwata

Okay, so quickly, I would like to see a situation where there’s more collaboration between industry and regulators, because ultimately we’re on the same side. We want the same thing: better quality, safe and effective medicines for all our people. So development in that area would be very exciting.

Sindura Ganapathi

Final word to you.

Monika Sharma

I think I would still love to see that no matter how much evidence we generate from AI, no matter what we do, we still have that last, uh, word from the doctor who is sitting there, and never forget the human angle while we navigate the AI space. That’s what I always want. Thank you so much.

Sindura Ganapathi

Yes, thank you so much. Next time we meet, I hope we all feel as optimistic as we do, and some more. Thank you so much for attending. Thank you, speakers. Thank you, speakers. We just have a souvenir for you from the India side for the session. Thank you so much.

Related Resources: Knowledge base sources related to the discussion topics (36)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“She first creates an ABHA (Ayushman Bharat Health Account) digital identity”

The knowledge base confirms that ABHA is the digital health identity issued by the Indian government as part of the Ayushman Bharat Digital Mission, with hundreds of millions of IDs created [S7] and described as the government-provided digital identity for health records [S10] and [S104].

Confirmed (high)

“Sahni acknowledged major hurdles to large‑scale deployment: supporting dozens of Indian languages, ensuring model verifiability, and defining who will evaluate performance at scale”

Sahni’s identified challenges match the technical issues highlighted in the knowledge base, which cites the need to build AI systems that work across multiple Indian languages, generate verifiable data, and determine who evaluates AI capabilities in healthcare [S1].

Additional Context (medium)

“ABHA is the digital identity that India government provides”

Additional context from the knowledge base explains that ABHA IDs are linked to a federated health record architecture and are a core component of the Ayushman Bharat Digital Mission, enabling health records to move across providers [S42] and [S104].

External Sources (107)
S1
Transforming Health Systems with AI From Lab to Last Mile — -Charlotte Watts: Executive Director of Solutions at Wellcome Trust, extensive career in healthcare, HIV, gender-based v…
S2
The Power of Satellites in Emergency Alerting and Protecting Lives — Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for this introductory remark. I will…
S3
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S4
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S5
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – **Participant**: Role/Title not specified, Area of expertise not specified
S6
Transforming Health Systems with AI From Lab to Last Mile — -Monika Sharma: Dr. Monica Sharma, Lead for No One Artists India Foundation, background in biomedical field and science …
S8
Transforming Health Systems with AI From Lab to Last Mile — – Vikalp Sahni- Richard Rukwata
S9
Transforming Health Systems with AI From Lab to Last Mile — -Richard Rukwata: Dr. Richard Rukwata, Director General of Medicines Control Authority of Zimbabwe, Chief Regulator, inv…
S10
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — I will invite my. Panelists one by one, please join us in the on the stage. First, Dr. Richard Rukwata. He is the chief …
S11
Transforming Health Systems with AI From Lab to Last Mile — -Sindura Ganapathi: Conference moderator/host, has veterinary background, works with regulatory agencies, was involved i…
S12
Transforming Health Systems with AI From Lab to Last Mile — -Trevor Mundel: Dr. Dr. Trevor Mundel (medical degree and Ph.D. in mathematics), Rhodes Scholar, extensive experience in…
S13
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — And welcome. And… And her background is also in this both biomedical field, science innovation field, but also has ext…
S14
Keynote-Vishal Sikka — “Bridging that gap requires delivering correct systems, trusted, verifiable, reliable systems that deliver value to peop…
S15
Safe and Responsible AI at Scale Practical Pathways — Ashish Srivastava brought a practitioner’s perspective, highlighting three critical challenges: data interoperability ac…
S16
Harnessing Collective AI for India’s Social and Economic Development — <strong>Moderator:</strong> sci -fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S17
Beyond North: Effects of weakening encryption policies | IGF 2023 WS #516 — Importantly, WhatsApp collaborates with other companies and civil society groups to resist encryption regulations. This …
S18
WS #241 Balancing Acts 2.0: Can Encryption and Safety Co-Exist? — Audience: No problem. My name is Vinicius Fortuna and I work on internet access resilience and privacy at Jigsaw and tha…
S19
The AI Pareto Paradox: More computing power – diminishing AI impact?  — To break through this plateau, we have to reverse the ratio. The real breakthroughs, the 80% of successes that actually …
S20
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S21
Keynote-Roy Jakobs — “Innovation and governance must advance together With speed Because trust determines adoption … If they move at differ…
S22
The Foundation of AI Democratizing Compute Data Infrastructure — Federated learning approach that allows data contribution to global models while maintaining local ownership and control
S23
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He cautioned against techno-solutionist approaches: “If we throw resources at AI, we can fix the healthcare system. So w…
S24
Conversational AI in low income &amp; resource settings | IGF 2023 — AI technologies can bridge the digital divide in healthcare. Existing care solutions have the potential to become global…
S25
https://dig.watch/event/india-ai-impact-summit-2026/how-the-global-south-is-accelerating-ai-adoption_-finance-sector-insights — And I think that’s true in the short term when the ecosystem is getting prepared. But in longer term, frauds and mis -se…
S26
Panel Discussion AI in Healthcare India AI Impact Summit — “One of the big barriers is multilingual.”[1]. “Maybe use cases, and I briefly hit on this before, but I think certainly…
S27
Cracking the Code of Digital Health / DAVOS 2025 — Key points included the need for better data liquidity and interoperability to fully leverage AI’s potential in healthca…
S28
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S29
How Trust and Safety Drive Innovation and Sustainable Growth — And an organization like the ICO is there for both sides to see, well, there’s someone actually overseeing that. And tha…
S30
Policymaker’s Guide to International AI Safety Coordination — OECD Secretary General Mathias Cormann emphasized that trust is built through inclusion and objective evidence. He ident…
S31
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And how do we demonstrate that the risks have been managed well? And that is where the assurance ecosystem that Rebecca …
S32
Overview of AI policy in 15 jurisdictions — Summary China remains a global leader in AI, driven by significant state investment, a vast tech ecosystem and abundant …
S33
Global Digital Governance &amp; Multistakeholder Cooperation for WSIS+20 — Ebert calls for creating transparent governance rules that can keep pace with rapid AI development while ensuring benefi…
S34
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S35
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Development | Infrastructure Examples include tumor board preparation, holistic patient data aggregation, post-discharg…
S36
AI for Good Innovation Factory Grand Finale 2025 — – **Accessibility and Affordability Criteria**: Judges consistently emphasized the importance of solutions being deploya…
S37
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — A human rights-based approach with community solutions is advocated AI policies in Africa should ideally espouse a cont…
S38
Foster AI accessibility for building inclusive knowledge Societies: a multi-stakeholder reflection on WSIS+20 review — 5. Information accessibility, endeavouring to ensure the availability, affordability, and accessibility of information t…
S39
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Evidence-Based Policymaking: Mechanisms and Challenges ## Industry Perspectives: Systems Integration Challenges ## …
S40
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatory released a beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S41
Why science matters in global AI governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S42
MedTech and AI Innovations in Public Health Systems — Does it actually save cost, as sir was mentioning. And the third element of institutionalization, sir, is also the use c…
S43
Obama’s 2013 Inaugural: a doctor’s diagnosis — In the section about Health, the construction resonates twice over as: ‘these things [Medicare, Medicaid, Social Securit…
S44
A Guide for Practitioners — – What are the current macroeconomic, political and social environments, and how do they relate to health? A thoro…
S45
Global AI Policy Framework: International Cooperation and Historical Perspectives — Despite coming from different backgrounds (diplomatic/legal vs academic), both speakers advocate for patience and carefu…
S46
AI could save billions but healthcare adoption is slow — AI is being hailed as a transformative force in healthcare, with the potential to reduce costs and improve outcomes dramati…
S47
The mismatch between public fear of AI and its measured impact — In medicine and science, AI has shown promise in pattern recognition and data analysis. Deployment is cautious, as clinic…
S48
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S49
Transforming Health Systems with AI From Lab to Last Mile — A central tension throughout the discussion involved balancing the urgency of addressing healthcare challenges with the …
S50
Indias AI Leap Policy to Practice with AIP2 — This unexpected disagreement emerges around the pace of AI deployment. Fred emphasizes the dual nature of AI and the nee…
S51
Building Population-Scale Digital Public Infrastructure for AI — Balancing speed of diffusion with safety, especially in health applications
S52
AI in healthcare gains regulatory compass from UK experts — Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray…
S53
Safe and Responsible AI at Scale Practical Pathways — Guardrails, Human‑in‑the‑Loop, and Risk‑Assessment Mechanisms Are Essential for Reliable Deployment
S54
Leveraging AI4All_ Pathways to Inclusion — “First, access is a multi -layered problem”[16]. “Good technology by itself does not bring in or include people”[18]. “T…
S55
WS #460 Building Digital Policy for Sustainable E Waste Management — The discussion identified several technological applications: AI for predictive analytics, IoT for real-time tracking, a…
S56
Secure Talk Using AI to Protect Global Communications & Privacy — This story brought a visceral reality to the discussion, moving beyond abstract statistics to show the personal and inst…
S57
WS #283 AI Agents: Ensuring Responsible Deployment — Will Carter: Quite a lot of thought. This has been core to our mission at Google from the beginning, from our earliest d…
S58
Agentic AI in Focus Opportunities Risks and Governance — “And of course, humans have to have full oversight end -to -end.”[64]. “And we want these agentic payments to be safe an…
S59
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S60
AI agent autonomy rises as users gain trust in Anthropic’s Claude Code — A new study from Anthropicoffersan early picture of how people allow AI agents to work independently in real conditions….
S61
Transforming Health Systems with AI From Lab to Last Mile — The first challenge addressed was the fragmentation of healthcare information and delivery systems. Traditional healthca…
S62
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — All of us here, we would have visited doctors at some point in time or have been sick. Anyone who has never visited a do…
S63
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — But I think today it’s affecting our tasks. It’s affecting tasks of efficiency. You know, we’ve already started doing pr…
S64
Conversational AI in low income & resource settings | IGF 2023 — Furthermore, the CEO asserts that trust can be bolstered in healthcare through the implementation of AI solutions. For i…
S65
Cracking the Code of Digital Health / DAVOS 2025 — Key points included the need for better data liquidity and interoperability to fully leverage AI’s potential in healthca…
S66
Safe and Responsible AI at Scale Practical Pathways — The panel revealed that making data AI-ready is fundamentally a governance challenge rather than merely technical. The a…
S67
Laying the foundations for AI governance — The panel showed relatively low levels of direct disagreement, with most speakers identifying similar obstacles (time, u…
S68
Panel Discussion AI in Healthcare India AI Impact Summit — One of the big barriers is multilingual. So. So you can’t use a model that’s good in English, but it’s not good in other…
S69
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S70
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — – **Balancing Safety and Innovation**: A central theme was dispelling the notion that safety and innovation are incompat…
S71
Policymaker’s Guide to International AI Safety Coordination — OECD Secretary General Mathias Cormann emphasized that trust is built through inclusion and objective evidence. He ident…
S72
Multistakeholder Partnerships for Thriving AI Ecosystems — We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadwani AI Global. Nakul is a mission -driven te…
S73
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — He argues that AI should augment clinicians while keeping humans central to decision‑making, acknowledging the difficult…
S74
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And as several of our panelists emphasized, if we don’t address that gap deliberately, the shift towards AI agents is on…
S75
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Governments have collectively affirmed the importance of building trust by governing AI based on human rights, and that …
S76
What is it about AI that we need to regulate? — Multiple sessions identified the need to strengthen the IGF Secretariat and institutional capacity. Thedecision-making w…
S77
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Audience:Good afternoon. Good morning. My name is Paola Galvez. I’m Peruvian, right now based in Paris. I just finished …
S78
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — The discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human …
S79
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S80
AI Transformation in Practice_ Insights from India’s Consulting Leaders — The tone was pragmatically optimistic and refreshingly candid. Both speakers were honest about challenges and uncertaint…
S81
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S82
Fireside chat with Dr Matthew Meselson — The tone was largely conversational and reflective, with Meselson recounting personal anecdotes and experiences in a war…
S83
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S84
How AI Is Transforming Diplomacy and Conflict Management — The discussion maintained a consistently thoughtful and cautiously optimistic tone throughout. Participants demonstrated…
S85
Building Inclusive Societies with AI — -Collaborative spirit: All panelists demonstrated willingness to work together across sectors The tone remained consist…
S86
Main Topic 2 –  GovTech Dynamics: Navigating Innovation and Challenges in Public Services — Attendees were allotted a 15-minute interlude, ensuring a structured pause within the schedule. In summation, the event …
S87
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S88
High Level Session 3: AI & the Future of Work — The discussion maintained a cautiously optimistic tone throughout, with speakers acknowledging both the tremendous poten…
S89
Delegated decisions, amplified risks: Charting a secure future for agentic AI — The tone was consistently critical and cautionary throughout, with Whittaker maintaining a technically informed but acce…
S90
Main Topic 2: Neurotechnology and privacy: Navigating human rights and regulatory challenges in the age of neural data — The discussion maintained a serious, academic tone throughout, with speakers expressing both fascination with the techno…
S91
Can National Security Keep Up with AI? / Davos 2025 — The overall tone was serious and analytical, with panelists offering measured perspectives on complex issues. There were…
S92
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — The tone of the discussion was generally optimistic and forward-looking, with speakers emphasizing the need for urgent a…
S93
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S94
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S95
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S96
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S97
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-healthcare-india-ai-impact-summit — No, I think that’s true. So we have been talking to medical device companies who are now targeting new age diagnostic to…
S98
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-bharats-health_-addressing-a-billion-clinical-realities — And especially the whole vision of making India a developed country, we have to leapfrog. And many of these technologies…
S99
Current Developments in DNS Privacy | IGF 2023 — This created a fragmented system depending upon the registry or registrar involved and introduced a number of key issues…
S100
Artificial intelligence (AI) – UN Security Council — In conclusion, while AI-powered content moderation offers significant benefits, it is essential to recognize and address…
S101
WS #231 Address Digital Funding Gaps in the Developing World — ### Neeti Biyani – APNIC Foundation (Session Moderator) – **Neeti Biyani** – Works with the APNIC Foundation, Session m…
S102
Day 0 Event #83 Empowering Afghan Women: Bridging Digital Gaps for Education — – Neeti Biyani: Senior advisor of strategy and development with the APNIC Foundation Amrita Choudhury: I’ll try to ans…
S103
29, filed Jan. 22, 2010, at 9-10. — CCHT led to a 25% reduction in the number of bed days of care and a 19% drop in hospital admissions. At $1,600 per patie…
S104
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — The Ayushman Bharat Health Account number (ABHA number) is being rolled out.
S105
AI as a companion in our most human moments — A few months ago, I met someone whose story stayed with me. A friend of my cousin had recently received a cancer diagnos…
S106
Contact — data from the system and give it to a specified person.
S107
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — And at the back end, we will, based on the consent, access the details of where the farmer is from, what is the crop bei…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Vikalp Sahni
5 arguments, 140 words per minute, 1769 words, 753 seconds
Argument 1
Solves fragmentation of information and delivery, right from taking an appointment to capturing vitals.
EXPLANATION
Vikalp describes an AI‑driven platform that aggregates patient data from multiple sources, summarises health records and automates appointment booking and vital capture, thereby eliminating fragmented information flow. The end‑to‑end solution streamlines the patient journey from registration to clinical encounter.
EVIDENCE
He outlines the three key challenges, including fragmentation, and then walks through the Neeti story where the AI collects her ABHA-linked records, photographs, summarises her health, and automatically schedules appointments, showing how fragmented steps are removed [6-8], [13-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion highlights how AI aggregates fragmented patient data, automates appointment booking and vital capture, eliminating disjointed steps [S1] and demonstrates the use of ABHA-linked records to create a unified health view [S10].
MAJOR DISCUSSION POINT
Fragmentation of health information
Argument 2
Provides real‑time safety checks such as drug‑allergy alerts, multilingual prescription generation and automatic sync with the patient’s PHR.
EXPLANATION
The platform uses AI during the consultation to instantly flag contraindications, generate prescriptions in the patient’s language and push the completed record to the personal health‑record app, ensuring safety and accessibility for both clinician and patient. This real‑time feedback reduces medical errors and improves patient understanding; a minimal illustrative sketch of such a contraindication check follows this argument.
EVIDENCE
During Neeti’s visit the AI alerts the doctor to an amoxicillin allergy, suggests clindamycin, creates a translated PDF of the prescription and synchronises it to her PHR app, demonstrating safety alerts, multilingual output and automatic data sync [59-66], [68-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Real-time safety alerts, multilingual prescription PDFs and automatic sync to personal health records are described as core features of the platform [S1] and further illustrated through the Neeti case study [S10].
MAJOR DISCUSSION POINT
Real‑time safety and multilingual support
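The contraindication check described in this argument can be pictured as a simple lookup against the patient’s recorded allergies before a prescription is finalised. The sketch below is purely illustrative: the drug names, allergy record and alternative-suggestion table are assumptions made for the example, not EkaCare’s actual logic.

```python
# Illustrative sketch of a real-time contraindication check.
# Drug names, the allergy record and the alternatives table are hypothetical.
ALTERNATIVES = {"amoxicillin": "clindamycin"}  # assumed lookup for this example

def check_prescription(drug: str, allergies: set[str]) -> dict:
    """Return an alert (and a possible alternative) if the drug is contraindicated."""
    drug = drug.lower()
    if drug in {a.lower() for a in allergies}:
        return {
            "safe": False,
            "alert": f"Patient is allergic to {drug}",
            "suggested_alternative": ALTERNATIVES.get(drug),
        }
    return {"safe": True, "alert": None, "suggested_alternative": None}

# Example mirroring the Neeti story: an amoxicillin allergy triggers an alert
# and clindamycin is suggested instead.
print(check_prescription("Amoxicillin", {"amoxicillin"}))
```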
Argument 3
Highlights challenges of scaling the solution across multiple languages and ensuring model verifiability at large scale.
EXPLANATION
Vikalp acknowledges that extending the AI system to many regional languages and validating models on massive datasets are major technical and operational hurdles. He calls for robust data generation and verification processes to maintain reliability as the platform grows.
EVIDENCE
He explicitly mentions the difficulty of building at scale for multiple languages, generating data for model verification, and evaluating capabilities at large scale [73-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vikalp explicitly mentions the difficulty of multilingual scaling and the need for verifiable large-scale model training [S1]; a related overview of his technical challenges appears in a dedicated AI-for-Bharat briefing [S7].
MAJOR DISCUSSION POINT
Scaling and verification challenges
AGREED WITH
Charlotte Watts, Trevor Mundel
Argument 4
Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight.
EXPLANATION
Vikalp stresses that AI agents should operate under human supervision, employing several cooperating agents plus a grounding agent that enforces safety boundaries, while a medical team reviews outputs. This design reduces the risk of autonomous errors in healthcare.
EVIDENCE
He describes a pipeline where human-in-the-loop oversight, multi-agent collaboration, a grounding agent, and a ten-member medical team ensure safe development and deployment [336-345].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop design, multi-agent collaboration and a grounding agent are outlined as safety mechanisms, with a medical team providing oversight [S1]; the importance of human-centred oversight and data verification is reinforced in a responsible-AI perspective piece [S15].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop design
AGREED WITH
Trevor Mundel
DISAGREED WITH
Trevor Mundel
Argument 5
Commits to complying with HIPAA, India’s DPDP Act and NHA guidelines; pursues certifications and end‑to‑end encryption to protect health data.
EXPLANATION
Vikalp outlines the organization’s adherence to established privacy regulations, obtaining relevant certifications, and employing technical safeguards such as encryption to ensure data confidentiality. These measures aim to meet legal and ethical standards for health information.
EVIDENCE
He references HIPAA, India’s DPDP Act, NHA guidelines, customer demands for privacy certifications, and the use of end-to-end encryption as core privacy controls [271-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The platform’s privacy strategy references HIPAA, India’s DPDP Act, NHA guidelines and end-to-end encryption as core controls [S1]; broader discussions on encryption standards and policy implications are provided in encryption-focused analyses [S17][S18].
MAJOR DISCUSSION POINT
Data privacy compliance
AGREED WITH
Charlotte Watts, Trevor Mundel, Participant
DISAGREED WITH
Trevor Mundel, Charlotte Watts, Participant
Trevor Mundel
5 arguments, 169 words per minute, 981 words, 346 seconds
Argument 1
Technology accounts for only ~10 % of AI success; the rest is people, workflows and ecosystem design.
EXPLANATION
Trevor argues that technological capability is a small fraction of AI impact; successful health AI requires supportive people, processes and ecosystem alignment. He warns against focusing solely on technology without addressing human factors.
EVIDENCE
He states that AI technology is only about ten percent of the effort and that the remainder depends on people, ecosystems, and workflow design, noting the tendency to revert to tech talk [137-141].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Pareto paradox emphasizes that 80 % of impact comes from people, processes and institutional knowledge, echoing the 10 % technology claim [S19]; the panel also notes the gap between tech hype and human-centred implementation [S1].
MAJOR DISCUSSION POINT
Ecosystem over technology
AGREED WITH
Vikalp Sahni
Argument 2
Warns that rapid deployment without thorough evaluation can backfire; a reflective, slower approach may ultimately accelerate trustworthy adoption.
EXPLANATION
Trevor cautions that hasty AI roll‑outs risk errors that can damage trust and slow progress; a measured, reflective pace can lead to faster, reliable adoption. He draws parallels with self‑driving car incidents to illustrate the risk.
EVIDENCE
He discusses the need for reflection, the danger of a single fatal accident derailing an entire enterprise, and argues that a slower, thoughtful approach can ultimately speed trustworthy deployment [190-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker cautions against premature roll-outs and advocates a reflective pace to preserve trust, a view echoed in the panel’s discussion of speed versus safety [S1] and in a keynote on aligning innovation with governance speed [S21].
MAJOR DISCUSSION POINT
Need for cautious deployment
AGREED WITH
Charlotte Watts, Vikalp Sahni
DISAGREED WITH
Richard Rukwata, Charlotte Watts
Argument 3
Highlights federated learning as a promising technique that keeps raw data local while still improving models, but notes the regulatory uncertainty around it.
EXPLANATION
Trevor describes federated learning, where institutions keep data on‑site yet contribute to a shared model, offering privacy benefits; a minimal illustrative sketch follows this argument. However, he points out the lack of clear regulatory frameworks governing such approaches.
EVIDENCE
He explains federated learning, gives an example of an ultrasound diagnostic system built on federated contributions, and notes that regulatory guidance for such models is still lacking [294-301].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Federated learning is presented as a privacy-preserving model-training approach, with ongoing regulatory ambiguity highlighted in a dedicated federated-learning overview [S22]; the broader panel also mentions regulatory gaps for such techniques [S1].
MAJOR DISCUSSION POINT
Federated learning and regulation
DISAGREED WITH
Vikalp Sahni, Charlotte Watts, Participant
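The federated-learning approach described above can be illustrated with a toy federated-averaging loop: each simulated site trains on data that never leaves it, and a coordinator only averages the resulting model weights. The NumPy linear model, simulated hospital data and learning rate below are assumptions made purely for illustration, not a description of any deployed system.

```python
# Toy federated averaging: sites share model weights, never raw data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local gradient-descent step; X and y never leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, site_data):
    """Average locally updated weights across sites (FedAvg-style, equal weighting)."""
    updates = [local_update(global_weights, X, y) for X, y in site_data]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three simulated hospitals, each holding its own local dataset.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

weights = np.zeros(2)
for _ in range(20):
    weights = federated_round(weights, sites)
print("learned weights:", weights)  # approaches [2.0, -1.0] without pooling raw data
```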
Argument 4
Argues that targeted AI‑driven risk‑allocation is essential given limited global‑health funding, and that such tools can maximise the reach of scarce interventions.
EXPLANATION
Trevor emphasizes that AI can help prioritize limited resources, such as targeting TB vaccine deployment, by identifying high‑need populations, thereby improving cost‑effectiveness. Strategic targeting is crucial under funding constraints.
EVIDENCE
He discusses limited global-health funding, the need for risk-targeting, and how AI-driven geospatial targeting can help allocate scarce interventions like a future TB vaccine [318-322].
MAJOR DISCUSSION POINT
AI for resource targeting
DISAGREED WITH
Charlotte Watts
Argument 5
Envisions next‑generation patient‑facing agents that explain their reasoning, avoid contraindication errors and inspire full confidence among users and clinicians.
EXPLANATION
Trevor imagines future AI agents that are fully transparent, providing explanations for every decision, guaranteeing safety (no drug‑contraindication errors), and earning 100 % trust in high‑anxiety scenarios such as maternal health.
EVIDENCE
He describes a desired patient-facing agent that is completely transparent, never makes contraindication errors, and gives users and clinicians total confidence in stressful situations [354-359].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision of transparent, error-free agents aligns with discussions of intelligent agents that reduce fraud and guarantee safety [S25] and with calls for trusted, verifiable AI layers that deliver value while ensuring safety [S14].
MAJOR DISCUSSION POINT
Transparent patient‑facing agents
DISAGREED WITH
Vikalp Sahni
Charlotte Watts
3 arguments, 189 words per minute, 1321 words, 417 seconds
Argument 1
Emphasises the massive evidence gap: need for rigorous real‑world evaluations, randomized trials, cost‑effectiveness analyses, especially in low‑ and middle‑income countries.
EXPLANATION
Charlotte points out that while many AI health pilots exist, few have robust randomized controlled trial evidence, particularly in LMICs. She calls for systematic evaluation of impact, cost, and scalability to inform policy and funding decisions.
EVIDENCE
She outlines the paucity of RCTs, the need for real-world evidence, cost-effectiveness studies, and mentions partners such as APHRC and Jay Powell supporting implementation in Africa [213-232].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel stresses the paucity of RCTs and the need for robust real-world evidence, particularly in LMICs, and notes a funding call that will support such evaluations [S1][S22].
MAJOR DISCUSSION POINT
Evidence gap in AI health
AGREED WITH
Trevor Mundel, Vikalp Sahni
DISAGREED WITH
Trevor Mundel
Argument 2
Stresses that funded research must guarantee participant anonymity, ethical clearance and strict privacy safeguards.
EXPLANATION
Charlotte notes that any funded AI evaluation must adhere to high ethical standards, including anonymisation of data, obtaining ethics approvals, and implementing privacy protections, aligning with best research practices.
EVIDENCE
She emphasizes expectations for anonymity, ethical clearance, and privacy safeguards as essential components of any research study she supports [287-292].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Requirements for anonymisation, ethics approval and privacy protections are highlighted in responsible-AI guidelines and data-governance best practices [S15] as well as in the broader discussion of ethical implementation [S1].
MAJOR DISCUSSION POINT
Ethical and privacy standards in research
AGREED WITH
Vikalp Sahni, Trevor Mundel, Participant
DISAGREED WITH
Vikalp Sahni, Trevor Mundel, Participant
Argument 3
Indicates interest in AI that supports frontline primary‑care decisions, integrates with existing health‑system bureaucracy, and demonstrates affordable, scalable impact.
EXPLANATION
Charlotte expresses interest in AI tools that operate at the primary‑care level, fit within existing health‑system processes, and are cost‑effective and scalable, especially for underserved populations. She wants to see how such interventions can be operationalised within bureaucratic health systems.
EVIDENCE
She discusses focusing on primary-care integration, affordability, system integration, and tangible health impact as key evaluation criteria [316-326].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Conversational AI for low-resource primary-care settings and its potential for scalable impact are discussed in a low-income-setting AI briefing [S24]; integration with health-system workflows is also mentioned in the panel’s overview of system-level AI adoption [S1].
MAJOR DISCUSSION POINT
Frontline AI decision support
Richard Rukwata
2 arguments, 143 words per minute, 445 words, 186 seconds
Argument 1
Regulators face dual pressure: accelerate innovation while remaining accountable for safety; AI can help create neutral, faster‑review applications.
EXPLANATION
Richard describes the regulator’s dilemma of needing speedy approvals while ensuring patient safety, and suggests AI can produce neutral applications that speed up review without compromising accountability.
EVIDENCE
He outlines the two extremes, speed versus accountability, and notes that AI can generate neutral, faster-review applications to ease the regulator’s burden [153-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between speed and accountability for regulators and the role of AI in streamlining review processes are examined in a keynote on innovation-governance alignment [S21] and reiterated in the panel’s discussion of regulator dilemmas [S1].
MAJOR DISCUSSION POINT
Balancing speed and safety in regulation
AGREED WITH
Sindura Ganapathi, Monika Sharma
DISAGREED WITH
Trevor Mundel, Charlotte Watts
Argument 2
Collaboration with funders (e.g., Gates Foundation) is underway to pilot AI‑driven screening tools for marketing authorisations.
EXPLANATION
Richard notes a partnership with the Gates Foundation to develop AI tools that screen marketing authorisation applications, aiming to reduce delays and improve efficiency in the regulatory process.
EVIDENCE
He mentions working with the Gates Foundation on an AI-driven screening application for marketing authorisations, highlighting industry-regulator collaboration [170-175].
MAJOR DISCUSSION POINT
Funders‑regulator AI collaboration
Monika Sharma
1 argument, 166 words per minute, 629 words, 227 seconds
Argument 1
Proposes a joint funding framework with shared standards to avoid fragmented expectations, reduce duplication and ensure that AI projects deliver measurable health impact.
EXPLANATION
Monika advocates for coordinated funding with common standards, reducing patchwork requirements and duplication, while aligning evaluation criteria so AI interventions produce real‑world impact efficiently.
EVIDENCE
She describes shared standards, reduction of fragmentation, avoidance of duplication, and ensuring that investments translate into measurable health outcomes [240-256].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for coordinated funding, common standards and avoidance of fragmented expectations are echoed in a policy-research roadmap critiquing techno-solutionist fragmentation [S23] and in the panel’s emphasis on unified funding mechanisms [S1].
MAJOR DISCUSSION POINT
Coordinated funding and standards
Sindura Ganapathi
2 arguments, 86 words per minute, 1852 words, 1280 seconds
Argument 1
Panel moderation stresses the need for interactive, participant‑driven discussions rather than static presentations.
EXPLANATION
Sindura emphasizes making panels more engaging by encouraging audience interaction, questioning the boring nature of traditional panels, and inviting participants to share their thoughts directly.
EVIDENCE
She asks the audience to share, remarks that panels are boring, and explicitly invites interactive participation throughout the session [135-136], [202-210].
MAJOR DISCUSSION POINT
Interactive panel format
Argument 2
Calls for closer industry‑regulator collaboration to turn regulators from perceived bottlenecks into partners for safe, effective medicines.
EXPLANATION
Sindura urges stronger partnership between industry and regulators, highlighting shared goals of quality, safety and efficacy, and suggesting that collaboration can transform regulators from obstacles into allies in the innovation pipeline.
EVIDENCE
She frames regulator pressures, speed versus accountability, and calls for collaboration, noting the need for industry-regulator partnership to improve the supply chain [150-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for aligned innovation and regulation to prevent trust erosion and foster partnership is highlighted in a keynote on speed versus governance [S21] and reinforced by the panel’s discussion of regulator-industry dynamics [S1].
MAJOR DISCUSSION POINT
Industry‑regulator partnership
Participant
3 arguments, 162 words per minute, 385 words, 142 seconds
Argument 1
Participants request concrete policy‑level guidance on embedding privacy‑by‑design in AI health systems.
EXPLANATION
The participant asks for detailed, actionable guidance on how data privacy can be incorporated at the policy level within AI‑enabled health solutions, moving beyond generic statements.
EVIDENCE
He explicitly asks the panel to elaborate on how data privacy can be incorporated at a policy level [265-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance on privacy-by-design, data governance and encryption standards is provided in responsible-AI and data-privacy frameworks [S15] and in analyses of encryption best practices [S17][S18].
MAJOR DISCUSSION POINT
Policy guidance on privacy‑by‑design
DISAGREED WITH
Vikalp Sahni, Trevor Mundel, Charlotte Watts
Argument 2
Seeks evidence on geospatial AI models for TB active‑case finding and health‑system optimisation, questioning prospective evaluation.
EXPLANATION
The participant describes work on geospatial AI for tuberculosis active‑case finding and diagnostic network optimisation, and asks how such tools can be evaluated prospectively to generate robust evidence.
EVIDENCE
He outlines the use of geospatial AI for TB case finding, mentions retrospective analysis and plans for prospective study, and asks for thoughts on evaluation [308-313].
MAJOR DISCUSSION POINT
Evidence for geospatial AI in TB
Argument 3
Calls for AI agents that are transparent, error‑free and able to provide calm guidance in stressful maternal‑health scenarios.
EXPLANATION
The participant stresses the need for AI systems in maternal and infant care that are trustworthy, explain their reasoning, avoid errors, and reduce anxiety for new mothers, especially in low‑resource settings.
EVIDENCE
He describes high-anxiety maternal care, the lack of guidance for new mothers, and requests ideas for building reassuring, transparent AI agents for this context [265-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The desire for transparent, trustworthy agents aligns with discussions of intelligent agents that reduce errors and build confidence [S25] and with calls for verifiable, trusted AI layers that explain reasoning [S14].
MAJOR DISCUSSION POINT
Trustworthy AI for maternal health
Agreements
Agreement Points
Human‑in‑the‑loop and ecosystem design are essential; technology alone is insufficient for safe AI health deployment.
Speakers: Vikalp Sahni, Trevor Mundel
Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight. Technology accounts for only ~10 % of AI success; the rest is people, workflows and ecosystem design.
Both speakers stress that AI systems in health must be built around human oversight and supportive ecosystems rather than relying solely on technical capability [336-345][137-141].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with safe-AI-at-scale frameworks that stress guardrails and human-in-the-loop mechanisms, and echoes concerns that technology without supporting infrastructure and human factors is inadequate [S53][S54][S55][S56][S59].
Rigorous evaluation, cautious rollout, and model verifiability are required before large‑scale AI health adoption.
Speakers: Charlotte Watts, Trevor Mundel, Vikalp Sahni
Emphasises the massive evidence gap: need for rigorous real‑world evaluations, randomized trials, cost‑effectiveness analyses, especially in low‑ and middle‑income countries. Warns that rapid deployment without thorough evaluation can backfire; a reflective, slower approach may ultimately accelerate trustworthy adoption. Highlights challenges of scaling the solution across multiple languages and ensuring model verifiability at large scale.
All three highlight that AI health tools must be validated through robust evidence and careful scaling to avoid errors and maintain trust [213-232][190-197][73-76].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects evidence-based AI policy recommendations calling for systematic evaluation, incident monitoring, and transparent model documentation, as outlined in OECD’s AI Incidents Monitor and AI policy roadmaps [S39][S40][S42][S45][S48][S49][S53].
Strong commitment to data privacy and privacy‑by‑design in AI health systems.
Speakers: Vikalp Sahni, Charlotte Watts, Trevor Mundel, Participant
Commits to complying with HIPAA, India’s DPDP Act and NHA guidelines; pursues certifications and end‑to‑end encryption to protect health data. Stresses that funded research must guarantee participant anonymity, ethical clearance and strict privacy safeguards. Highlights federated learning as a promising technique that keeps raw data local while still improving models, but notes regulatory uncertainty. Requests concrete policy‑level guidance on embedding privacy‑by‑design in AI health systems.
Each speaker underscores the necessity of legal compliance, technical safeguards, and policy guidance to ensure privacy of health data [271-280][287-292][294-301][265-267].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with privacy-by-design principles highlighted in secure AI communication frameworks and AI incident monitoring that prioritize privacy governance [S56][S40].
Need for stronger collaboration between regulators, industry, and funders to accelerate safe AI innovation.
Speakers: Richard Rukwata, Sindura Ganapathi, Monika Sharma
Regulators face dual pressure: accelerate innovation while remaining accountable for safety; AI can help create neutral, faster‑review applications. Calls for closer industry‑regulator collaboration to turn regulators from perceived bottlenecks into partners. Proposes a joint funding framework with shared standards to avoid fragmented expectations and ensure measurable health impact.
All three call for coordinated action across regulatory bodies, industry players and funding agencies to balance speed, safety and impact [159-168][170-175][150-158][240-256].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes calls for multi-stakeholder cooperation in AI health governance, as described in WHO roundtables and OECD analyses on building digital public infrastructure [S52][S39][S51][S45].
Emphasis on inclusive, multilingual AI solutions that serve underserved and low‑resource populations.
Speakers: Vikalp Sahni, Charlotte Watts, Participant, Sindura Ganapathi
Provides real‑time safety checks such as multilingual prescription generation and automatic sync with the patient’s PHR. Focuses on primary‑care level integration in low‑ and middle‑income countries, assessing affordability and system integration. Seeks AI agents that are transparent, error‑free and reassuring in high‑anxiety maternal and infant care settings. Advocates for interactive, participant‑driven discussions to ensure diverse voices are heard.
Speakers converge on the need for AI health tools that are linguistically accessible, culturally appropriate and designed for frontline use in resource-constrained settings [33-35][68-69][217-224][265-334].
POLICY CONTEXT (KNOWLEDGE BASE)
Matches criteria from AI for Good Innovation Factory and inclusive AI initiatives that stress affordability, offline capability, and multilingual access for marginalized communities [S36][S37][S38][S54].
Similar Viewpoints
Both see AI as a tool to make scarce health resources and regulatory processes more efficient, thereby addressing funding and speed constraints [318-322][159-168].
Speakers: Trevor Mundel, Richard Rukwata
AI can help prioritize limited resources and improve efficiency in health interventions. AI can create neutral, faster‑review applications to ease regulatory bottlenecks.
Both stress that coordinated funding coupled with robust evaluation standards is essential to generate credible evidence and avoid fragmented efforts [213-232][240-256].
Speakers: Charlotte Watts, Monika Sharma
Need for rigorous real‑world evidence and cost‑effectiveness analyses. Joint funding framework with shared standards to ensure measurable health impact.
Unexpected Consensus
Transparent, error‑free patient‑facing agents for high‑anxiety maternal health scenarios.
Speakers: Participant, Trevor Mundel
Calls for AI agents that are transparent, error‑free and able to provide calm guidance in stressful maternal‑health situations. Envisions next‑generation patient‑facing agents that are completely transparent, never make contraindication errors and inspire full confidence.
A lay participant’s request for trustworthy maternal-health AI aligns directly with Trevor’s vision of fully transparent, safety-guaranteed agents, showing an unexpected convergence between user needs and a funder’s strategic outlook [265-334][354-359].
POLICY CONTEXT (KNOWLEDGE BASE)
Supports the ‘glass-box’ AI transparency agenda and guardrail recommendations for patient-facing tools to build trust and reduce anxiety [S48][S53][S47].
Overall Assessment

The panel shows strong convergence on four core themes: (1) human‑in‑the‑loop and ecosystem‑centric design; (2) the necessity of rigorous, real‑world evaluation before scaling; (3) unwavering commitment to data privacy and privacy‑by‑design; (4) the importance of collaborative frameworks linking regulators, industry and funders. Additionally, there is broad agreement on building inclusive, multilingual AI solutions for underserved populations.

High consensus – most speakers echo each other’s positions across technical, regulatory and ethical dimensions, indicating a shared understanding that responsible AI in health requires coordinated governance, robust evidence, privacy safeguards and inclusive design. This consensus paves the way for joint initiatives, shared standards and funding mechanisms to advance AI‑enabled health care while mitigating risks.

Differences
Different Viewpoints
Pace of AI deployment in health regulation and the balance between speed and safety
Speakers: Richard Rukwata, Trevor Mundel, Charlotte Watts
Regulators face dual pressure: accelerate innovation while remaining accountable for safety; AI can help create neutral, faster‑review applications. Warns that rapid deployment without thorough evaluation can backfire; a reflective, slower approach may ultimately accelerate trustworthy adoption. Emphasises the massive evidence gap: need for rigorous real‑world evaluations, randomized trials, cost‑effectiveness analyses, especially in low‑ and middle‑income countries.
Richard argues that AI can speed up regulatory review while maintaining safety, whereas Trevor cautions that moving too fast can erode trust and advocates a slower, reflective rollout; Charlotte adds that rigorous evidence (RCTs, cost-effectiveness) is needed before scaling, supporting a more cautious path. The three speakers share the goal of safe AI-enabled health systems but disagree on how quickly and under what evidentiary standards AI should be introduced. [153-158][190-197][213-222]
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects ongoing debate highlighted in AI policy roadmaps and roundtables that caution against rapid rollout without safeguards, emphasizing a measured pace [S45][S46][S47][S49][S50][S51].
Degree of human oversight versus fully autonomous, transparent AI agents in patient care
Speakers: Vikalp Sahni, Trevor Mundel
Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight. Envisions next‑generation patient‑facing agents that explain their reasoning, avoid contraindication errors and inspire full confidence among users and clinicians.
Vikalp stresses a human-in-the-loop design, employing multiple cooperating agents and a medical team to guard against errors, while Trevor imagines future agents that are fully transparent and error-free, implying minimal need for continuous human supervision. This reflects a split on how much autonomy is acceptable for health AI. [336-345][354-359]
POLICY CONTEXT (KNOWLEDGE BASE)
Tied to discussions on autonomy limits, emphasizing mandatory human oversight and safety controls in AI agents [S53][S57][S58][S59][S60].
Approaches to ensuring data privacy and governance in AI‑enabled health systems
Speakers: Vikalp Sahni, Trevor Mundel, Charlotte Watts, Participant
Commits to complying with HIPAA, India’s DPDP Act and NHA guidelines; pursues certifications and end‑to‑end encryption to protect health data. Highlights federated learning as a promising technique that keeps raw data local while still improving models, but notes the regulatory uncertainty around it. Stresses that funded research must guarantee participant anonymity, ethical clearance and strict privacy safeguards. Participants request concrete policy‑level guidance on embedding privacy‑by‑design in AI health systems.
Vikalp focuses on compliance with existing regulations and technical safeguards (encryption, certifications); Trevor proposes federated learning to keep data local but points out the lack of clear regulatory frameworks; Charlotte emphasizes anonymity and ethical approvals for research; the Participant asks for actionable policy guidance. The speakers agree on the importance of privacy but diverge on the primary mechanism to achieve it. [271-280][294-301][287-292][265-267]
POLICY CONTEXT (KNOWLEDGE BASE)
Linked to frameworks that integrate privacy-by-design with incident monitoring and governance structures for health AI deployments [S56][S40][S41].
Methodology for generating evidence on AI health interventions
Speakers: Charlotte Watts, Trevor Mundel
Emphasises the massive evidence gap: need for rigorous real‑world evaluations, randomized trials, cost‑effectiveness analyses, especially in low‑ and middle‑income countries. Argues that targeted AI‑driven risk‑allocation is essential given limited global‑health funding, and that such tools can maximise the reach of scarce interventions.
Charlotte calls for systematic, rigorous evaluation (RCTs, cost-effectiveness) before wide deployment, while Trevor stresses the immediate utility of AI for resource targeting in constrained funding environments, suggesting a more pragmatic, quicker-to-use approach. Both aim to improve health outcomes but differ on the evidentiary pathway. [213-222][318-322]
POLICY CONTEXT (KNOWLEDGE BASE)
Aligned with evidence-based policymaking roadmaps that call for systematic data collection, use-case libraries, and rigorous impact assessment [S39][S40][S42][S45][S49].
Unexpected Differences
Definition of “doctor” in the opening poll
Speakers: Vikalp Sahni, Sindura Ganapathi
All of us here, we would have visited doctors at some point in time or have been sick. Anyone who has never visited a doctor, please raise your hand. So practically everyone. When you said, is there anyone who has not visited a doctor, instinctively I was asking, does a veterinary doctor count? Because I’m a veterinarian by background.
Vikalp assumes the term “doctor” refers exclusively to human medical practitioners, while Sindura expands it to include veterinary doctors, revealing an unexpected semantic disagreement early in the session. [1-3][79-80]
Level of autonomy expected from AI agents for high‑anxiety maternal care
Speakers: Participant, Vikalp Sahni
How do we build AI agents that are not only intelligent, but reassuring in very high anxiety environments like maternal and infant care? Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight.
The Participant seeks fully reassuring, possibly autonomous agents for maternal health, whereas Vikalp insists on a human-in-the-loop, multi-agent system, showing an unexpected clash between user expectations for autonomy and the developer’s safety-first design. [329-334][336-345]
POLICY CONTEXT (KNOWLEDGE BASE)
Connects to autonomy debates and transparency requirements for high-stakes health agents, reinforcing the need for controlled autonomy and explainability [S48][S57][S58][S60].
Overall Assessment

The panel displayed moderate disagreement centered on the speed of AI rollout, the extent of human oversight versus autonomous agents, and the optimal strategy for data privacy and evidence generation. While participants shared a common vision of leveraging AI to improve health outcomes, they diverged on implementation pathways—ranging from rapid, AI‑driven regulatory acceleration to cautious, evidence‑based deployment, and from strict human supervision to fully transparent autonomous agents.

The disagreements are substantive but not irreconcilable; they highlight the need for coordinated policy frameworks that balance speed, safety, privacy, and rigorous evaluation. Without addressing these divergent views, scaling AI health solutions may face regulatory push‑back, trust deficits, and fragmented implementation.

Partial Agreements
All three agree that AI should be deployed to improve health outcomes, but Vikalp stresses human oversight, Trevor stresses ecosystem and people factors, and Charlotte stresses rigorous evidence before scaling. The shared goal is safe, effective AI in health, yet the pathways (human‑in‑the‑loop, ecosystem focus, evidence generation) differ. [336-345][137-141][213-222]
Speakers: Vikalp Sahni, Trevor Mundel, Charlotte Watts
Development must keep clinicians in the loop, use multi‑agent architectures with a grounding agent, and rely on a dedicated medical team for oversight. Technology accounts for only ~10 % of AI success; the rest is people, workflows and ecosystem design. Emphasises the massive evidence gap: need for rigorous real‑world evaluations, randomized trials, cost‑effectiveness analyses, especially in low‑ and middle‑income countries.
All three want robust privacy protection, but Vikalp relies on regulatory compliance and encryption, Trevor proposes technical federated learning with pending regulation, and the Participant seeks concrete policy‑by‑design guidance. The goal of privacy is shared, yet the means differ. [271-280][294-301][265-267]
Speakers: Vikalp Sahni, Trevor Mundel, Participant
Commits to complying with HIPAA, India’s DPDP Act and NHA guidelines; pursues certifications and end‑to‑end encryption to protect health data. Highlights federated learning as a promising technique that keeps raw data local while still improving models, but notes the regulatory uncertainty around it. Participants request concrete policy‑level guidance on embedding privacy‑by‑design in AI health systems.
Takeaways
Key takeaways
– An AI‑enabled end‑to‑end patient‑care platform can reduce information fragmentation, automate appointment scheduling, provide real‑time safety checks (e.g., allergy alerts), generate multilingual prescriptions, and sync with patients’ personal health records.
– Human‑in‑the‑loop, ecosystem design, and multi‑agent architectures are critical; technology alone accounts for only ~10 % of success.
– Regulators face dual pressure to accelerate innovation while remaining accountable for safety; AI can help create neutral, faster‑review applications, but closer industry‑regulator collaboration is needed.
– Funders stress the large evidence gap and call for rigorous real‑world evaluations, randomized trials, and cost‑effectiveness analyses, especially in low‑ and middle‑income countries, with shared standards to avoid fragmented expectations.
– Data privacy must be addressed through compliance with regulations (HIPAA, India DPDP, NHA guidelines), certifications, end‑to‑end encryption, and emerging techniques such as federated learning, though regulatory clarity is still lacking.
– Operational decision‑support tools for frontline settings (e.g., geospatial AI for TB case finding) are of high interest, but require prospective evaluation and integration with existing health‑system workflows.
– Designing AI agents for high‑anxiety contexts (maternal and infant care) requires transparency, explainability, error‑free guidance, a grounding agent, and continuous human oversight.
Resolutions and action items
Commitment by the funding coalition (Wellcome Trust, Gates Foundation, etc.) to support rigorous real-world evidence generation for AI in health, with emphasis on LMICs (Charlotte Watts).
Agreement to develop shared evaluation standards and coordinated funding criteria to reduce duplication and fragmentation (Monika Sharma).
The regulatory office (Zimbabwe Medicines Control Authority) will explore AI-driven screening tools for marketing authorisations in partnership with funders (Richard Rukwata).
EkaCare (Vikalp Sahni) will continue building the platform with a multi-agent architecture, a grounding agent, and a dedicated medical team for human-in-the-loop oversight.
Panelists expressed intent to collaborate more closely among industry, regulators, and funders to turn regulators from perceived bottlenecks into partners (Richard Rukwata, Sindura Ganapathi).
Future work will explore federated learning approaches while seeking appropriate regulatory guidance (Trevor Mundel).
The next AI Summit in Geneva will showcase next-generation patient-facing agents that are fully transparent and error-free (Trevor Mundel).
Unresolved issues
Concrete policy-level guidance on embedding privacy-by-design and handling the data-privacy questions raised by participants.
How to evaluate and scale geospatial AI models for TB active-case finding and health-system optimisation in prospective, real-world settings.
Specific technical and regulatory pathways for federated learning in health AI applications.
A detailed roadmap for scaling the platform across multiple Indian languages and ensuring model verifiability at large scale.
Design specifications and validation protocols for AI agents that provide reassuring support in high-anxiety maternal and infant care scenarios.
Suggested compromises
Adopt a more reflective, slower approach to AI deployment to ensure safety and maintain trust (Trevor Mundel).
Balance regulator speed with accountability by using neutral, AI-driven applications that satisfy both industry and safety requirements (Richard Rukwata).
Use multi-agent systems with a grounding agent plus human oversight to mitigate the risks of single-agent failures (Vikalp Sahni).
Align funding standards across agencies to reduce fragmented expectations and duplication, while still encouraging innovation (Monika Sharma).
Encourage collaboration between industry and regulators to shift the perception of regulators from bottlenecks to partners (Sindura Ganapathi).
Thought Provoking Comments
Technology is just 10 % of the exercise in applications of AI. The rest is really around people and ecosystems… defining the actual role for humans in the loop is going to be as important as any of the technological advances.
Highlights the often‑overlooked socio‑technical dimension of AI in health, shifting focus from pure tech to ecosystem design and human oversight.
Prompted others to discuss ecosystem challenges, led to deeper conversation about regulator‑industry dynamics and the need for human‑in‑the‑loop safeguards, influencing later remarks by Richard and Vikalp about multi‑agent architectures and regulatory roles.
Speaker: Trevor Mundel
I remember watching a podcast… if all the jobs are taken by AI, regulatory jobs will be the last to remain because people always have somebody to blame… we’ll be the last person there so that they can hang me when something goes wrong.
Uses humor to expose the paradox regulators face: pressure to accelerate innovation while being the ultimate liability, underscoring the tension between speed and safety.
Set the stage for a discussion on balancing rapid AI deployment with accountability, leading to his description of AI‑assisted application review and the call for neutral tools that satisfy both industry and regulators.
Speaker: Richard Rukwata
We’re starting to have more meaningful conversations about what this really means… moving beyond hype or fear to actually how do we navigate this space as a global community.
Signals a turning point from speculative excitement to a call for rigorous, collaborative evaluation, especially in low‑ and middle‑income contexts.
Steered the panel toward concrete topics such as real‑world evidence, cost‑effectiveness, and operational integration, influencing subsequent questions about evidence gaps and the need for rigorous trials.
Speaker: Charlotte Watts
The AI‑based EMR alerted that the patient was allergic to amoxicillin, prompting an immediate medication change to clindamycin.
Provides a concrete, patient‑safety example of AI augmenting clinical decision‑making, illustrating the practical value of the technology.
Grounded the abstract discussion in a real‑world use case, prompting participants to consider safety benefits and prompting later concerns about validation and privacy.
Speaker: Vikalp Sahni (narration)
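The mechanism behind such an alert is typically a rule-based cross-check of a proposed prescription against the patient's recorded allergies. The sketch below is a hypothetical Python illustration only; the patient record, drug names, and substitution table are invented for the example and do not reflect EkaCare's actual EMR logic.

```python
# Hypothetical allergy-check sketch (illustrative only; data and drug
# substitutions are invented, not EkaCare's implementation).
PATIENT_ALLERGIES = {"patient-001": {"amoxicillin", "penicillin"}}
ALTERNATIVES = {"amoxicillin": "clindamycin"}  # illustrative substitution table

def check_prescription(patient_id: str, drug: str) -> str:
    """Flag a proposed drug that conflicts with the patient's recorded allergies."""
    allergies = PATIENT_ALLERGIES.get(patient_id, set())
    if drug.lower() in allergies:
        alternative = ALTERNATIVES.get(drug.lower(), "consult the prescriber")
        return f"ALERT: {drug} conflicts with a recorded allergy; consider {alternative}."
    return f"{drug}: no allergy conflict on record."

print(check_prescription("patient-001", "amoxicillin"))   # triggers the alert
print(check_prescription("patient-001", "clindamycin"))   # passes the check
```

Even a simple rule of this kind depends on complete, synchronized allergy records, which is why the panel tied such safety checks back to the platform's syncing with patients' personal health records.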
When you said, ‘is there anyone who has not visited a doctor’, instinctively I was asking, does veterinary doctor count? … In the pet care industry, there is real value and business to be made there.
Introduces a broader perspective on healthcare AI beyond human medicine, expanding the scope of the conversation to include veterinary applications.
Broadened the audience’s view of AI’s market potential and prompted acknowledgment of diverse stakeholder needs, subtly shifting the tone to a more inclusive, entrepreneurial outlook.
Speaker: Sindura Ganapathi
I think that for us, it’s… no compromise on patient data privacy… federated learning… locally private data contributes to model improvement without moving the data.
Raises a cutting‑edge technical solution (federated learning) to the privacy challenge, linking technology to policy and regulatory gaps.
Spurred a brief technical‑policy exchange, leading Charlotte to mention ethical clearance and Vikalp to discuss encryption, deepening the discussion on privacy safeguards.
Speaker: Trevor Mundel
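For readers unfamiliar with the technique, the following minimal federated-averaging sketch (hypothetical Python with NumPy; the linear model, toy data, and round count are assumptions for illustration, not a production health-AI pipeline) shows the property the comment points to: each site trains on its own records, and only model weights are shared and averaged.

```python
# Minimal federated-averaging sketch (hypothetical, illustrative only):
# each clinic trains locally on its private records and shares only model
# weights with the aggregator, never the raw data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One clinic's local training: a few gradient steps on private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w                                 # only the weights leave the clinic

def federated_average(client_weights, client_sizes):
    """Aggregator step: size-weighted average of the clinics' updates."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy setup: three clinics, each with private (X, y) that never moves.
rng = np.random.default_rng(0)
clinics = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
global_w = np.zeros(3)

for _ in range(10):                          # a few federated rounds
    updates = [local_update(global_w, X, y) for X, y in clinics]
    global_w = federated_average(updates, [len(y) for _, y in clinics])

print("aggregated model weights:", global_w)
```

The raw (X, y) records never leave the clinics; only parameter vectors are exchanged, which captures the privacy benefit described above while leaving open the regulatory questions the panel flagged.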
We need to evaluate AI interventions in primary‑care settings, especially for underserved populations, and generate real‑world evidence on cost‑effectiveness and scalability.
Articulates a clear research agenda that ties AI deployment to health system impact, emphasizing evidence generation in low‑resource contexts.
Guided the subsequent Q&A toward operational decision‑support, TB geospatial models, and funding priorities, aligning the panel around measurable outcomes.
Speaker: Charlotte Watts
If you run a single agent with a single prompt, you narrow the worldview. Multi‑agent architecture with a grounding agent ensures safety, especially in maternal health where mental health is involved.
Introduces a nuanced technical design principle (multi‑agent with grounding) to mitigate risks in high‑anxiety health domains.
Provided a concrete answer to the participant’s question on building reassuring AI agents, influencing the conversation toward system design considerations rather than just policy.
Speaker: Vikalp Sahni
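One way to read this design principle in code is the minimal sketch below (hypothetical Python; the drafting agent, topics, and approved-guidance table are invented for illustration and are not EkaCare's architecture): a drafting agent proposes an answer, a grounding agent releases it only if it can be tied to approved guidance, and anything unverified is escalated to the human medical team.

```python
# Hypothetical multi-agent sketch with a grounding agent and human escalation
# (illustrative only; not EkaCare's actual system).
from dataclasses import dataclass

APPROVED_GUIDANCE = {
    "fever": "Monitor temperature and seek urgent care for fever in a newborn.",
    "feeding": "Feed on demand, roughly 8-12 times per 24 hours.",
}

@dataclass
class Draft:
    topic: str
    text: str

def drafting_agent(question: str) -> Draft:
    """Stand-in for an LLM call: maps a question to a draft answer and topic."""
    topic = "fever" if "fever" in question.lower() else "unknown"
    return Draft(topic=topic, text=f"Draft guidance on {topic}.")

def grounding_agent(draft: Draft) -> tuple[bool, str]:
    """Accept the draft only if it can be tied to an approved guidance entry."""
    if draft.topic in APPROVED_GUIDANCE:
        return True, APPROVED_GUIDANCE[draft.topic]
    return False, "No grounded source found."

def answer(question: str) -> str:
    draft = drafting_agent(question)
    grounded, text = grounding_agent(draft)
    if not grounded:
        # Human-in-the-loop: anything the grounding agent cannot verify
        # goes to the medical team instead of the patient.
        return "Escalated to the human medical team for review."
    return text

print(answer("My baby has a fever, what should I do?"))
print(answer("Can I give my baby herbal tea?"))
```

The design choice is that the grounding agent acts as a gate between generation and the patient, so a single model's narrowed worldview cannot reach the user unchecked.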
One fatal accident in self‑driving cars puts the whole enterprise at risk… we may need a slower, reflective approach that ultimately makes us faster.
Uses an analogy from autonomous vehicles to illustrate the high stakes of AI errors in health, advocating for cautious acceleration.
Reinforced the earlier cautionary notes from Charlotte and Richard, shaping the panel’s consensus on balancing speed with safety and influencing the final wishes for future summit focus.
Speaker: Trevor Mundel
I would love to see partners we fund actually present operational learnings next year, moving away from hype to honest conversations about what’s working and what’s not.
Calls for transparency and accountability in funded projects, emphasizing the need for practical, evidence‑based dialogue.
Summarized the panel’s collective desire for concrete outcomes, setting a forward‑looking agenda for the next summit and reinforcing the shift from hype to rigor.
Speaker: Charlotte Watts
Overall Assessment

The discussion pivoted from an enthusiastic product showcase to a nuanced debate about the real‑world integration of AI in health. Key comments—Trevor’s ecosystem reminder, Richard’s regulator paradox, Charlotte’s call for rigorous evidence, and Vikalp’s concrete patient‑safety example—served as turning points that redirected the conversation toward accountability, privacy, and measurable impact. These insights introduced new dimensions (regulatory pressure, multi‑agent design, federated learning, veterinary care) and prompted participants to explore practical challenges and solutions rather than remaining in speculative hype. The cumulative effect was a collective shift toward a balanced vision: rapid, innovative AI deployment tempered by robust human oversight, rigorous evaluation, and cross‑sector collaboration.

Follow-up Questions
Does a veterinary doctor count as a doctor visit in the context of this discussion?
Clarifies the scope of the conversation and explores potential applications of AI in pet care, an emerging market.
Speaker: Sindura Ganapathi
How can regulators reconcile the twin pressures of accelerating innovation while ensuring safety and accountability in the age of AI?
Addresses a core regulatory challenge that impacts the speed of AI adoption in healthcare and the protection of patients.
Speaker: Sindura Ganapathi (to Dr. Richard Rukwata)
How can AI health solutions be built at scale for multiple languages, how can we generate verifiable data for large‑scale models, and who should evaluate these capabilities?
Identifies technical and governance gaps that are essential for widespread, trustworthy deployment of AI in diverse linguistic contexts.
Speaker: Vikalp Sahni
How do we move beyond lip‑service to truly integrate ecosystems and people, and define the role of humans in the loop for AI in health?
Highlights the need for concrete strategies to embed human oversight in AI workflows, crucial for safety and acceptance.
Speaker: Trevor Mundel
What real‑world evidence is needed to assess the health impact, operability, cost‑effectiveness, and system integration of AI interventions, especially in low‑ and middle‑income countries?
Calls for rigorous evaluation frameworks to inform policy, funding, and scaling decisions for AI in health.
Speaker: Charlotte Watts
How can data privacy and privacy‑by‑design be incorporated at the policy level for AI health platforms?
Seeks guidance on regulatory and policy mechanisms to protect sensitive health data, a prerequisite for user trust and compliance.
Speaker: Participant (unidentified)
What emerging technologies (e.g., federated learning, synthetic data) can preserve data privacy while still enabling model improvement?
Explores technical solutions that could allow collaborative AI development without compromising patient confidentiality.
Speaker: Sindura Ganapathi (prompt to panel)
Is operational decision‑support (e.g., geospatial AI for TB case finding and network optimisation) of interest for funding, and how should its evidence be generated?
Seeks clarification on funding priorities for AI tools that target underserved, undetected patient populations.
Speaker: Participant (unidentified)
How can we build AI agents for maternal and infant care that are both intelligent and reassuring in high‑anxiety environments?
Addresses a critical need for safe, trustworthy AI support for vulnerable mothers and newborns, especially in lower‑tier cities.
Speaker: Participant (unidentified)
What would participants like to see at the next AI Summit (e.g., concrete demos, operational insights, collaborations)?
Aims to shape future conference agendas to focus on actionable outcomes rather than hype.
Speaker: Sindura Ganapathi
How should funders balance promoting innovation with upholding safety and minimizing risk in their funding programmes?
Seeks strategies for responsible investment that accelerate beneficial AI while guarding against harm.
Speaker: Sindura Ganapathi (to panel)
What is the optimal pacing for AI deployment in global health to avoid catastrophic failures while meeting urgent needs?
Raises the research need for frameworks that balance speed of innovation with safety and public trust.
Speaker: Trevor Mundel
How can shared standards and coordinated evaluation criteria reduce fragmentation and duplication in AI health funding?
Calls for harmonised guidelines to streamline development, assessment, and implementation of AI solutions across countries.
Speaker: Monika Sharma
How can a multi‑agent architecture with a grounding agent and human‑in‑the‑loop be designed to ensure safety for maternal health AI applications?
Proposes a technical research direction to improve reliability and ethical compliance of AI agents in sensitive health domains.
Speaker: Vikalp Sahni

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.