The Future of Public Safety: AI-Powered Citizen-Centric Policing in India

Session at a glance: summary, key points, and speakers overview

Summary

This discussion focused on how the Ministry of Panchayati Raj (MOPR) is leveraging AI and language technology to transform rural governance in India. Shri Alok Prem Nagar from MOPR and Amit Kumar discussed the ministry’s innovative use of Bhashini, India’s AI-powered language platform, to make digital governance more inclusive and accessible to rural communities.


The conversation highlighted two major AI implementations: eGram Swaraj, a portal that enables all 250,000 gram panchayats to conduct planning and financial management digitally, and Sabha Sar, an AI-enabled tool that converts audio/video recordings of gram sabha meetings into structured minutes in local languages. Since its launch in August 2025, Sabha Sar has processed over 115,000 gram sabha meetings, significantly reducing the administrative burden on panchayat secretaries who previously spent 65% of their time on meeting documentation.


The speakers emphasized how language AI has been transformative for rural governance, allowing citizens to access information in their native languages rather than relying on English-literate intermediaries. They discussed successful implementations across states like Uttar Pradesh, which onboarded all 59,000 gram panchayats to eGram Swaraj in just 40 days, and states like Odisha, Tamil Nadu, and Tripura that have adopted Sabha Sar extensively.


The discussion also covered the Swamitva scheme, where drone survey data was enhanced with AI to identify solar panel installation potential on village rooftops, demonstrating innovative use of existing data. Both speakers acknowledged implementation challenges including connectivity issues, dialect diversity, and training needs, but emphasized that rural communities have shown remarkable receptiveness to AI-enabled systems when they address real problems with simple, accessible solutions. The conversation concluded with optimism about India’s potential to lead global efforts in population-scale, multilingual AI for governance, building on the country’s successful track record with digital public infrastructure like Aadhaar and UPI.


Key points

Major Discussion Points:

Digital transformation of rural governance through AI and language technology: The Ministry of Panchayati Raj’s journey from 2004 to present, focusing on empowering panchayats through digital platforms like eGram Swaraj and overcoming language barriers using Bhashini for local language support.


Sabha Sar implementation and impact: The launch of an AI-enabled voice-to-text meeting summarization tool that has processed over 115,000 gram sabha meetings, addressing the critical challenge of meeting documentation that was consuming 65% of panchayat secretaries’ time.


Practical applications of AI in rural development: Discussion of multiple AI implementations including solar potential mapping through drone surveys (Swamitva scheme), spatial development planning with visualization tools, and the Pancham WhatsApp-based chatbot platform for two-way communication.


Challenges and solutions in rural AI adoption: Addressing infrastructure limitations, language diversity, connectivity issues, and the importance of creating frugal, user-friendly solutions that work with basic devices like mobile phones rather than requiring expensive new infrastructure.


Scalability and future vision for AI in governance: Exploring how India can lead in population-scale multilingual AI for governance, emphasizing open architecture, sovereignty, and the potential for AI to strengthen participatory democracy at the grassroots level.


Overall Purpose:

The discussion aimed to showcase how the Ministry of Panchayati Raj has successfully implemented AI and language technology solutions to enhance rural governance, improve transparency, and increase citizen participation in democratic processes. The conversation served to demonstrate practical applications of AI in government services and explore the potential for scaling these innovations across India’s vast rural landscape.


Overall Tone:

The discussion maintained an optimistic and collaborative tone throughout, with speakers expressing enthusiasm about technological achievements while remaining pragmatic about challenges. The tone was informative and celebratory, highlighting successes like UP’s rapid adoption of eGram Swaraj across 59,000 gram panchayats in 40 days. Both speakers demonstrated humility by acknowledging they weren’t “AI persons” but rather problem-solvers using technology as a tool. The conversation concluded on an inspiring note about India’s potential to lead global efforts in population-scale AI governance solutions.


Speakers

Speakers from the provided list:


Shri Alok Prem Nagar: Senior official from the Ministry of Panchayati Raj (MOPR), Government of India. He discusses the ministry’s role in empowering panchayats, overseeing finance commission grants, and implementing AI-enabled solutions like eGram Swaraj portal, Sabha Sar (meeting summarization tool), and integration with Bhashini for multilingual support.


Amit Kumar: Government official working on AI implementation and digital transformation in public sector for over 20 years. He focuses on the technical aspects of AI deployment, infrastructure challenges, open architecture, and sovereignty in AI systems for governance.


Moderator: Session facilitator who guides the discussion on AI in rural governance, asks questions about implementation challenges, language AI, and the impact of digital solutions on panchayat functioning.


Additional speakers:


Ms. Deepika: Mentioned at the end of the transcript as someone called to felicitate Mr. Alok, but does not participate in the main discussion.


Full session report: comprehensive analysis and detailed insights

Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy

This comprehensive discussion explored how the Ministry of Panchayati Raj (MOPR) has pioneered the use of artificial intelligence and language technology to revolutionise rural governance in India, featuring insights from Shri Alok Prem Nagar from MOPR and Amit Kumar on the transformative potential of AI-enabled governance systems.


The Genesis of Digital Rural Governance

The Ministry of Panchayati Raj was established in 2004 to address a unique governance challenge: while rural local governance remains a state subject under India’s constitutional framework, central coordination was essential for empowering the country’s 250,000 gram panchayats. Shri Alok Prem Nagar explained that the ministry’s core mission centres on transforming panchayats into genuinely self-governing, responsible local bodies whilst maintaining oversight of finance commission grants that flow directly to citizens’ bank accounts.


The catalyst for AI integration emerged from a profound personal realisation during a gram sabha meeting in Karnataka, where Nagar experienced firsthand the language barrier that excludes citizens from understanding governance processes conducted in English. This moment of clarity—recognising that citizens cannot meaningfully participate in decisions about public money when they cannot understand the proceedings—became the driving force behind the ministry’s embrace of Bhashini, India’s AI-powered multilingual platform.


The breakthrough came during the 2023 Manthan event, where the ministry invited industry experts to suggest improvements to their digital systems. It was during these discussions that the potential of Bhashini for rural governance became apparent, leading to what Nagar described as a “magic” moment when citizens could finally see government expenses displayed in their own languages.


eGram Swaraj: Digital Infrastructure at Scale

The eGram Swaraj portal represents one of the world’s largest digital governance platforms, encompassing all 250,000 gram panchayats from planning to payment stages. Initially operating only in English, the platform created significant barriers for rural participation. The integration with Bhashini transformed this limitation into an opportunity for genuine digital inclusion.


The scale of successful implementation is exemplified by Uttar Pradesh’s remarkable achievement of onboarding all 59,000 gram panchayats to eGram Swaraj in just 40 days. This massive undertaking involved registering digital signing certificates for each panchayat and completely transitioning from traditional cheque-based payments to digital systems. Nagar emphasised that this success stemmed from creating a solution that addressed both the ministry’s need for financial accountability and the panchayats’ requirement for user-friendly systems—a principle of “meeting halfway” that proved crucial for adoption.


The significance of this scale becomes apparent when considering that Uttar Pradesh alone has a population comparable to the top 10 countries globally, demonstrating India’s unique position to test AI governance solutions at unprecedented scale.


Sabha Sar: Revolutionising Meeting Documentation

The development of Sabha Sar emerged from empirical research conducted using RapidPro by UNICEF, which surveyed approximately 8,000 panchayat secretaries nationwide about their time allocation. The findings revealed that 65% of respondents identified meeting conduct and documentation as their most time-consuming activity, creating a clear target for AI intervention.


Sabha Sar’s elegant solution requires only basic recording equipment—typically a mobile phone—to capture meeting audio or video. The system sidesteps rural connectivity challenges by allowing offline recording and subsequent upload when internet access becomes available. Bhashini processes the recordings, converting them to English for AI-powered summarisation, then translating the structured minutes back into local languages.
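The flow described above, offline recording, speech recognition, translation to English, AI summarisation, and back-translation for human review, can be sketched as a simple pipeline. This is an illustrative sketch only: the function names and stub behaviour below are assumptions, and the session does not describe Bhashini's actual API.

```python
from dataclasses import dataclass

@dataclass
class MeetingRecord:
    """A gram sabha recording queued for offline-first processing."""
    audio_path: str
    language: str  # code for the local language, e.g. "hi"

def transcribe(record):
    """Stub standing in for an ASR call; a real system would invoke
    Bhashini's speech-recognition service here (hypothetical)."""
    return "panchayat reviewed road repair and water supply works"

def translate(text, src, dst):
    """Stub standing in for a translation call (hypothetical);
    identity function keeps the sketch self-contained."""
    return text

def summarize(english_text):
    """Stub standing in for the LLM summarisation step (hypothetical)."""
    return "Minutes: 1. Road repair approved. 2. Water supply reviewed."

def draft_minutes(record):
    """Mirror the described flow: ASR -> English -> summarise ->
    translate back to the local language for human review."""
    transcript = transcribe(record)
    english = translate(transcript, src=record.language, dst="en")
    summary = summarize(english)
    return translate(summary, src="en", dst=record.language)

minutes = draft_minutes(MeetingRecord("sabha_2025_08_14.wav", "hi"))
```

The offline-first design falls out of this structure naturally: the recording step needs no connectivity, and everything after `draft_minutes` is invoked only once the file is uploaded, with a human-in-the-loop edit before publication.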


Since its launch, Sabha Sar has processed over 115,000 gram sabha meetings, representing a significant transformation in rural governance documentation. States like Odisha, Tamil Nadu, and Tripura have emerged as leaders in second-stage implementation, moving beyond basic adoption to develop applications that convert meeting minutes into activity tracking and follow-up systems, demonstrating the platform’s evolution from documentation tool to accountability mechanism.


Language AI as Democratic Enabler

The impact of language AI extends far beyond mere translation. Citizens can now access governance information at their leisure in their native languages, eliminating dependence on educated intermediaries who previously served as gatekeepers to public information. This democratisation enables diaspora populations working in cities like Mumbai to monitor their village panchayats near Pune, creating new forms of civic engagement and accountability.


The expansion of language support remains an ongoing challenge and opportunity. Currently, Bhashini is being enhanced to support 11 additional languages, including Assamese, Bodo, Maithili, and Santali, requiring collaboration with states to provide linguistic expertise for training AI models. This expansion is crucial for ensuring that no rural community remains excluded from AI-enabled governance due to language barriers.


Swamitva Scheme: Innovative Data Utilisation for Solar Potential

The ministry’s AI initiatives extend into creative applications of existing data through the Swamitva scheme, originally designed for drone surveys to create property rights through orthorectified images (geometrically corrected aerial photographs). Rather than discarding the dense point cloud information—detailed 3D spatial data captured during surveys—AI analysis converted rooftop data into solar panel installation potential assessments.


This innovation now covers 238,000 of the 330,000 gram panchayats where drone surveys have been completed, allowing citizens to access gram Manchitra (village maps), zoom into their villages, and receive roof-specific solar panel capacity calculations. Integration with the PM Suryaghar Yojana portal enables gram panchayats to drive solarisation campaigns, creating economic opportunities whilst supporting renewable energy goals.
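A roof-specific capacity calculation of the kind Gram Manchitra surfaces can be illustrated with a back-of-the-envelope estimate. All parameter values below (panel footprint, wattage, packing factor) are illustrative assumptions for the sketch, not figures from the Swamitva analysis.

```python
def estimate_capacity(usable_roof_m2, panel_area_m2=1.7,
                      panel_watts=330, packing_factor=0.8):
    """Rough rooftop solar estimate: how many panels fit on the
    usable roof area, and the resulting capacity in kW.
    Defaults are illustrative assumptions, not Swamitva figures."""
    # Packing factor discounts for shading, obstructions, and spacing.
    panels = int(usable_roof_m2 * packing_factor // panel_area_m2)
    return panels, panels * panel_watts / 1000

# e.g. a 50 m^2 village rooftop
panels, kw = estimate_capacity(50.0)
```

In the actual scheme, the usable-area input would come from the dense point cloud captured by the drone surveys rather than a hand-entered figure.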


Spatial Development Planning and Citizen Engagement

Spatial development planning represents another frontier where AI enhances citizen engagement. Initial attempts to introduce spatial plans for 34 gram panchayats near highways met resistance until AI-powered visualisation tools were deployed. These tools help citizens understand how spatial plans will transform their communities over time, significantly improving acceptance and participation in planning processes.


Advanced Service Delivery Systems

The ministry’s vision extends to AI-powered service delivery systems that can automatically assign citizen-reported issues to appropriate departments and track resolution progress. A pilot project in Guwahati demonstrated this potential, where buses equipped with cameras automatically identified civic issues like potholes and assigned them to relevant departments for resolution.


The Pancham platform, a WhatsApp-based chatbot system, offers opportunities for AI-generated audio-video messaging to enhance two-way communication with sarpanchas and panchayat secretaries nationwide. Integration with the Meteorological Department now provides daily weather forecasts specifically tailored for gram panchayats, demonstrating the expanding scope of AI-enabled services.


Implementation Challenges and Pragmatic Solutions

The discussion acknowledged significant operational challenges including infrastructure limitations, training requirements, dialect diversity, and connectivity issues. However, the speakers emphasised that rural communities demonstrate remarkable receptiveness to AI-enabled systems when solutions address genuine problems with accessible, user-friendly interfaces.


The ministry’s frugal innovation approach proves crucial for scalability. Rather than requiring expensive new infrastructure, solutions leverage existing mobile phones and work around connectivity limitations. This approach aligns with Amit Kumar’s observation that AI development cannot follow the traditional “bullock cart to bullet train” paradigm that excludes 900+ million rural residents.


Success requires balancing automation with human oversight, avoiding both complete AI autonomy and excessive manual intervention. The human-in-the-loop approach allows for corrections and improvements whilst maintaining efficiency gains from AI processing.


Sovereignty and Open Architecture

The conversation addressed critical questions about technological sovereignty and long-term sustainability. Kumar distinguished between self-reliance and isolationism, emphasising that India will always utilise some external technologies whilst designing systems that remain “ready to shift” to maintain operational independence during geopolitical uncertainties.


Open architecture principles prove essential for avoiding vendor lock-in whilst maintaining data residency within India. The platform approach enables multiple AI use cases whilst preserving interoperability and standards that support future scalability and adaptation.


Cross-Ministry Collaboration and Future Vision

The success of MOPR’s AI initiatives has attracted interest from other departments. The Department of Drinking Water and Sanitation has approached the ministry about using Bhashini for village water committee meetings, demonstrating the potential for cross-ministry collaboration and solution replication. However, Nagar cautioned that advising other ministries represents “dangerous territory” since many departments already have robust systems in place.


Future developments focus on expanding AI capabilities while maintaining the core principle of solving genuine problems rather than showcasing technology. The ministry continues to explore new applications while ensuring that each innovation addresses specific stakeholder needs and enhances democratic participation.


Scaling Participatory Democracy Through AI

The discussion concluded with optimistic assessments of AI’s potential to strengthen participatory governance. Both speakers emphasised that technology should remain secondary to clear problem definition and stakeholder needs assessment. Success depends on understanding which aspects of governance challenges can be addressed through specific AI tools rather than pursuing technology-first approaches.


The speakers expressed confidence in India’s potential to lead global efforts in population-scale multilingual AI for governance, building on the country’s proven track record with digital public infrastructure including Aadhaar, UPI, and GST. The scale of implementation—with individual states like Uttar Pradesh exceeding the population of most countries—positions India uniquely to demonstrate AI governance solutions at unprecedented scale.


Implications for Democratic Innovation

This conversation reveals AI’s transformative potential for strengthening rather than replacing human-centred governance processes. The Ministry of Panchayati Raj’s journey demonstrates that successful AI implementation requires domain expertise focused on solving genuine problems rather than showcasing technological capabilities.


The emphasis on meeting stakeholders “halfway”—addressing both institutional accountability needs and user convenience requirements—provides a replicable framework for AI adoption in governance contexts. The frugal innovation approach, leveraging existing infrastructure whilst addressing real challenges, offers lessons for developing nations seeking to implement AI solutions at scale.


Most significantly, the discussion positions AI as a democratic enabler that removes barriers to participation rather than creating new forms of exclusion. By eliminating language barriers, simplifying documentation processes, and enabling citizens to engage with governance systems in their preferred languages and at their convenience, AI tools like Sabha Sar and eGram Swaraj strengthen the foundational principles of panchayati raj.


The conversation ultimately presents a vision where AI serves democracy by empowering citizens, enhancing transparency, and enabling more effective participation in governance processes that directly affect rural communities across India’s vast and diverse landscape. This approach demonstrates that when properly implemented with clear problem focus and stakeholder engagement, AI can become a powerful tool for strengthening participatory democracy rather than undermining it.


Session transcript: complete transcript of the session
Shri Alok Prem Nagar

Just a little background on why the Ministry of Panchayati Raj exists at the centre, because rural local governance is a state subject. We are rather new in this business; we came into being in the year 2004. Our objective, or the purpose why we exist, is how we can empower panchayats, how we can nudge states into having acts that really transform our people into self-governing, responsible local bodies, and so on. So, as a part of our job, we also have oversight over how the panchayats spend their finance commission grants. Finance commission grants are devolution grants; they go directly to the people, into their bank accounts. And subsequently, all panchayats, all two and a half lakh of them, are present on eGram Swaraj; right from planning to the payment stage, everything is done on a portal which is called eGram Swaraj. This portal works in the English language. So I will tell you, in 2019, when we were starting something called the People's Plan Campaign, I happened to attend a Gram Sabha in the state of Karnataka. I was there for something like 45 minutes, I was felicitated and sat on stage, and I didn't understand a thing. And then it struck me: how do you expect these people really to relate to what is happening? Because it is public money. Everybody in the panchayat needs to know what kind of plans are uploaded, how many works got done that were asked for in the plans, how much it cost to do them, and subsequently they can raise issues in the meetings pertaining to the works close to their residences. And along came Bhashini. In the year 2023 we had an event called Manthan, where we invited a lot of people from the industry to tell us how we could conduct our business better, and so Bhashini was a revelation. Imagine that a person from a panchayat is looking at the expenses page for his or her gram panchayat, and by the end of the month he has to pay the expenses, and by a click of a button they are able to see it in their own language. It was magic. That was the starting point.

Yeah, and subsequently, of course, we went from there, and we found out through a survey that what really hurts a panchayat secretary is not being able to produce the minutes of meeting in time, which are very important, which are the only record of a panchayat's proceedings. And then again, using Bhashini and another tool, we were able to create Sabha Sar, in which, if you input the video or audio recording of your meeting, you are able to get a minuted draft, which you can then edit and upload. So that was miracle number two. And briefly, if I could also address Swamitva, the scheme that you mentioned: Swamitva is a scheme where drone surveys are carried out over all the village habitations, so there are these pictures

that are subsequently converted to orthorectified images, and they lead to property rights for the people living inside those villages. But the way the images have been captured, there is dense point cloud information, all of which was getting wasted. Why? Because we were confining our attention only to the orthorectified images. So we had the AI guys look at that, and they converted all those rooftops that they could see into solarization potential. As a result of which now, out of the 3.3 lakh gram panchayats where drone surveys have been carried out, in 2.38 lakh gram panchayats you can go to Gram Manchitra, zoom into your village, click the icon corresponding to the solar suitability potential, and it will tell you, roof-wise, how many panels you can fit there.

We've gone further and we've integrated that with the PM Suryaghar Yojana portal, as a result of which the Gram Panchayat can drive it like a campaign and lead to greater rewards for everybody all around.

Moderator

Actually, it reaches the last-mile citizen when you talk about those benefits. India's last mile operates in local languages and dialects, and, as you mentioned, you are solving that problem. So in your view, how critical is language AI in ensuring that digital governance platforms are inclusive and participatory, and that they increase citizen trust and participation in Gram Sabhas?

Shri Alok Prem Nagar

Like I said, people are now able to follow what was written. They could still see it before, of course, in the English language, but then they'd have to go to the person they knew to be very smart in the village and have this person read it out to them. Now they can see it at their leisure. Not just people here, but people outside who are working in Mumbai can see what is happening in their panchayats close to Pune or something, and immediately they can get active about it. And the minuting tool that I mentioned opens a whole new set of avenues now. You can have a record, then against that you can have action-taken reports, and then you could have follow-up in the next meeting.

It makes it all amenable to a very systematic representation on portals. So that is what some of the states have already started doing. And it is truly remarkable that anybody can go in there. And when I say anybody, I don't mean just the panchayat secretaries. Anybody in a village can drill into their gram panchayat's record and see, corresponding to the finance commission grants for any year, what was the plan, against which how much has been executed, how many bills were prepared against each activity, and what is the status of the payment: whether it has been completed, where the asset exists, the geotags, and then you can zoom in and maybe see it on Gram Manchitra.

So there are great rewards for everybody all around. And we need to, of course, now intensify it through a capacity building training program. That is something we started doing from the previous year. But it has been an incredible journey. And it is being adopted all over.

Moderator

So, Alokji, let's talk a little bit about Sabha Sar's impact, and let our audience know about it. With its launch on 14th August 2025, MOPR introduced an AI-enabled voice-to-text meeting summarization tool powered by Bhashini ASR services. As of 4th February 2026, over 1,15,000 gram sabha meetings have been processed. So this is a good number; thank you for the round of applause. So what structural changes have you observed in panchayat functioning after Sabha Sar?

Shri Alok Prem Nagar

Sabha Sar was one thing that we carried out for the convenience of the panchayats and the panchayat secretaries, as opposed to eGram Swaraj, which was our selfish motive: we wanted panchayats to plan there and show all their vouchers there so that we could tell that this is how the money has been spent. But Sabha Sar actually came through as a part of a survey that was carried out using RapidPro by UNICEF. We asked something like 8,000 panchayat secretaries all over the country: how do you spend your time? How much of it is spent in inspections, attending programs and meetings, and making records? One thing that came through was the conduct and recording of meetings; for 65% of the respondents, that was the activity that was sitting, you know, very heavy on their entire time availability.

And so, having realized this and having the help of Bhashini, we converted it into a tool. In Bhashini it's very simple; there is no big standard operating procedure, as it were. If you're having a meeting, there has to be a recording device; it could well be your mobile phone. And then, through audio or video recording, you can just place it each time somebody speaks. And later on, you input this into the Sabha Sar tool. The Sabha Sar tool is not something that is a part of the device on which you carried out your recording, so the issue related to connectivity in villages is something that we've been able to sidestep. And once you do that, it gives you a draft minute of meeting.

So Bhashini turns it into English, and the English text is summarized using the AI engine. Again, Bhashini gives it back to them in their own language. And that efficiency, yes, it's voila: the person can just make a few changes and upload it. And we've had some heartfelt gratitude coming to us from villages as a result of this.

Moderator

Okay. So has the structured documentation improved transparency, participation tracking or monitoring of meeting frequency and agenda quality too?

Shri Alok Prem Nagar

Now that the minute is ready, if there are five items, ten items, the states that have really gone ahead and adopted it, which is Odisha, which is Tamil Nadu, which is Tripura, all these people are into the second stage now, where they are looking at the minutes of meeting and refining them into tools that help them keep track of the activities after they have been created. We also realized through our meetings: why is the number just 1,15,000? There are a whole lot of people whose languages do not exist on Bhashini. So from there, we asked those states to provide Bhashini with the necessary expertise so that they can train their bots.

And they're already working on something like 11 more languages, which includes Assamese and Bodo and Maithili and Santali and whatnot. Yes, so those languages are also coming. It's been a very gratifying experience, and the learning continues.

Moderator

Yeah, it's commendable that things have reached that level. So over to you, Amitji: from an accountability lens, does structured documentation change behavior within governance systems?

Amit Kumar

Thank you. So I think, you know, if you have understood the enormity of the situation, what we are talking about is 250,000-plus gram panchayats and different kinds of languages. So just to circle back, look at the frugality of the situation. For example, in India people generally talk about either living in a bullock cart stage or aspiring for a bullet train. The point is, if AI has to tell us how we will learn in the future, how we will transform, then we cannot leave out the 900-plus million people who are living in villages. Absolutely. The idea is not to make it a very, very urbanized, very, very elitist idea, that AI is only for urban areas, AI is only for industries, AI is only for the commercial sector. Obviously this is a journey, so you have to start somewhere. For example, the frugality I was talking about: we did not ask gram panchayats to invest in anything. All they need to have is a mobile phone, which anyway they have. And the idea is just to record and upload. Obviously there will be some challenges, and some resistance also in the beginning. But, you know, once they get used to it... so for example, today we are asking them to just upload their recording, right?

The rest is done by the system. And the system also has a provision for a human in the loop, so that we can go and correct it. Now, tomorrow we see the next step, what we can do perhaps: when the next meeting happens, we can also populate the agenda from the last meeting. So what was discussed last time, what was committed, whether you are doing it or not doing it, and then everything goes into the public domain. Generally, the people who live in cities know that when there is an RWA meeting, nobody goes and attends, but they all, you know, wage warfare in the WhatsApp group.

So same in the village also; it's not easy to bring people. But once they start getting the hang of it, that okay, there is a meeting, I am getting the MoM (minutes of meeting), and it's available in the public domain, then we are using AI, AI is for good, AI can do it. AI can also be leveraged for the rural sector; why does it have to be very, very elitist? So that's just a beginning; it's just a journey. And also, if you see it from an idea point of view, this is a phenomenal idea for the Ministry of Panchayati Raj. Let me congratulate sir and the entire team for thinking of something like that.

Right. Because AI is all about the idea and the use case. If you have the right idea, you can do wonders, but you have to have the idea and, kind of, you know, the muscles to execute it. So that way, I believe this whole documentation will do wonders for them. Gram panchayats will also realize something which was missing in most parts of the world: record keeping, accountability, transparency, and so on and so forth. Because generally these decisions were taken by some people only and executed by some, and the large population was largely kept out of it, knowingly or unknowingly. So I think that's what I said: it will change the way they were.

It will change the way they think, because we are starting only with, let's say, meetings, but now they will start thinking, and there will be demand from states and otherwise: what more can be done with AI? So the broader case would be achieved. Sabha Sar is an example; like Praman, which we are doing, we have launched this Pancham, you know, bot also, for all elected and selected representatives. So I think it's a great experience; efficiency would obviously help them adopt. I mean, let me tell you, in our own corporate meetings some of us are still making notes, despite being on Teams, despite using Copilots, despite having all tools at our disposal. We still expect a junior guy to take notes and circle back. So that's a cultural change which you have to also see. And these changes couldn't have been possible if we didn't have infrastructure like Bhashini, because how did the ministry get benefited on its own? We have infrastructure like Bhashini, right?

We have the GPUs that became available to us through the IndiaAI Mission, right? Otherwise, procurement itself could have been a big challenge. And we have a team to build applications. So it takes a village to move something, right? That's what has happened here.

Shri Alok Prem Nagar

Thank you for sharing your thoughts. In fact, continuing with that, the Department of Drinking Water and Sanitation has actually approached us: for the meetings of their VWCs, the village water committees, they want to use Bhashini, and there has been some initial interaction between the two.

Moderator

That's commendable, I would say. That's awesome. So Alokji, let's talk of some implementation challenges with AI in rural India. AI in rural governance is transformative but complex. What are the biggest operational challenges? Infrastructure (though I think, Amit, you were about to share that), training, dialect diversity, and connectivity? So what challenges are you facing, and how receptive are panchayat functionaries and rural citizens to AI-enabled systems?

Shri Alok Prem Nagar

Challenges, of course, there are many, as anybody would tell you. What we have found is in the adoption of eGram Swaraj by our gram panchayats. A case in point: Uttar Pradesh has something like 59,000 gram panchayats. For Uttar Pradesh, onboarding eGram Swaraj seemed like an impossible task, because it involved registering digital signing certificates and then everybody agreeing to completely dispense with checkbooks; all payments were then going to be digital. Can you imagine, Uttar Pradesh did it in 40 days flat, all 59,000 gram panchayats. So my point is that you have to be ready with a product that addresses their needs and is friendly. My need was that I needed the money well accounted for, and their need was a system that made it very easy for them to do it.

So we met halfway, and if UP can do it with 59,000, I am not prepared to hear an excuse from any other state in the country. It's a trial by fire. Likewise for Sabha Sar. Again, as I said initially, there was a demand indicated from the states. So when we set out to meet that, we were clear about what we were looking for, and people were so forthcoming. In fact, Bhashini also enabled me to write letters to the states in their own languages, and people were gushing with affection. I got a letter in Telugu for the first time, and all that. So there are challenges, but the Gram Panchayats are predisposed to meet you halfway.

So you need to begin that journey, and we have seen that with regard to a number of things. There have been campaigns: every year they carry out a campaign from the 2nd of October to the 31st of December, which typically extends into January, where all two and a half lakh Gram Panchayats prepare their Gram Panchayat development plans and upload them on the portal. So 2.5 lakh, that is 250,000 Gram Panchayats, all of them planning for the next year, so that before you enter the next financial year their plans are ready. We don't do that in the departments, in the ministries. And all these Gram Panchayats have done it not just once or twice; they started in 2018 and have continued ever since.

In the COVID year, there was a suggestion to pause that campaign, and there was massive pushback from the states: no, we want to do it. The momentum was so great that they still did it. So there are challenges, but if we make an application like you were saying, where this is a simple recording device, this is a mobile phone, and there is nothing you need to procure to set it up; if you make a simple tool, people will grab it with both hands. So I think that is the embracing of challenges, rather, with the response we are getting with Bhashini.
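[Editor's note] The offline-first flow described here for Sabha Sar (record a gram sabha meeting on an ordinary phone, queue it locally, and upload for transcription and summarization once connectivity is available) can be sketched roughly as follows. All class, function, and field names are illustrative assumptions, not the actual Sabha Sar or Bhashini API.

```python
# Hedged sketch of an offline record-then-sync queue. In production the
# transcribe step would call a speech pipeline (Bhashini in this case);
# here it is stubbed out so the flow itself is visible.
from dataclasses import dataclass, field


@dataclass
class Recording:
    meeting_id: str
    language: str          # e.g. "te" for Telugu, "or" for Odia
    audio_path: str
    uploaded: bool = False


@dataclass
class OfflineQueue:
    pending: list = field(default_factory=list)

    def record(self, meeting_id, language, audio_path):
        # Step 1: capture happens offline; nothing leaves the device yet.
        self.pending.append(Recording(meeting_id, language, audio_path))

    def sync(self, transcribe):
        # Step 2: when connectivity returns, ship each recording to the
        # speech pipeline and collect draft minutes in the local language.
        minutes = []
        for rec in self.pending:
            if not rec.uploaded:
                minutes.append(transcribe(rec))
                rec.uploaded = True
        return minutes


def fake_transcribe(rec):
    # Stand-in for the real ASR + summarization step.
    return f"Draft minutes for {rec.meeting_id} in {rec.language}"


queue = OfflineQueue()
queue.record("GS-2025-001", "te", "/sdcard/gs1.wav")
queue.record("GS-2025-002", "or", "/sdcard/gs2.wav")
print(queue.sync(fake_transcribe))  # one draft-minutes string per meeting
```

The point of the design, as described in the conversation, is that the recording device and the processing tool are decoupled, so patchy village connectivity only delays the sync step rather than blocking the meeting itself.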

Moderator

So for ministries delivering last-mile services, such as the Ministry of Rural Development and the Ministry of Agriculture and Farmers Welfare, what lessons from MOPR's AI journey would you share? And how important is open architecture, in your sense?

Shri Alok Prem Nagar

That is dangerous territory. I am not in a position to start advising anybody, because they've got pretty robust systems of their own. Look at NREGASoft and the PM Awas Yojana: they are running schemes which are very pointed. Awas Yojana is just about houses. MGNREGA is a scheme that is, of course, as large as the things you do with the Finance Commission grants, but it is fairly well organized. And in all of these, typically, the beneficiary is the individual. In Panchayati Raj there are individuals at the end of it, but our emphasis is on the institution, the panchayat, and not just eGram Swaraj and the things that we do for their accounting and planning.

We also hooked up with the Meteorological Department, and there are daily forecasts being generated for every gram panchayat. People are able to see this on their phones, with the same ability as they see everything else, using Bhashini. So it's a great enablement all around, and it can only get better.

Moderator

Absolutely. So, Amitji, over to you. How critical is open architecture in ensuring long-term sustainability and avoiding vendor lock-in?

Amit Kumar

If I can take a minute and talk about the previous question also.

Moderator

Sure, please go ahead.

Amit Kumar

Sir rightly mentioned that different ministries have different mandates; it's not an apples-to-apples comparison. But you also have to see that the main role of Panchayati Raj, as I understand it, is mobilization, because they are not running major schemes of their own compared to others. And generally, best practices don't have to be in the form of technology or architecture only. The idea is that if you go down from the top, there are two different ministries, but if you go to the village, you will see the same infrastructure and the same set of people working for both departments. So the idea is, if one can do it, others can also do it. There is a lot of learning in terms of method: how we overcame obstacles, how we mobilized, how we implemented some of these solutions.

And I am sure RD and agriculture are also doing a lot of things, but their mandate is much bigger. Still, they can take a lot of pride and learning from the success we have had. What was the second question? The second one was: how critical is open architecture in ensuring long-term sustainability and avoiding vendor lock-in? So you must be hearing the word "sovereignty" quite a lot nowadays. The whole idea of being sovereign in any part of technology, be it defense, be it IT, be it anything, is survivability. The idea is that despite any kind of geopolitical risk, we should survive.

Our systems should keep running, right? Now, people generally confuse sovereignty with making everything local to India. That's not the case; we will always have some technology from outside. But we have to design in a way that is ready to shift: from a technology point of view, we have interoperability, the standards we have chosen, the models we have chosen, infrastructure we can move around, and teams we can control. The data residency has to be within India, and the data is with us. So obviously, if we have trained on one model, we can train on something else as well.

The idea is also to look a little long term. See, when we started, obviously there were a lot of POCs. Nobody knew how AI would behave; we still don't know. So obviously you have to start somewhere, and you also have to ensure it stays easy later. When we start with one use case, it is manageable; when the department itself becomes fully AI-enabled and we have 10 AI use cases running, then it becomes a problem of management. So that's where we need to plan better for the future. It's not enough that a use case is defined and then we find an easy method of procuring the infra or the model we already knew.

So going forward, I think there will be a platform approach, where we have to think for the future as well: these AI use cases are likely to keep coming, different kinds of AI, DL and so on, and accordingly we have to have open architecture, the way we did in normal digital transformation. Even in digital transformation, there used to be a time when we created our own independent monolithic applications. But now we are creating applications which are API-based, can integrate with anybody, are futuristic, can scale, and are modular. The same concepts have to be used for AI.
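[Editor's note] The "ready to shift" point made here, that open architecture means a model or vendor can be swapped without rewriting the application, is essentially dependency inversion. A minimal sketch, with entirely hypothetical provider names:

```python
# The application depends only on a small interface; swapping an external
# API for a sovereign, locally hosted model is then a one-line change.
from abc import ABC, abstractmethod


class TranslationProvider(ABC):
    @abstractmethod
    def translate(self, text: str, target_lang: str) -> str: ...


class ExternalApiProvider(TranslationProvider):
    def translate(self, text, target_lang):
        return f"[external:{target_lang}] {text}"


class LocalModelProvider(TranslationProvider):
    def translate(self, text, target_lang):
        return f"[local:{target_lang}] {text}"


class GovernanceApp:
    def __init__(self, provider: TranslationProvider):
        self.provider = provider

    def publish_minutes(self, text, lang):
        return self.provider.translate(text, lang)


app = GovernanceApp(ExternalApiProvider())
print(app.publish_minutes("Gram sabha minutes", "hi"))  # [external:hi] ...
app.provider = LocalModelProvider()   # swap provider; app code unchanged
print(app.publish_minutes("Gram sabha minutes", "hi"))  # [local:hi] ...
```

This is the same interoperability-over-monolith argument made for digital transformation generally: standardize the seam, not the vendor behind it.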

Moderator

Well said. So I think adoption comes with responsibility, and that's what you are looking at while scaling. Alokji, Sabha Sar demonstrates how language AI can power grassroots governance. After Sabha Sar's success, what deeper integrations do you envision with Bhashini, and what does the next phase of collaboration look like? Let's talk about that.

Shri Alok Prem Nagar

About 16 have already started providing all those common minimum services. But the minimum won't do; we wanted more. So we had a model list, a union set, if you will, of all the desirable services being delivered, and the ministry carried out an exercise through an expert committee. We have a much bigger list now. So we are not satisfied with the minimum; we are working towards that, and I think AI has great potential in helping us. Service delivery is something people don't yet know to expect, and people are going to be speaking in any number of languages. I think that is the next step. The government has always been very invested in providing services, in making ease of living easier, as it were, and in providing all manner of things.

Everything is finally a service. You need to see a doctor, you need your road fixed, you need a street light to be working, you want the logged water to be drained. So people should come to expect these services; they should demand them from their Gram Panchayats. There are mechanisms for doing that, because Gram Panchayats don't have a lot of resources in terms of manpower, in terms of people at their beck and call to carry out the activities flowing from the charter. So there are systems in a lot of these villages: you have common service centres in some states, and some states have their own systems of common service centres, like UP, like Bapuji Seva Kendra in Karnataka, like Mee Seva.

So we need to take that further, and we need people to be able to talk and find out: is a certain service available to them, and can they avail it in their village? If they are to do that, what is the mechanism? And if they've already made an application, the system should be able to tell them where it currently stands. That is a very wide area. Like I said, there are a number of services. We also learnt of a pilot carried out in Guwahati where a bus had a camera; it would drive through, capture any number of images, and on that basis assign issue labels: if there is a drain overflowing, it takes note of that; if there is a pothole, it takes note of that. Then it assigns it to the agencies whose job it now becomes to fix it. We are not there yet, but we have a mobile interface called Meri Panchayat, which ports a lot of information from eGram Swaraj. Meri Panchayat also has the capability of capturing images of the issue being reported. I think the next step is that the system makes sense of the image and assigns it to the necessary department.

There are people mapped whose job it is to carry it out, and if it doesn't happen within a certain amount of time, there is escalation. We need to go deeper into that system. That, I think, is the next frontier. And of course, because it involves vocalization of your demands, Bhashini is absolutely critical in this. So when we say there is a long way to go, I think that phrase is no longer relevant. It's a short way now; not a big journey, but an intelligent journey.
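[Editor's note] The triage-and-escalation loop described here (classify a reported issue, assign it to the mapped department, escalate if unresolved past a deadline) can be sketched as below. The issue labels, department names, and SLA are hypothetical placeholders, not Meri Panchayat's actual schema; the image-classification step is represented by a pre-assigned label.

```python
# Minimal sketch of issue routing with time-based escalation.
from datetime import datetime, timedelta

# Hypothetical mapping from an issue label (the output an image classifier
# might produce) to the department responsible for fixing it.
DEPARTMENT_FOR = {
    "overflowing_drain": "Sanitation",
    "pothole": "Public Works",
    "streetlight_out": "Electricity",
}


def assign(issue_label, reported_at, sla_days=7):
    # Route the ticket to the mapped department, falling back to the
    # panchayat office for unrecognized labels, and stamp a deadline.
    dept = DEPARTMENT_FOR.get(issue_label, "Gram Panchayat Office")
    return {
        "label": issue_label,
        "department": dept,
        "deadline": reported_at + timedelta(days=sla_days),
    }


def needs_escalation(ticket, now):
    # Escalate any ticket still open past its deadline.
    return now > ticket["deadline"]


ticket = assign("pothole", datetime(2026, 2, 1))
print(ticket["department"])                              # Public Works
print(needs_escalation(ticket, datetime(2026, 2, 10)))   # True
```

In a real deployment the `assign` step would sit behind the image-understanding model, and escalation would notify the next level rather than just return a flag.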

Moderator

So India is building public digital infrastructure for AI at scale. How do we balance scale with accountability and public trust? We have talked much about how we are building things, but let's talk about the other side. And can India lead the world in population-scale, multilingual AI for governance?

Amit Kumar

Of course it can; I am sure about that. One thing you all have to realize is that whatever we do is at population scale and unparalleled because of our size, so even our POCs exceed the scale of what European countries do. Sir talked about UP's nearly 60,000 panchayats; if you look at UP alone, it would probably be among the top 10 countries in terms of population and size. I think the world is vouching for us when it comes to use cases. So we have that scale now, and we have the experience behind us: we did Aadhaar, we did UPI, we did FASTag, we did GST, and we did Income Tax.

So now we have the confidence that we can do anything at scale, and with the same frugal approach we will do it 10 times cheaper than the Western world, and certainly not worse, only better. Also, over the last decade we have evolved. For example, the concept of privacy with the DPDP Act, and consent-based usage, which Aadhaar brought; a lot of things have improved on the policy side. Once you have policies in place, systems become easy, because the systems themselves act as the rules; you don't need so much human intervention or discretion.

Since we have done so much, look at a very simple case: Bhashini. I remember, four or five years back, Amitabh and I used to debate whether we even needed a Bhashini, because we already had services like Google Translate. But in hindsight that was the right call. In the future we have to have sovereignty; we need to be frugal, and we don't want to use applications which are very expensive from a taxpayer-money point of view. We have done a lot of similar things.

As for the next step: if you roam around the AI summit, you will see how many LLMs and SLMs we are building on our own. The honourable minister talked about the five layers. At the application layer, I think we have ample talent to build applications. For LLMs, we use open LLMs, but we are developing our own, and Bhashini is one of the common infrastructures. Energy will be taken care of; for infra and chips we will anyway have a dependency, but the rest of the world also has dependencies; it's not that everybody has rare earths and everybody is building chips. So I believe, because we have that technical know-how, which is our bread and butter nowadays, we'll be able to take the learnings from all these systems and move forward. We were a bit slow in the last year or two because AI itself was new for everyone, so we took some time, but from this year onwards we'll really scale it up, because we have tasted blood, we have seen the success, and we will.

Moderator

Sure, thank you for sharing that. As we come towards the close of this conversation, I would like to leave you with one final thought: if Panchayati Raj institutions are the foundation of democracy, can AI, when built on a public stack and powered by language inclusion, become the strongest enabler of participatory governance in the 21st century? Just closing thoughts from you both. Alokji, would you?

Shri Alok Prem Nagar

Absolutely. He was just telling you that we've been able to do things at scale. This thing about UP that I told you, I wear it like a badge, to have done it in such a place. And it's not an easy ask, because there are so many stakeholders; they've got various issues of their own, and you've got to engage with them and address those things. If my problem is well defined, and if I know what kind of thing is going to help me redress it, like Bhashini did for us, then I think what you said is going to come true. Being able to understand my problem, and knowing which parts of it can be fixed in what manner using the various tools available, that is the key. It's not an oversimplification: good servant, bad master. That is something that stays, and it's not going to land you in the right places if you just let it run around like an animal.

But if you know where to put it, what modules to insert where, and what is being used in the background, that would make you more confident. I'm not really an AI person, so I'm just speaking on the strength of what I have learned, and the experience thus far has been outstanding, partly because we've had a very good partner. But other than that, I'm not throwing it all open to AI. I don't wear T-shirts saying I love AI or something. I have a problem, and it needs fixing, and I need to know which aspects of AI can help me fix it in the best possible manner.

And that's my take on it.

Amit Kumar

So like Sir said, Sir is not an AI person, and neither am I. But he was transparent enough to share that. Look at it this way: none of us were, right? If you're talking about AI, I have been doing digital transformation for the public sector for over 20 years. Obviously, there was no AI then; there was no DPI or DPG either, those are names we retrofitted later. And the idea of Panchayati Raj itself is participative governance: people assemble in the Gram Sabha and decide on the money they're getting, how to spend it, and what to prioritize.

Absolutely. And if AI tools like Praman, Sabha Sar, and Pancham can help strengthen that, what more can you expect for participative governance, from a democratization point of view? So I think technology sometimes becomes secondary; in my view, most of the time. The ideas have to be clear in terms of what you want to achieve, what problem you want to solve, at what scale, and what guardrails you have to put in place. For example, when we do AI, it cannot be 100% autonomous. Of course. And it cannot be 100% human-in-the-loop either.

Because if we have each and every transaction approved by a human in the loop, it defeats the purpose of AI, and there is no AI; then we are still living with rule-based algorithms. So the idea is that with AI we also train, monitor, have a mechanism to take complaints, and have a mechanism to keep training it better so that we improve accuracy. That is how the AI journey works; it is slightly different from the previous digital transformation journeys, which were more like transactional systems. That said, if you look at Sabha Sar currently, from whatever I am hearing from people and from the teams in the field, it is giving great accuracy in terms of translation and summarization.

And I'm sure whatever small areas there are to improve, it will improve. We cannot stop it now; once we have boarded a flight, we can only get down where we have to. So I think the future is bright. And from an MOPR experience point of view, I'm sure it will energize and motivate many others. I can say from my experience that if MOPR can use AI tools in rural areas, there is no stopping us as a nation. This is truly an achievement for MOPR and the government.

Moderator

So, do you want to say anything regarding this, Alokji?

Shri Alok Prem Nagar

I thought of another application that works, something we've been working on: spatial development plans. We again engaged with a lot of panchayats close to the highways. Typically, if a panchayat is on a national highway, close to a big city, and has a population of 10,000 plus, it was eligible to participate in this program. There were 34 GPs that we involved, and we got the planning and architecture colleges to prepare spatial plans for them. A spatial plan is futuristic: it zones, it looks into the future to see how the place is going to grow, it devises road networks, and it tells people what they could become over a period of time. We had a conference with the gram panchayats around Bhopal, and the people were so annoyed: we don't need a spatial plan.

Over a period of time, of course, we told them what it was going to be, but we had this epiphany: people need to be able to see what the spatial plan will help them become. So at the next national conference, we had a visualization for each of these 34 spatial development plans, and we showed people that if you want to become this, you have to do this. And then there was greater enthusiasm. So the people on whom this plan falls, who are going to be subjected to it, if I could use those words: if they're not on board, there is no way you can carry it out. And that, I think, is wide open.

And since then, the entire state of Andhra Pradesh has gone ahead and said that all their planning is going to be spatial plans. So that is something amenable to AI tools. A final thing I remembered: lots of times we need to convey things through audio-video messages. He mentioned Pancham. Pancham is a WhatsApp-based chatbot platform which allows us to have a two-way conversation with all the sarpanchas and panchayat secretaries in the country. So if there is messaging that needs to be conveyed, if there are videos that need to be quickly created using AI tools, that would be hugely effective in getting the message across in the quickest possible way.

Thank you.

Moderator

Thank you so much for such insightful thoughts on gram panchayats and how things work behind the scenes. I'm sure the audience was truly unaware of what's happening here, and this conversation has given a new tangent to how we look at rural development. Thank you so much, Shri Alok, and thank you so much, Shri Amit, for sharing these thoughts on gram panchayat development. Thank you so much for this fireside chat. I would like to call Ms. Deepika to please felicitate Mr. Alok.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Shri Alok Prem Nagar
13 arguments, 139 words per minute, 3,601 words, 1,546 seconds
Argument 1
Ministry of Panchayati Raj exists to empower panchayats and transform them into self-governing bodies, with oversight of finance commission grants through eGram Swaraj portal
EXPLANATION
The Ministry of Panchayati Raj was established in 2004 to empower panchayats and encourage states to create acts that transform local bodies into self-governing, responsible institutions. The ministry oversees how panchayats spend their finance commission grants through the eGram Swaraj portal, which covers all 2.5 lakh panchayats from planning to payment stages.
EVIDENCE
All two and a half lakh panchayats are present on the eGram Swaraj portal from planning to payment stage; the portal initially worked only in English
MAJOR DISCUSSION POINT
Digital transformation of rural governance through centralized oversight and empowerment
Argument 2
Language barriers prevented rural citizens from understanding governance processes, leading to the adoption of Bhashini for multilingual support
EXPLANATION
During a Gram Sabha meeting in Karnataka in 2019, the speaker realized that rural citizens couldn’t understand proceedings conducted in English, creating a barrier to participation in governance. This led to the adoption of Bhashini to enable citizens to access information in their local languages and participate more effectively in local governance.
EVIDENCE
Personal experience at a Karnataka Gram Sabha where the speaker didn’t understand anything for 45 minutes; integration with Bhashini allowed citizens to see expenses and plans in their own language with a click of a button
MAJOR DISCUSSION POINT
Language barriers as obstacles to inclusive governance and citizen participation
AGREED WITH
Amit Kumar
Argument 3
Survey revealed that 65% of panchayat secretaries spent excessive time on meeting documentation, leading to the development of Sabha Sar tool
EXPLANATION
A survey conducted by UNICEF using RapidPro with 8,000 panchayat secretaries across the country revealed that meeting conduct and recording was the most time-consuming activity for 65% of respondents. This insight led to the development of Sabha Sar, an AI-enabled tool that converts audio/video recordings into meeting minutes using Bhashini.
EVIDENCE
Survey of 8,000 panchayat secretaries using RapidPro by UNICEF; Sabha Sar uses simple recording devices like mobile phones and converts recordings into draft minutes through Bhashini
MAJOR DISCUSSION POINT
Addressing operational challenges in rural governance through AI-powered solutions
Argument 4
Over 115,000 gram sabha meetings have been processed using AI-enabled voice-to-text summarization powered by Bhashini
EXPLANATION
Since the launch of Sabha Sar on August 14, 2025, the tool has successfully processed over 115,000 gram sabha meetings as of February 4, 2026. The tool converts audio/video recordings of meetings into text summaries in local languages, significantly reducing the administrative burden on panchayat secretaries.
EVIDENCE
Specific numbers: 115,100 meetings processed between August 14, 2025 and February 4, 2026; tool works by inputting audio/video recordings and generating draft minutes in local languages
MAJOR DISCUSSION POINT
Measurable impact and scale of AI implementation in rural governance
Argument 5
States like Odisha, Tamil Nadu, and Tripura have adopted Sabha Sar and are developing second-stage tools for activity tracking
EXPLANATION
These states have not only implemented Sabha Sar but are advancing to create additional tools that use the meeting minutes to track activities and follow-up actions. They are developing systems to monitor what was discussed, what was committed, and what actions were taken based on the structured documentation.
EVIDENCE
Specific mention of Odisha, Tamil Nadu, and Tripura as states that have adopted the tool and are working on second-stage implementations for activity tracking
MAJOR DISCUSSION POINT
Progressive adoption and evolution of AI tools across different states
Argument 6
Expansion to 11 additional languages including Assamese, Bodo, Maithili, and Santali to address language gaps
EXPLANATION
The ministry recognized that many panchayats couldn’t use Sabha Sar because their languages weren’t available on Bhashini. To address this gap, they are working with states to provide expertise to train bots for 11 additional languages, ensuring broader linguistic inclusion.
EVIDENCE
Specific mention of 11 new languages being added, including Assamese, Bodo, Maithili, and Santali; states are providing the necessary expertise to train the language bots
MAJOR DISCUSSION POINT
Expanding linguistic coverage to ensure no community is left behind
Argument 7
Language AI enables citizens to access governance information in their local languages, allowing diaspora to monitor their village panchayats remotely
EXPLANATION
With Bhashini integration, citizens can now view panchayat information in their preferred languages at their convenience, without needing to rely on educated intermediaries. This also enables people working in cities like Mumbai to monitor what’s happening in their home villages near Pune and become actively involved in local governance.
EVIDENCE
Example of people working in Mumbai being able to monitor panchayats near Pune; citizens can now see information at their leisure instead of asking smart village residents to read English documents
MAJOR DISCUSSION POINT
Democratizing access to governance information and enabling remote participation
Argument 8
Bhashini integration allows citizens to view expenses, plans, and payment status in their own language with a simple button click
EXPLANATION
The integration of Bhashini with eGram Swaraj portal enables any citizen to access detailed financial information about their gram panchayat in their local language. They can see finance commission grants, execution plans, bills, payment status, asset locations with geotags, and even zoom into specific locations on the gram panchayat portal.
EVIDENCE
Citizens can drill into their gram panchayat’s records to see finance commission grants, execution plans, bills prepared for each activity, payment status, asset geotags, and zoom functionality on the portal
MAJOR DISCUSSION POINT
Transparency and accessibility of financial information through language AI
Argument 9
Despite infrastructure and connectivity challenges, Uttar Pradesh successfully onboarded all 59,000 gram panchayats to eGram Swaraj in 40 days
EXPLANATION
Uttar Pradesh’s achievement of onboarding all 59,000 gram panchayats to eGram Swaraj in just 40 days demonstrates that large-scale digital transformation is possible even in challenging environments. This involved registering digital signing certificates and completely transitioning from checkbooks to digital payments.
EVIDENCE
Specific achievement: 59,000 gram panchayats in UP onboarded in 40 days flat; involved digital signing certificates and complete transition from checkbooks to digital payments
MAJOR DISCUSSION POINT
Proof of concept for large-scale digital transformation in rural areas
Argument 10
Rural communities are receptive to AI-enabled systems when tools address their specific needs and are user-friendly
EXPLANATION
The speaker argues that if you create products that address genuine needs of panchayats and are user-friendly, rural communities will readily adopt them. The success with eGram Swaraj and Sabha Sar demonstrates that panchayats are predisposed to meet implementers halfway when the tools solve real problems.
EVIDENCE
Success stories with eGram Swaraj adoption across states; annual campaigns where 250,000 gram panchayats prepare development plans and upload them to portals; continued participation even during COVID when states requested to continue the campaign
MAJOR DISCUSSION POINT
Rural readiness for technology adoption when solutions are relevant and accessible
AGREED WITH
Amit Kumar
Argument 11
Connectivity issues in villages are addressed by allowing offline recording and later upload to Sabha Sar system
EXPLANATION
The Sabha Sar tool is designed to work around connectivity challenges in rural areas by separating the recording process from the processing. Users can record meetings on their mobile phones offline and later upload the recordings when connectivity is available, making the system practical for rural implementation.
EVIDENCE
Sabha Sar tool allows recording on mobile phones offline; the processing tool is separate from the recording device; users can upload recordings later when connectivity is available
MAJOR DISCUSSION POINT
Technical solutions designed to overcome rural infrastructure limitations
AGREED WITH
Amit Kumar
Argument 12
Integration of drone survey data with AI to identify solar potential across 238,000 gram panchayats, linked to PM Suryaghar Yojana
EXPLANATION
The Swamitva scheme’s drone surveys, originally conducted for property rights, generated dense point cloud data that was being wasted. AI analysis of this data now provides roof-wise solar panel installation potential across 238,000 gram panchayats, integrated with the PM Suryaghar Yojana portal for implementation.
EVIDENCE
3.3 lakh gram panchayats had drone surveys; 2.38 lakh now have solar potential data available; citizens can zoom into their village on gram Manchitra and see roof-wise solar panel capacity; integration with PM Suryaghar Yojana portal
MAJOR DISCUSSION POINT
Innovative reuse of existing data through AI to create new public services
Argument 13
Next phase involves AI-powered service delivery systems that can assign issues to appropriate departments and track resolution
EXPLANATION
The ministry envisions expanding AI capabilities to automatically categorize citizen-reported issues (like overflowing drains or potholes) from images and assign them to appropriate departments for resolution. This would include escalation mechanisms and tracking systems to ensure timely resolution of citizen complaints.
EVIDENCE
Reference to a Guwahati pilot where bus cameras captured images and assigned issue labels; mention of Meri Panchayat mobile interface that can capture images of reported issues; vision for automatic assignment to departments with escalation mechanisms
MAJOR DISCUSSION POINT
Future vision for comprehensive AI-powered citizen service delivery
Amit Kumar
9 arguments, 184 words per minute, 2674 words, 870 seconds
Argument 1
AI should not be limited to urban and commercial sectors but must include 900+ million village residents to avoid leaving them behind
EXPLANATION
Kumar argues that AI development in India cannot be elitist and confined to urban areas, industries, and commercial sectors. With over 900 million people living in villages, AI solutions must be designed to include rural populations to ensure equitable development and prevent digital exclusion.
EVIDENCE
Specific mention of 900+ million people living in villages; emphasis on avoiding elitist AI that only serves urban, industrial, and commercial sectors
MAJOR DISCUSSION POINT
Inclusive AI development that serves rural populations alongside urban areas
AGREED WITH
Shri Alok Prem Nagar
Argument 2
Frugal approach using existing mobile phones for recording without requiring additional investment from gram panchayats
EXPLANATION
The implementation strategy focuses on frugality by leveraging existing infrastructure rather than requiring new investments. Gram panchayats only need mobile phones they already possess for recording, with all processing done by the system, making the solution accessible and cost-effective for rural implementation.
EVIDENCE
No additional investment required from gram panchayats; only need existing mobile phones for recording; system handles all processing; human-in-the-loop provision for corrections
MAJOR DISCUSSION POINT
Cost-effective implementation strategies for rural technology adoption
AGREED WITH
Shri Alok Prem Nagar
Argument 3
Structured documentation through Sabha Sar changes governance behavior by improving accountability, transparency, and public participation
EXPLANATION
Kumar argues that structured documentation creates behavioral changes in governance systems by making proceedings transparent and accessible to the public. This leads to increased accountability as decisions and commitments are recorded and can be tracked, encouraging broader public participation in governance processes.
EVIDENCE
Comparison to urban RWA meetings where people don’t attend but engage in WhatsApp discussions; mention that decisions were previously taken by few people with large populations kept out; AI enables record keeping and accountability that was missing
MAJOR DISCUSSION POINT
How technology-enabled transparency changes governance dynamics and citizen engagement
AGREED WITH
Moderator
Argument 4
Success requires meeting stakeholders halfway – addressing both ministry’s accountability needs and panchayats’ operational requirements
EXPLANATION
Kumar emphasizes that successful implementation requires understanding and addressing the needs of all stakeholders. The ministry needed proper accounting of funds while panchayats needed user-friendly systems, and success came from creating solutions that met both sets of requirements rather than imposing top-down solutions.
EVIDENCE
Example of eGram Swaraj meeting ministry’s need for fund accounting while providing panchayats with easy-to-use systems; UP’s 40-day implementation success as proof of this approach
MAJOR DISCUSSION POINT
Stakeholder-centered approach to technology implementation in governance
Argument 5
India’s experience with population-scale digital infrastructure (Aadhaar, UPI, GST) provides confidence for AI implementation at unprecedented scale
EXPLANATION
Kumar argues that India’s successful implementation of large-scale digital systems like Aadhaar, UPI, FASTag, and GST demonstrates the country’s capability to implement AI solutions at population scale. This experience provides confidence that India can achieve AI implementation that is both larger in scale and more cost-effective than Western approaches.
EVIDENCE
Specific mention of successful implementations: Aadhaar, UPI, FASTag, GST, and Income Tax systems; claim of doing things 10 times cheaper than the Western world with better results; UP’s 60,000 panchayats compared to the top 10 countries by population
MAJOR DISCUSSION POINT
India’s proven track record in large-scale digital infrastructure as foundation for AI scaling
AGREED WITH
Moderator
Argument 6
Open architecture and sovereignty are critical for long-term sustainability and avoiding vendor lock-in while maintaining data residency
EXPLANATION
Kumar emphasizes the importance of technological sovereignty and open architecture to ensure India can survive geopolitical risks and maintain control over its systems. This includes interoperability, standard models, moveable infrastructure, controllable teams, and data residency within India to enable flexibility and independence.
EVIDENCE
Emphasis on survivability despite geopolitical risks; mention of interoperability, standards, models, infrastructure mobility, team control, and data residency requirements; distinction between sovereignty and making everything locally
MAJOR DISCUSSION POINT
Strategic considerations for maintaining technological independence and flexibility
Argument 7
AI democratizes governance by making information accessible to all village residents, not just educated intermediaries
EXPLANATION
Kumar argues that AI tools eliminate the need for citizens to rely on educated intermediaries to understand governance information. By providing direct access to information in local languages, AI democratizes participation in governance and ensures that all citizens, regardless of education level, can engage with government processes.
EVIDENCE
Reference to previous system where people had to ask smart village residents to read English documents; AI enables direct access without intermediaries
MAJOR DISCUSSION POINT
AI as a tool for democratizing access to governance information
Argument 8
Technology should be secondary to clear problem definition and understanding of what needs to be achieved at scale
EXPLANATION
Kumar emphasizes that successful AI implementation requires clear understanding of problems to be solved, the scale of implementation, and appropriate guardrails, rather than focusing primarily on technology. The approach should prioritize problem-solving over technological sophistication, with clear ideas about objectives and constraints.
EVIDENCE
Statement that technology becomes secondary most of the time; emphasis on having clear ideas about problems, scale, and guardrails; mention that AI cannot be 100% autonomous or 100% human-in-the-loop
MAJOR DISCUSSION POINT
Problem-first approach to AI implementation rather than technology-first approach
AGREED WITH
Shri Alok Prem Nagar
Argument 9
AI implementation requires balance between automation and human oversight, avoiding both complete autonomy and excessive manual intervention
EXPLANATION
Kumar argues that effective AI implementation must find the right balance between automation and human oversight. Complete autonomy is not acceptable, but excessive human intervention defeats the purpose of AI. The goal is to create systems that can operate effectively while maintaining appropriate human oversight and feedback mechanisms.
EVIDENCE
Explicit statement that AI cannot be 100% autonomous or 100% human-in-the-loop; mention of training, monitoring, complaint mechanisms, and accuracy improvement processes
MAJOR DISCUSSION POINT
Finding the optimal balance between AI automation and human oversight in governance
Moderator
7 arguments, 133 words per minute, 604 words, 271 seconds
Argument 1
Language AI is critical for ensuring digital governance platforms are inclusive and participatory, increasing citizen trust and participation in Gram Sabhas
EXPLANATION
The moderator emphasizes that India’s last mile operates in local languages and dialects, making language AI essential for solving accessibility problems. Digital governance platforms must be inclusive and participatory to build citizen trust and encourage participation in local governance structures like Gram Sabhas.
EVIDENCE
India’s last mile operates in local languages and dialects
MAJOR DISCUSSION POINT
The critical role of language AI in making digital governance inclusive and accessible
AGREED WITH
Shri Alok Prem Nagar
Argument 2
Structured documentation through Sabha Sar can improve transparency, participation tracking, and monitoring of meeting frequency and agenda quality
EXPLANATION
The moderator questions whether the implementation of structured documentation systems like Sabha Sar leads to measurable improvements in governance processes. This includes better transparency in decision-making, improved tracking of citizen participation, and enhanced monitoring of how frequently meetings occur and the quality of their agendas.
EVIDENCE
Reference to Sabha Sar’s launch on August 14, 2025, and processing of over 115,100 gram sabha meetings by February 4, 2026
MAJOR DISCUSSION POINT
Measuring the impact of AI-enabled documentation on governance quality
AGREED WITH
Amit Kumar
Argument 3
Structured documentation changes behavior within governance systems from an accountability perspective
EXPLANATION
The moderator raises the question of whether implementing structured documentation systems fundamentally alters how governance actors behave, particularly in terms of accountability. This suggests that when proceedings are properly documented and made transparent, it may change how officials conduct themselves and make decisions.
MAJOR DISCUSSION POINT
The behavioral impact of transparency and documentation on governance accountability
Argument 4
Implementation challenges in rural India with AI include infrastructure, training, dialect diversity, and connectivity issues
EXPLANATION
The moderator identifies key operational challenges that must be addressed when implementing AI solutions in rural governance contexts. These challenges span technical infrastructure limitations, the need for capacity building and training, the complexity of managing multiple dialects and languages, and poor connectivity in rural areas.
EVIDENCE
Mention of infrastructure, training, dialect diversity, and connectivity as specific challenge areas
MAJOR DISCUSSION POINT
Operational barriers to AI implementation in rural governance settings
Argument 5
Open architecture is critical for ensuring long-term sustainability and avoiding vendor lock-in for ministries delivering last mile services
EXPLANATION
The moderator emphasizes the importance of open architecture principles for government systems, particularly for ministries that deliver services directly to citizens. Open architecture ensures that systems remain flexible, sustainable over time, and prevent dependence on single vendors that could limit future options or increase costs.
EVIDENCE
Reference to Ministry of Rural Development and Ministry of Agriculture and Farmers Welfare as examples of ministries delivering last mile services
MAJOR DISCUSSION POINT
Strategic technology architecture decisions for sustainable government systems
Argument 6
India can lead the world in population-scale multilingual AI for governance by balancing scale with accountability and public trust
EXPLANATION
The moderator suggests that India has the potential to become a global leader in implementing AI solutions for governance at unprecedented scale, particularly in multilingual contexts. However, this leadership requires successfully balancing the benefits of large-scale implementation with maintaining accountability mechanisms and building public trust in AI systems.
EVIDENCE
Reference to India building public digital infrastructure for AI at scale
MAJOR DISCUSSION POINT
India’s potential for global leadership in population-scale AI governance
AGREED WITH
Amit Kumar
Argument 7
AI built on public stack and powered by language inclusion can become the strongest enabler of participatory governance in the 21st century
EXPLANATION
The moderator concludes that if Panchayati Raj institutions are truly the foundation of democracy, then AI systems built on public infrastructure and designed with language inclusion can fundamentally transform participatory governance. This represents a vision where technology serves to strengthen democratic participation rather than replace it.
EVIDENCE
Reference to Panchayati Raj institutions as the foundation of democracy
MAJOR DISCUSSION POINT
AI as a transformative enabler of democratic participation and governance
Agreements
Agreement Points
Language barriers are a critical obstacle to inclusive governance and citizen participation
Speakers: Shri Alok Prem Nagar, Moderator
Language barriers prevented rural citizens from understanding governance processes, leading to the adoption of Bhashini for multilingual support
Language AI is critical for ensuring digital governance platforms are inclusive and participatory, increasing citizen trust and participation in Gram Sabhas
Both speakers agree that language barriers prevent meaningful citizen participation in governance and that multilingual AI solutions like Bhashini are essential for making digital governance platforms truly inclusive and accessible to rural populations.
AI implementation must be inclusive and serve rural populations, not just urban elites
Speakers: Shri Alok Prem Nagar, Amit Kumar
Language barriers prevented rural citizens from understanding governance processes, leading to the adoption of Bhashini for multilingual support
AI should not be limited to urban and commercial sectors but must include 900+ million village residents to avoid leaving them behind
Both speakers emphasize that AI development cannot be confined to urban areas and must actively include rural populations. They agree that technology solutions must address the needs of India’s vast rural population to prevent digital exclusion.
Frugal, practical approaches using existing infrastructure are key to successful rural AI implementation
Speakers: Shri Alok Prem Nagar, Amit Kumar
Connectivity issues in villages are addressed by allowing offline recording and later upload to Sabha Sar system
Frugal approach using existing mobile phones for recording without requiring additional investment from gram panchayats
Both speakers advocate for cost-effective implementation strategies that leverage existing resources like mobile phones and work around infrastructure limitations rather than requiring significant new investments from rural institutions.
Technology should address genuine problems rather than being technology-first
Speakers: Shri Alok Prem Nagar, Amit Kumar
Rural communities are receptive to AI-enabled systems when tools address their specific needs and are user-friendly
Technology should be secondary to clear problem definition and understanding of what needs to be achieved at scale
Both speakers agree that successful AI implementation requires understanding and solving real problems first, with technology serving as a means to address specific needs rather than being an end in itself.
Structured documentation improves transparency and accountability in governance
Speakers: Amit Kumar, Moderator
Structured documentation through Sabha Sar changes governance behavior by improving accountability, transparency, and public participation
Structured documentation through Sabha Sar can improve transparency, participation tracking, and monitoring of meeting frequency and agenda quality
Both speakers recognize that implementing structured documentation systems fundamentally changes governance dynamics by making processes more transparent, accountable, and enabling better tracking of participation and decision-making.
India has the potential to lead in population-scale AI governance
Speakers: Amit Kumar, Moderator
India’s experience with population-scale digital infrastructure (Aadhaar, UPI, GST) provides confidence for AI implementation at unprecedented scale
India can lead the world in population-scale multilingual AI for governance by balancing scale with accountability and public trust
Both speakers express confidence in India’s ability to become a global leader in AI governance implementation, citing the country’s proven track record with large-scale digital systems and its unique position to demonstrate population-scale multilingual AI solutions.
Similar Viewpoints
Large-scale digital transformation in rural areas is achievable when solutions address the needs of all stakeholders and demonstrate clear value propositions, as evidenced by successful implementations across hundreds of thousands of panchayats.
Speakers: Shri Alok Prem Nagar, Amit Kumar
Despite infrastructure and connectivity challenges, Uttar Pradesh successfully onboarded all 59,000 gram panchayats to eGram Swaraj in 40 days
Success requires meeting stakeholders halfway – addressing both ministry’s accountability needs and panchayats’ operational requirements
AI-powered language solutions fundamentally democratize access to governance information by eliminating the need for educated intermediaries and enabling direct citizen engagement regardless of location or education level.
Speakers: Shri Alok Prem Nagar, Amit Kumar
Language AI enables citizens to access governance information in their local languages, allowing diaspora to monitor their village panchayats remotely
AI democratizes governance by making information accessible to all village residents, not just educated intermediaries
Government systems must be built on open architecture principles to ensure flexibility, sustainability, and independence from vendor dependencies, particularly for systems serving large populations at scale.
Speakers: Amit Kumar, Moderator
Open architecture and sovereignty are critical for long-term sustainability and avoiding vendor lock-in while maintaining data residency
Open architecture is critical for ensuring long-term sustainability and avoiding vendor lock-in for ministries delivering last mile services
Unexpected Consensus
Rural communities’ readiness for AI adoption
Speakers: Shri Alok Prem Nagar, Amit Kumar
Rural communities are receptive to AI-enabled systems when tools address their specific needs and are user-friendly
AI democratizes governance by making information accessible to all village residents, not just educated intermediaries
There is unexpected consensus that rural communities are not resistant to AI technology but are actually quite receptive when solutions are designed appropriately. This challenges common assumptions about technology adoption in rural areas and suggests that the barrier is not rural resistance but rather the design and relevance of solutions.
Scale as an advantage rather than a challenge
Speakers: Shri Alok Prem Nagar, Amit Kumar
Despite infrastructure and connectivity challenges, Uttar Pradesh successfully onboarded all 59,000 gram panchayats to eGram Swaraj in 40 days
India’s experience with population-scale digital infrastructure (Aadhaar, UPI, GST) provides confidence for AI implementation at unprecedented scale
Both speakers view India’s massive scale as an advantage for AI implementation rather than a challenge, suggesting that large-scale deployment creates momentum and demonstrates feasibility. This is unexpected as scale is typically viewed as a complicating factor in technology implementation.
Overall Assessment

There is strong consensus among all speakers on the fundamental principles of inclusive AI governance: the critical importance of language accessibility, the need for frugal and practical implementation approaches, the value of addressing real problems over technology showcase, and India’s potential for global leadership in population-scale AI governance. The speakers demonstrate alignment on both strategic vision and practical implementation approaches.

High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications are significant as this alignment suggests a clear path forward for AI implementation in rural governance, with shared understanding of both challenges and solutions. This consensus provides a strong foundation for scaling AI governance solutions across India’s vast rural landscape.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion shows remarkable consensus among all speakers on the value and implementation of AI in rural governance, with no significant disagreements identified

Very low disagreement level. All speakers are aligned on the benefits of AI for rural governance, the importance of language inclusion, and the success of current implementations. The speakers complement each other’s perspectives rather than contradict them, with Nagar providing operational insights, Kumar offering strategic and technical perspectives, and the Moderator facilitating discussion around key themes. This high level of agreement suggests strong institutional alignment and shared vision for AI-enabled governance transformation.

Partial Agreements
Both speakers agree that successful AI implementation requires addressing genuine user needs, but they emphasize different aspects – Nagar focuses on making tools simple and user-friendly while Kumar emphasizes the need to balance different stakeholder requirements
Speakers: Shri Alok Prem Nagar, Amit Kumar
Rural communities are receptive to AI-enabled systems when tools address their specific needs and are user-friendly
Success requires meeting stakeholders halfway – addressing both ministry’s accountability needs and panchayats’ operational requirements
Both agree on the need for practical solutions to rural infrastructure challenges, but Nagar emphasizes technical workarounds for connectivity while Kumar focuses on cost-effective implementation using existing resources
Speakers: Shri Alok Prem Nagar, Amit Kumar
Connectivity issues in villages are addressed by allowing offline recording and later upload to Sabha Sar system
Frugal approach using existing mobile phones for recording without requiring additional investment from gram panchayats
Both agree on the importance of open architecture for sustainability, but Kumar emphasizes sovereignty and geopolitical considerations while the Moderator focuses on practical vendor management for service delivery
Speakers: Amit Kumar, Moderator
Open architecture and sovereignty are critical for long-term sustainability and avoiding vendor lock-in while maintaining data residency
Open architecture is critical for ensuring long-term sustainability and avoiding vendor lock-in for ministries delivering last mile services
Takeaways
Key takeaways
AI-powered language tools like Bhashini have successfully democratized rural governance by enabling citizens to access information in local languages, with over 115,000 gram sabha meetings processed through Sabha Sar
Frugal innovation approach using existing mobile phones has enabled rapid adoption – UP onboarded 59,000 gram panchayats in 40 days without requiring additional infrastructure investment
AI implementation in rural governance requires balancing automation with human oversight, avoiding both complete autonomy and excessive manual intervention
India’s experience with population-scale digital infrastructure (Aadhaar, UPI, GST) provides the foundation and confidence for implementing AI at unprecedented scale in rural areas
Language AI transforms governance from elite-mediated to direct citizen participation, allowing diaspora to monitor village panchayats and enabling systematic record-keeping and accountability
Success depends on clearly defined problems and understanding which AI tools can address specific governance challenges rather than technology-first approaches
Open architecture and data sovereignty are critical for long-term sustainability and avoiding vendor lock-in while maintaining control over public systems
Resolutions and action items
Expansion of Bhashini to support 11 additional languages including Assamese, Bodo, Maithili, and Santali to address current language gaps
Development of next-generation AI tools for service delivery that can automatically assign citizen issues to appropriate departments and track resolution
Integration of spatial development plans with AI visualization tools to help citizens understand future development impacts
Expansion of Sabha Sar adoption to more states beyond current leaders (Odisha, Tamil Nadu, Tripura)
Development of second-stage tools by adopting states to convert meeting minutes into activity tracking and follow-up systems
Collaboration with the Department of Drinking Water and Sanitation to extend Bhashini usage to village water committee meetings
Implementation of capacity building training programs to intensify AI tool adoption across panchayats
Unresolved issues
How to address the significant number of panchayats whose languages are not yet supported by Bhashini, limiting their access to AI-enabled governance tools
Balancing the need for human oversight with AI automation efficiency – determining optimal levels of human-in-the-loop intervention
Managing the complexity of multiple AI use cases as departments become fully AI-enabled and scale from single to multiple applications
Ensuring consistent adoption across all states, as some may be slower to embrace AI-enabled governance tools compared to early adopters
Addressing potential resistance to cultural change in meeting documentation and governance processes
Determining how to maintain system performance and accuracy as AI tools scale to cover all 250,000+ gram panchayats nationwide
Suggested compromises
Meeting stakeholders halfway by addressing both ministry accountability needs and panchayat operational requirements in system design
Using offline recording capabilities to address connectivity issues while maintaining AI processing benefits
Implementing human-in-the-loop systems that allow for corrections and improvements without requiring manual approval of every transaction
Adopting a platform approach for future AI implementations that can accommodate multiple use cases while maintaining open architecture
Allowing states to provide language expertise to Bhashini for training bots in currently unsupported languages rather than waiting for central development
Thought Provoking Comments
I happened to attend a Gram Sabha in the state of Karnataka. I was there for something like 45 minutes, and I was felicitated and sat on stage, and I didn’t understand a thing. And then it struck me: how do you expect these people really to relate to what is happening? Because it is public money.
This personal anecdote reveals a profound moment of realization about the fundamental disconnect between governance systems and the people they serve. It highlights how language barriers create exclusion in democratic processes, making this a powerful catalyst for understanding the real problem AI needed to solve.
This comment established the foundational problem that drove the entire AI initiative. It shifted the discussion from technical capabilities to human-centered governance, setting the stage for explaining why Bhashini integration was not just innovative but necessary for democratic participation.
Speaker: Shri Alok Prem Nagar
So AI is all about the idea and the use case, right? If you have the right idea, you can do wonders. But you have to have the idea and, you know, the muscle to execute it.
This comment cuts through the AI hype to focus on what really matters – having clear problems to solve and the capability to implement solutions. It reframes AI from a technology-first to a problem-first approach, which is crucial for sustainable governance applications.
This insight redirected the conversation from celebrating technical achievements to understanding the strategic thinking behind successful AI implementation. It influenced subsequent discussions about scalability and replicability across other ministries.
Speaker: Amit Kumar
So if you look at the frugality of the situation: in India, people generally talk about either living in a bullock-cart stage or aspiring for the bullet train. The point is, if AI has to tell us how we will learn in the future, how we will transform, then we cannot leave out the 900-plus million people who are living in villages.
This comment challenges the urban-centric view of AI development and makes a compelling case for inclusive technology. The ‘bullock cart to bullet train’ metaphor powerfully illustrates India’s development paradox and the moral imperative to ensure AI benefits reach rural populations.
This comment elevated the discussion from operational details to philosophical questions about equitable development. It reframed AI implementation as a matter of social justice rather than just efficiency, influencing how the conversation addressed scalability and inclusiveness.
Speaker: Amit Kumar
Can you imagine? Uttar Pradesh did it in 40 days flat, all 59,000 gram panchayats. So my point was that you have to be ready with a product that addresses their needs and is friendly. Of course, my need was that I needed the money well accounted for, and their need was a system that could make it very easy for them to do it. So we met halfway.
This example demonstrates that scale is achievable when solutions genuinely address user needs. The ‘meeting halfway’ concept reveals sophisticated understanding of stakeholder alignment – recognizing that successful implementation requires balancing institutional accountability needs with user convenience.
This comment shifted the discussion from theoretical possibilities to proven scalability. It provided concrete evidence that rural India can rapidly adopt technology when it solves real problems, influencing the conversation about implementation challenges and future expansion possibilities.
Speaker: Shri Alok Prem Nagar
So the idea is that in spite of any kind of geopolitical risk, we should survive. Our system should run, right? People generally confuse sovereignty with making everything locally in India, et cetera. That’s not the case, right? We will always have some technology from outside. But we have to design it in a way that it is ready to shift, right?
This comment provides nuanced thinking about technological sovereignty, distinguishing between self-reliance and isolationism. It shows sophisticated understanding of how to balance global technology integration with national resilience, which is crucial for long-term AI strategy.
This insight introduced strategic depth to the conversation, moving beyond immediate implementation to long-term sustainability concerns. It influenced discussions about open architecture and positioned India’s AI development within broader geopolitical contexts.
Speaker: Amit Kumar
So like Sir said, you know, Sir is not an AI person, neither am I… So if you look at it, the idea of Panchayati Raj itself is participative governance, right? That people have to assemble in the Gram Sabha and decide on the money which they’re getting, how to spend and prioritize. And if AI tools like Praman and Sabha Sar and, you know, Pancham can help strengthen that, what better, you know, can you expect from a participative government?
This comment reveals that the most successful AI implementations come from domain experts who understand problems deeply, not AI specialists. It connects AI tools directly to democratic principles, showing how technology can strengthen rather than replace human-centered governance processes.
This comment provided a powerful conclusion by reframing the entire discussion – positioning AI as a servant of democracy rather than a replacement for human judgment. It reinforced the human-centered approach throughout the conversation and validated the ministry’s problem-first methodology.
Speaker: Amit Kumar
Overall Assessment

These key comments shaped the discussion by consistently grounding technical achievements in human-centered governance principles. The conversation evolved from showcasing AI capabilities to demonstrating how technology can strengthen democratic participation. The personal anecdote about language barriers established emotional resonance, while insights about frugal innovation and inclusive development provided philosophical depth. Comments about sovereignty and scalability added strategic dimension, while the acknowledgment that successful AI comes from domain expertise rather than technical specialization provided a humble yet confident conclusion. Together, these comments created a narrative that positions India’s rural AI initiatives not just as technological achievements, but as democratic innovations that could serve as a model for inclusive governance worldwide.

Follow-up Questions
How can AI tools be integrated with other ministries like Department of Drinking Water and Sanitation for their village water committee meetings?
This represents a concrete opportunity for cross-ministry collaboration and scaling AI solutions beyond Panchayati Raj to other rural governance areas
Speaker: Shri Alok Prem Nagar
How can Bhashini support the 11 additional languages (including Assamese, Bodo, Maithili, and Santali) that are currently not available on the platform?
This addresses a significant limitation in reaching all rural populations and requires coordination with states to provide necessary linguistic expertise for training AI models
Speaker: Shri Alok Prem Nagar
How can AI be used to automatically populate meeting agendas based on previous meeting commitments and follow-ups?
This represents the next evolution of Sabha Sar to create more systematic accountability and tracking of governance commitments
Speaker: Amit Kumar
How can image recognition AI be integrated with the Meri Panchayat mobile interface to automatically categorize and assign reported issues to appropriate departments?
This would automate service delivery and issue resolution processes, building on the Guwahati pilot example mentioned
Speaker: Shri Alok Prem Nagar
How can AI tools be used to create quick audio-video messages for the Pancham WhatsApp-based chatbot platform?
This would enhance communication effectiveness with sarpanchas and panchayat secretaries across the country
Speaker: Shri Alok Prem Nagar
How can AI be leveraged to create better visualizations for spatial development plans to improve citizen understanding and buy-in?
This addresses the challenge of making complex planning documents accessible and understandable to rural populations who will be affected by them
Speaker: Shri Alok Prem Nagar
What is the optimal balance between autonomous AI decision-making and human oversight in governance applications?
This is critical for maintaining accountability while achieving efficiency gains from AI implementation in public sector
Speaker: Amit Kumar
How can states that have adopted Sabha Sar (like Odisha, Tamil Nadu, and Tripura) develop second-stage tools for tracking post-meeting activities and commitments?
This represents the evolution from documentation to actionable governance and accountability systems
Speaker: Shri Alok Prem Nagar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Welfare for All: Ensuring Equitable AI in the World’s Democracies


Session at a glance: Summary, key points, and speakers overview

Summary

This panel discussion focused on democratizing AI’s impact globally and preventing the concentration of AI’s economic value primarily in Western economies and China, where estimates suggest 70% of AI value could reside. The conversation emphasized that avoiding this outcome requires intentional design, international collaboration, innovation, workforce development, and building trust and security in AI systems.


The panelists discussed the importance of developing international AI safety standards while recognizing the need to customize these standards for different cultures, languages, and local constraints. They highlighted the tension between creating universal standards that enable cross-border technology flow and adapting them to local needs, with examples like Google’s IndicGenBench supporting 29 Indian languages. The discussion emphasized moving from traditional technology transfer approaches to co-creation models where developers and governments collaborate as partners.


A significant portion of the conversation addressed the persistent AI skills gap and various approaches to bridge it. Microsoft announced commitments to upskill 20 million Indians by 2030, while L&T Technology Services shared their strategy of reaching out to colleges, upskilling current employees during billable hours, and encouraging personal technology development time. The panelists agreed that traditional workforce displacement approaches don’t work in developing economies, requiring more nuanced upskilling strategies.


Security and trust emerged as critical concerns, with discussions about AI-specific cyber threats like prompt injection attacks and the need for multilingual security capabilities. The conversation concluded with reflections on India’s evolution from being viewed as a “back office” to becoming a “front office” for global AI development, emphasizing grassroots impact and the integration of governance with innovation rather than treating them as competing priorities.


Keypoints

Major Discussion Points:

International AI Standards and Collaboration: The need for global cooperation in developing AI safety standards while allowing for local customization based on cultural, linguistic, and economic differences. Discussion emphasized that standards should be enablers rather than barriers, with examples like Google’s IndicGenBench for Indian languages.


Skills Gap and Workforce Development: Addressing the persistent AI skills shortage through public-private partnerships, with specific focus on upskilling existing workers rather than replacement. Companies shared strategies including curriculum partnerships with colleges, continuous employee training during billable hours, and incentivizing technology development through patents and recognition.


Democratizing AI Access and Preventing Digital Divide: Concerns about AI’s economic value concentrating in Western economies and China, with discussion of intentional efforts needed to ensure broader global participation. Microsoft’s commitment to training 20 million Indians by 2030 and infrastructure investments were highlighted as examples.


AI Security and Trust Building: Growing cybersecurity threats specific to AI, including prompt injection attacks and multilingual vulnerabilities. Discussion covered the need for “self-defending systems” and AI-versus-AI security approaches, while addressing public trust deficits in AI applications.


India’s Evolving Role in Global AI: Recognition of India’s transformation from a “back office” to a “front office” for AI development, with emphasis on grassroots impact and local innovation rather than just cost-based services.


Overall Purpose:

The discussion aimed to explore strategies for democratizing AI’s benefits globally, with particular focus on preventing the concentration of AI’s economic value in developed nations and ensuring developing countries, especially India, can participate meaningfully in the AI revolution through international collaboration, skills development, and responsible deployment.


Overall Tone:

The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutions and success stories. While acknowledging significant challenges like the digital divide, skills gaps, and security concerns, speakers consistently emphasized opportunities for partnership and positive outcomes. The tone became particularly enthusiastic when discussing India’s potential and achievements, ending on a note of genuine optimism about AI’s democratization potential despite the challenges ahead.


Speakers

Speakers from the provided list:


Brad Staples – Panel moderator/host


Amit Chadha – Managing Director and CEO of L&T Technology Services


Amanda Craig Deckard – Senior Director, Office of Responsible AI at Microsoft


Sachin Kakkar – India Site Development, Privacy, Safety and Security at Google


Lee Tiedrich – Inaugural AI Multidisciplinary Initiative Fellow, University of Maryland, Senior Advisor on the International AI Safety Report


Julian Waits – Chief Experience Officer with Rapid7


Audience – Various audience members asking questions


Additional speakers:


– None identified beyond the provided speaker names list


Full session report: Comprehensive analysis and detailed insights

This panel discussion at an AI summit in India addressed the critical challenge of democratising artificial intelligence’s global impact and preventing the concentration of AI’s economic value in developed nations. The conversation brought together diverse stakeholders including technology executives, government advisors, and academic researchers to explore practical strategies for ensuring AI benefits reach developing countries and grassroots communities.


The Challenge of AI Economic Concentration

The discussion opened with moderator Brad Staples highlighting concerning trends in AI development. Some estimates suggest that 70% of AI’s economic value risks being concentrated in Western economies and China if present trends continue unchecked. However, Staples emphasised that this concentration is not inevitable; avoiding it requires intentional design, international collaboration, and societies coming together. The panel stressed that democratising AI’s impact requires deliberate efforts across multiple dimensions including international cooperation, innovation and research, workforce development, and establishing trust and security frameworks.


International Standards and Local Adaptation

A significant portion of the discussion focused on developing international AI safety standards whilst recognising the critical need for local customisation. Lee Tiedrich, an Inaugural AI Multidisciplinary Initiative Fellow who worked with approximately 100 experts on the second International AI Safety Report, highlighted both progress and persistent gaps in AI evaluation and evidence development. Whilst organisations like ISO have released initial standards such as ISO/IEC 42001, the pace of development needs acceleration.


Sachin Kakkar from Google illustrated the localisation challenge through the company’s IndicGenBench initiative, which supports 29 Indian languages, 12 scripts, and 4 language families for fine-tuning and assessing large language models. He emphasised that “copy pasting regulations from international markets to local markets may not always work,” highlighting the need for standards that accommodate different cultures, languages, and local constraints.


Both speakers agreed that effective AI governance requires moving beyond traditional technology transfer approaches toward co-creation models where developers and governments collaborate as genuine partners. Google’s Coalition for Secure AI (CoSAI), which the company is expanding across the Asia-Pacific region, exemplifies this collaborative approach.


Workforce Development and Skills Revolution

The AI skills gap emerged as one of the most pressing challenges. Amit Chadha from L&T Technology Services provided a stark assessment: 40-50% of current engineering consulting work has emerged in just the past five years, whilst 60% of today’s work will become obsolete within the next three to five years.


Microsoft’s Amanda Craig-Deckard, Senior Director of the Office of Responsible AI, outlined their comprehensive approach through the Elevate initiative. The company has committed to upskilling 20 million Indians by 2030, having successfully trained 5.6 million people in the past year. Their “Elevate for Educators” programme works with Indian government ministries, schools, vocational institutes, and higher education institutions to achieve scale through educational multiplication effects.


L&T Technology Services has developed a three-pronged workforce development strategy: engaging with colleges during students’ final year to ensure relevant curricula; upskilling existing employees during billable hours rather than waiting for non-billable periods; and tracking personal technology development time beyond work hours. This approach has yielded measurable results, with the percentage of L&T’s workforce spending personal time on technology development increasing from 19% to 52% over five years, whilst annual patent filings have grown from 50 to 200.


Julian Waits from Rapid7 noted the unprecedented pace of change, acknowledging that skills considered essential today may become obsolete within five years. The panel consensus favoured incentive-based approaches over mandates, focusing on making AI tools immediately useful rather than imposing top-down requirements.


Security, Trust, and Multilingual Vulnerabilities

The security discussion revealed sophisticated understanding of emerging threats and defence strategies. Waits noted that AI could potentially eliminate 60% of current human security tasks, though audience members challenged whether this transition would be manageable given the exponential pace of change.


Kakkar introduced the concept of “self-defending systems” that could reverse the traditional defender’s dilemma in cybersecurity, where attackers need only find one vulnerability whilst defenders must protect all potential attack vectors. AI offers the potential to automate routine defensive work, potentially providing the first aggregate advantage to defenders in cybersecurity history.


Amanda Craig-Deckard highlighted a particularly sophisticated challenge: multilingual AI vulnerabilities. AI systems that perform well in high-resource languages but poorly in low-resource languages create exploitable weaknesses. Attackers can use prompt injection techniques in languages like Tamil to “jailbreak” safety systems and circumvent security measures, connecting digital inclusion directly to cybersecurity.


To address these challenges, Microsoft has collaborated with ML Commons to develop jailbreak benchmarks that include multiple Indic and Asian languages. Google has contributed tools like SynthID, a watermark technique for AI-generated content across text, images, video, and audio.
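SynthID’s actual mechanism is proprietary and not detailed in the session, but the general idea behind statistical text watermarking, as described in published research, can be sketched with a toy keyed “green list” scheme. Every name, number, and parameter below is illustrative, not Google’s implementation:

```python
import hashlib
import random

VOCAB_SIZE = 1000  # toy vocabulary of integer token IDs

def green_set(key: str, position: int) -> set:
    # Derive a pseudo-random "green list" (half the vocabulary) from a
    # secret key and the token position.
    seed = int.from_bytes(
        hashlib.sha256(f"{key}:{position}".encode()).digest()[:8], "big"
    )
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), VOCAB_SIZE // 2))

def generate_marked(key: str, length: int, bias: float = 0.9) -> list:
    # Stand-in for a language model: at each step, prefer a green-list
    # token with probability `bias`, otherwise sample the full vocabulary.
    rng = random.Random(0)
    tokens = []
    for pos in range(length):
        green = sorted(green_set(key, pos))
        if rng.random() < bias:
            tokens.append(rng.choice(green))
        else:
            tokens.append(rng.randrange(VOCAB_SIZE))
    return tokens

def detect(key: str, tokens: list, threshold: float = 0.7) -> bool:
    # Unwatermarked text hits the green list ~50% of the time; a
    # significantly higher fraction signals the watermark.
    hits = sum(tok in green_set(key, pos) for pos, tok in enumerate(tokens))
    return hits / len(tokens) >= threshold
```

A production system operates on model logits at generation time and must remain detectable after paraphrasing or light editing; this toy only shows why watermark detection is a statistical test against a secret key rather than a simple lookup.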


India’s Grassroots-Focused Approach

A recurring theme was India’s evolution from a “back office” to a “front office” for global AI development. Chadha traced this transformation from initial perceptions in the 1990s through data security concerns definitively addressed during COVID-19, to the current reality of global companies developing products in India for worldwide markets.


Kakkar emphasised that whilst some regions focus on AI governance frameworks, India concentrates on AI’s practical impact for farmers, small schools, NGOs, and local hospitals. This grassroots approach aligns with India’s digital public infrastructure philosophy, exemplified by systems like Aadhaar and UPI that achieved massive scale through practical utility.


Rather than viewing challenges like bandwidth constraints and linguistic diversity as obstacles, Indian AI development treats them as design parameters that can inform more inclusive global solutions.


Addressing Infrastructure Challenges

Despite the optimistic tone, participants acknowledged persistent challenges. An audience member, Rita Soni from the Digital Empowerment Foundation, highlighted the gap between high-level AI discussions and basic connectivity challenges in rural areas, reminding the panel that AI democratisation must address fundamental infrastructure deficits.


Lee Tiedrich raised another challenge: the lack of data standardisation and voluntary sharing frameworks necessary for AI customisation across different regions. Data exchange faces significant friction due to incompatible formats and absence of standard agreements.


Future Challenges and Exponential Change

The discussion concluded with sobering reflections on AI’s exponential pace of development. Audience members challenged the panel’s assumptions about manageable transitions, arguing that rapid economic displacement and power polarisation may outpace adaptation efforts.


Tiedrich emphasised the importance of teaching students “how to think” rather than specific skills, recognising that adaptability and problem-solving capabilities will be more valuable than domain-specific knowledge in a rapidly changing technological landscape.


Conclusions

The panel revealed both significant progress and persistent challenges in democratising AI’s global impact. There was strong consensus on key principles: the need for localised rather than universal AI standards, the importance of public-private partnerships, preference for incentive-based approaches to AI adoption, and recognition that AI security requires proactive, AI-powered defence systems.


However, the conversation highlighted unresolved tensions between the speed of AI development and institutional adaptation. The discussion positioned India as a potential model for AI democratisation through its focus on grassroots impact and inclusive development, though success depends on addressing fundamental infrastructure challenges and ensuring benefits reach beyond urban technology hubs.


Ultimately, democratising AI requires not just technical solutions but fundamental changes in international collaboration, workforce development, and technology governance approaches. The urgency of implementing these changes may be greater than many participants acknowledged, making immediate action essential for achieving an inclusive AI future.


Session transcript: Complete transcript of the session
Brad Staples

by corporations, by innovators to secure that outcome. And if current trends continue, the majority of AI’s economic value risks being centered in the hands of countries and corporations in the Western economies and China. And some estimates suggest that 70% of the value could be created and reside in those locations. And I think it’s for us in this context to think a bit about why we don’t need to accept that outcome. It’s by no means an inevitability. And to democratize the impact of AI, it requires intentional design, it takes international collaboration, and it takes societies coming together to ensure that doesn’t happen. It also takes innovation and research, workforce development, private sector partnerships, and also trust, safety, and security.

And they’re the things we’re going to talk about on the panel today. And my colleagues are extremely well-placed to share their thoughts and insights on those topics. So let me introduce the panel. We have Amit Chadha, Managing Director and CEO of L&T Technology Services. Good to see you, Amit.

Amit Chadha

Happy to be here.

Brad Staples

Great to have you with us. Amanda Craig Deckard, Senior Director, Office of Responsible AI at Microsoft. Great to have you with us. Sachin Kakkar from India Site Development, Privacy, Safety and Security at Google. Good to have you with us, Sachin. Thank you for being with us. Lee Tiedrich, Inaugural AI Multidisciplinary Initiative Fellow, University of Maryland, Senior Advisor on the International AI Safety Report. Lee, good to have you with us. And last but by no means least, Julian Waits, Chief Experience Officer with Rapid7. Good to have you with us. Okay. So without further ado, let’s take a look at international and scientific research collaboration. And, Lee, let me come to you.

Let me pose a question. Lee, the second international AI safety report was released just ahead of this conference, something that you’re very much an author of. Let’s start by hearing from you and then maybe, Sachin, I’ll bring you in. What opportunities do you see, Lee, in open international standards to address the technical challenges that we face while also building trust in AI-based systems and services? How would you characterize those challenges, and which are most critical in a developing country context?

Lee Tiedrich

Yeah, thanks for the question, Brad, and there’s a lot here. So the international AI safety report that I worked on with a panel of about 100 experts was just released. And one of the key takeaways from the report is that while we have made a lot of progress over the past year in evaluations and developing evidence, there’s still a long way to go. There’s a gap. And I think international standards organizations and similar efforts are a good way to work together to try to fill some of the gaps. ISO has already released one standard, ISO/IEC 42001, which is a good start, but we need to accelerate this, and we need to also recognize the fact that with standards and evaluation metrics, there’s a tension.

On the one hand, we want them to be able to apply across borders because we want to enable companies to have responsible technology flow across borders. But on the other hand, because we all differ in terms of language and culture, it’s really important that we be able to customize them for different cultures, norms, and languages. And I think the standards organizations will continue to play an important role. I spent a year working at NIST, the U.S. National Institute of Standards and Technology. One of the NIST projects is working on what we call the zero draft, trying to create a draft that we could then feed into the ISO process, and NIST is trying to collect stakeholder input into that draft.

And I think, more globally, with efforts like the Hiroshima AI process, there are sort of all these pre-standards efforts where different stakeholders across different regions can work together. And I think the AISIs, the AI safety institutes across different countries, can coordinate as well. So I think there’s a lot of work to be done, but I think there’s a lot of avenues where we can collaborate together and make sure that we’re addressing the needs of everybody around the globe. Thank you.

Sachin Kakkar

Yeah, thanks. Very well covered. If I can add just a few more points. I think one of the challenges we see is that copy pasting the regulations or standards from, you know, international markets to local markets may not always work. So localizing them, understanding the needs and constraints of the local area, matters. Google launched IndicGenBench, a test bench for fine-tuning and assessing LLM models for local languages, supporting 29 Indian languages, 12 scripts, and 4 language families. So that shows an example of how we need to localize things. The second point is that a one-time audit or certification may not work as AI evolves. We need continuous scanning and auditing to make sure we avoid any temporal drift in these standards and the applications.
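The continuous-auditing point can be made concrete with a minimal sketch; the function name, threshold, and scores here are illustrative, not any particular product’s logic:

```python
def detect_drift(baseline_score: float, recent_scores: list,
                 tolerance: float = 0.05) -> bool:
    # Flag temporal drift when the average of recent evaluation runs
    # falls more than `tolerance` below the score certified at audit time.
    rolling_avg = sum(recent_scores) / len(recent_scores)
    return baseline_score - rolling_avg > tolerance
```

In practice something like this would run on a schedule against a held-out safety or quality benchmark, turning a one-time certification into a standing check.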

Brad Staples

So Sachin, let’s build on that. How do governments and developers collaborate in a way that we get the outcome that everyone desires, which is not to see the developed markets race ahead of developing countries? What does that collaboration need to look like?

Sachin Kakkar

Yeah, that’s an interesting question. I think at the highest level, the way we think to bridge the AI divide is to move away from a traditional technology transfer approach to more co-creation, where developers and government come together. And the underlying goal is that standards and regulations are seen as enablers and equalizers, not as barriers or compliance hurdles. There are three specific dimensions in which we believe developers and government can collaborate, and Google specifically focuses on: number one, open source frameworks, interoperability, and standards; second, capacity building; and third, workforce upskilling and research. I’ll quickly unpack each one of them. Starting with open source frameworks: AI is not new to Google. We have been working on AI for the past decades; remember AlphaFold, and we were the first to share the transformer paper on which all the LLMs are built. When we were building AI, we were also focusing on best AI practices and safety practices for AI.

Safe SAIF, secure AI framework is something we have shared outside. And it is important to understand supply chain risk. And India’s digital transformation is characterized by DPI, the digital public infrastructure on which Aadhaar and UPI are built. So they can actually leverage some of these secure AI framework to make sure the malware attacks and the vulnerability in open source components are taken care. Now, standards is one thing. The collaboration goes beyond to adoption of them. And Google has co -built the COSI, Coalition of Secure AI Framework, with various industry partners. And this is what we are expanding in APAC, including India. Now, we are also committed to capacity building. With the government. And which means we need to provide tools and infrastructure, not just standards.

So we are proactively sharing threat intelligence. We are building tools like SynthID and sharing them with the community. SynthID is a watermark technique which goes into text, image, video, and audio, and it can tell you whether something is AI-generated content. So some of these tools are also helping us to make sure our commitment towards standards goes into actual adoption. And finally, upskilling the workforce and digital literacy: working with government to make sure the vulnerable sections of society, like the elderly and teenagers, are aware of some of these challenges. And giving grants to institutes like the IITs to push the frontier of research, like PQC, post-quantum cryptography, are other areas of collaboration between AI developers, the government, and academia.

Brad Staples

Let me just ask you both a question. Is there a trade-off between setting global standards and regulation and ensuring the right environment for innovation and collaboration?

Sachin Kakkar

Oh, yeah, that’s right. And that’s where you can start with the global regulations but then adapt them to the local constraints. Like we have bandwidth constraints in India. We have linguistic diversity. And therefore, the global standard should not become a hurdle for the young startups in India. Rather, they become co -creators in enabling the innovation that can happen and then evolve from there. So it’s a creative tension, and I think the best way is to be adaptive in this situation and eventually evolve to the international standard.

Brad Staples

How do you see this interplay, Lee?

Lee Tiedrich

Yeah, I think, I mean, kind of in my work, you know, both in government and academia, and I spent 30 years working with the private sector, I think sort of figuring out the standards that are in place and the evaluation techniques is really key. You know, how are we going to evaluate these systems so they can meet a certain threshold of safety? And then I think the question kind of comes in afterwards, once we know what it is: should there be regulation or not? You know, I worry a lot of times that when we go too quickly toward the regulation, the best of intentions may be there, but, you know, the technology is moving so quickly, regulators don’t necessarily know how to style the regulations to achieve the goal.

And I think sort of working from the bottom up with the science, developing the evaluation technique, taking into account that we do need to socialize, you know, customize for local markets is really important. And then we can get to the question of, well, should there be a regulation or not? And that’s where, you know, different countries may have different answers, but at least we’re working from a common technical framework and evaluation framework to assess systems. Thank you.

Brad Staples

Thank you both. Let’s shift the conversation towards more public-private collaboration, which I think we know is at the heart of driving the success that everybody’s looking for. And Sachin was talking a little bit about capacity building. Maybe we focus on those two elements. And Amanda, I’ll come to you and then to Amit. So there’s a persistent skills gap in AI. It’s very apparent, and a lot’s being done to try and bridge that here in this country. How has your organization, and I’ll come to you, Amit, with the same question, how are your organizations grappling with that challenge and also collaborating with government to help narrow that skills gap?

Amanda Craig Deckard

Thank you. Yes, the skills gap is really important. We see it as part of the sort of foundational infrastructure for what we need to work on together as Microsoft with other industry partners, government partners, other local partners. It’s going to take a whole community really working together to do this at scale. And just to take a step back for a moment briefly before I talk more specifically about skills, you know, we kind of see this as part of a holistic effort where you kind of need to support all of the enabling infrastructure for AI deployment, from the infrastructure layer all the way through sort of realizing value in local use cases. So we actually published on Wednesday a blog from our president, Brad Smith, and our chief responsible AI officer, Natasha Crampton, where we talk about sort of five areas where we’re really focused on investing to kind of close the gaps in AI diffusion between the global north and global south.

So we talk about, like, hard infrastructure investment, right, in terms of connectivity and AI compute capacity; scaling is the second part of that plan. And the third part is really thinking about multilingual, multicultural AI capability. And the fourth is really working with local partners on local AI deployment and really what we can learn and what’s going to serve local communities, also what we can learn through that process around how we need to adapt the technology so it’s ready for those local use cases. And then really measuring diffusion so that we actually understand how things are going.

And the fifth is really measuring diffusion, so that we actually have informed interventions. So that's the kind of holistic approach that we're thinking about for public-private partnership. Looking at skilling more specifically, we have a new initiative that we launched last July at Microsoft called Microsoft Elevate, which brings together a number of ways that we engage with the communities that are going to be part of skilling everyone at scale — nonprofit communities, schools — and ensures that they're equipped with the technology itself, with cloud compute access and with access to AI.

And we are coupling that with investments in skilling. So we have made some big-number commitments around how we are trying to do this at scale. Specifically for India: early last year, we made a commitment to skill 10 million Indians by 2030. This year we have already upskilled 5.6 million Indians, so we actually doubled that commitment to 20 million people by the end of 2030. One of the ways we're doing that is a new Elevate for Educators in India program, which we announced this week, where we're partnering with local schools, vocational institutes, and higher education institutions to teach the teachers, right? So you can actually work at scale. And we're working with a number of Indian government ministries in this program to figure out how we can ensure that we have tailored programs for all of those different communities and that we're thinking holistically about how.

Across those different educational paths, we are really meeting people where they are and equipping them to do the next powerful thing with AI.

Brad Staples

Thanks, Amanda. And as a business, L&T Technology Services — part of L&T, originating here in India, but now very much involved in global markets. How are you tackling this in terms of addressing the skills gap?

Amit Chadha

Sure. So thank you. Before I get to the skills gap, I do want to make a point on the regulation part. I do believe that too much regulation can stifle innovation as well, so we've got to be careful about how much we do and where we take it. And then the second part, of course, is to regulate traffic control in Delhi for our next event — I think all of us will agree. I had to say that because it was a mess in the last two days; I've got pictures of myself in an auto rickshaw as well. So if we get down to the skills gap, I want to address this three ways.

So: I run a company which is potentially India's first engineering intelligence company, with about 25,000 employees. I've been CEO for five years. When I took over, we were about 15,000 employees; we're about 25,000 now. So when we look at the skills gap, and I look at skill levels, there are three things you have to think about. Of whatever work we're doing in engineering consulting today, I want to say 40 to 50% is new — built in the last five years; it did not exist. I also want to say that of whatever we are doing today, 60% will be gone in about three to five years' time. That's the rate and pace of change. So, while my colleague from Microsoft spoke about skilling in schools, STEM, as well as colleges, we're doing two — or, rather, three — things to stay current with the changing dynamics.

One, we are reaching out to colleges in the last year of their curriculum, and we are making sure that the curriculum in India is contextual to what the industry needs. So we are sending our employees to teach; we are using CSR hours; we are doing all of that to build it up. We are also participating with NASSCOM on that skill development. The second thing we are doing is upskilling our own employees. Now, in a developed economy it's very simple — you hear about these layoffs that happen all the time, and they are not because people don't have work but because the skill is redundant.

So let’s go ahead and get a new set of skills. In an Indian context, my colleague here spoke about that very nicely. You can’t cut and paste. You fire a thousand people, you will actually end up spending half your working hours plus more with the labor commissioner here locally. You can’t do that. So you have to be able to skill people up while they are in the workforce. Now, one thing is developing curriculum, developing modules for them. to go through but the second part is actually making them do it so and normally in a consulting company you would send people to get get coached and do upskilling when they are not billable we actually doing it while they are billable because when they become non billable that’s not when you want it you want it before that right so that’s and it’s a major shift in how we’ve been operating the third thing that we are tracking as an engineering and a technology company is how much of personal time is the employee spending on technology development efforts beyond billing hours to the client so you come in and spend 40 hours right and that’s what you normally work now if you spend another three hours to write a technology paper you file a patent you actually go speak at a symposium all that is towards technology effort beyond billable hours.

Five years ago, the percentage of the workforce within the company that did that was 19%. Today, 52% of our workforce spends personal time on technology beyond billable hours. And the net result has been that we used to file 50 patents per year; we have gone to filing 200 patents per year. So, summarizing: one, reach out to the local ecosystem and spend the last year of the curriculum with them — that's the hook in. Second, upskill the workforce within. And third, beyond just money, find a bigger purpose — like technology, or the betterment of the human race through technology — to motivate your workforce to actually spend time on that.

And I think that’s what we’ve been doing and we think will be helpful. One last thing and we keep discussing India. But if I look at the US today, and I’ve lived there for 27 years now, is we will need schools to start mandating a certain level of STEM education that has to be done. Today, both my boys went to public schools in Virginia. I can tell you that in some schools, it’s broken. And we don’t do that in the US. We don’t do it in parts of Europe. We will continue to look at different countries for skills. And that is not where we want to be in 20 years time. I’m sorry. Jump in. Jump in, Julius.

Julian Waits

I was going to agree with what you just said, because Rapid7, like your company, is of course a software company. We've basically mandated the use of agentic technologies by our employees, especially the ones in developing countries, or countries that aren't as developed as the United States. What I would also tell you about the education system — which is unique to the US, and is what makes India special, and why we're in such a wonderful place with the technology — is that we're so far behind, we're forced to use labor from other societies that appreciate STEM and where it's embedded in the way that they learn. We have no choice. If we didn't have foreign workers in the U.S., we would fall behind the rest of the world.

You don’t hear that too often.

Brad Staples

Let me just probe a little bit on this. How much is carrot and how much is stick when you're looking to upskill the workforce and bring them into more of an AI mindset? You've got a very bold program at Microsoft reaching across colleges, but you're also active, I know, in creating the capabilities within the workplace. How much of this, to both of you, is carrot or stick? I was at a dinner in D.C. a few weeks ago where the head of a large media group had told his team they had to be two times more productive by the end of '25, using AI, to stay in their roles — and ten times more productive by the end of '26.

That was an expectation. But it was set very much as a minimum standard and goal. They were putting training programs in place, but there was a clear metric to achieve. What’s your perspective based on how you’ve seen this work?

Amanda Craig Deckard

You mean internally?

Brad Staples

Either within Microsoft or within the companies that you collaborate with in training.

Amanda Craig Deckard

In our experience, I think we are leaning much more in the direction of using carrots. So we have a lot of programs internally that are a mix of tactics, which I think is important. There's the day-long or week-long training program, which I think is really valuable — it gives you an opportunity to really dig in — but it is also really difficult to find the time for. So we actually have weekly tips for how colleagues in similar roles are using Copilot internally, for example, to have more efficiency in their work. And I feel like that's the kind of thing where, you know, is that skilling, is that training?

I don’t know, but it certainly is helpful because that’s the kind of thing that in my day -to -day job I can look to and integrate much more easily. And the other thing that we’ve started doing is hackathon -type exercises internally that are not just oriented towards engineering communities, but actually our corporate external legal affairs group, which is not just lawyers, but is a lot of lawyers, for example, having a hackathon that’s really meeting that community where we are and building a Copilot to serve our kind of day -to -day work. And so a lot of, like, different kind of carrot approaches is what we’re doing internally and where we see, I personally can say, like, I feel especially the latter two, it’s just hard to find, like, time to do a deep training program.

But if you integrate it into your day-to-day work and make it easy with these kinds of carrots, you can really start seeing the impact, and that motivates you to use the technology more.

Amit Chadha

So, stick is out of the window — you can't do that anymore, right? But we use carrots and budgets. Okay? When I say carrots, it's basically appealing to the individual and their glorification. So if it's a patent, you're filing it — the company doesn't own it, you own it, right? If there's a paper, you're writing it. If you're speaking at a symposium, you're doing it, right? And that allows them to think. And then we've actually spent a lot of time through HR to explain that with the pace of change of technology, if you don't upskill, if you don't change, you are facing extinction in about five to ten years' time. Gone are the days where you can stay on the same technology for 30 years — it will not work, right?

So we drive home the message, provide that, and then provide the push. We glorify people that file patents — we glorify them within the company — so that's one. Second, when I come to budgets, we leverage budgets with our segment heads. They're given training budgets, and we also give them headcount budgets and say you can't exceed them. So we've been able to improve productivity with AI: we used to run at a utilization of about 73% five years ago — the productivity metric all service companies track. We're already at 83%, and I think I can push this up another 2% in terms of productivity levels in the company, again leveraging AI. That's the budget approach that we use, but with the seniors.

So it’s a mix of both if I may to be able to manage this and motivate this. But it’s an ongoing exercise.

Brad Staples

It’s fascinating maybe we’ll come back to it as we talk to a close. Let me shift gears a little bit and talk a bit more about security and trust and come to you Julian if I can. So I think we’ve recognized and we’ve heard it in different conversations this week that there’s a trust deficit around the use of AI, certainly in a public context. There is some fear, suspicion, and anxiety in a global context. I’m not talking just about India. YouGov carried out a survey in the U .S. last month, and in the context of fintech, they found that less than 20 % of Americans trust AI in financial services. And they’re also sort of struggling, I think, with some of the cybersecurity questions and issues, which you’re very well placed to address.

So if public trust in AI remains fragile and AI-specific cyber risks are growing, which they clearly are, what are the immediate steps that industry should prioritize to counter those threats — things like prompt injection attacks — and how can these solutions be scaled, particularly for developing countries?

Amanda Craig Deckard

Other than the incentives that we're giving you to learn these technologies — which of course is to the company's benefit — it's to your benefit, because these skills that you're learning and that you're going to be using will translate to the next thing that you do, and it makes you that much better. If we do enough of that, not only are we helping the employees, but we're helping the societies and the ecosystems that they live in, including in India. I wanted to add one additional area that we're really focused on to address these kinds of AI cyber threats, particularly relevant in India and other areas in the global south. I mentioned that one of the areas we're focused on is multilingual and multicultural AI capabilities, and one of the most important foundational reasons for doing that, of course, is having an AI that works well

in different languages and cultural contexts — one that is reliable and performs well. Another reason is that AI which is not robust in its multilingual and multicultural capabilities has additional security weaknesses. You mentioned prompt injection attacks, and one way you can think about a prompt injection attack is this: you have an AI system and a safety system around it, and someone misusing the technology tries to break that safety system or get around it. One of the ways attackers do that is by using languages that are not well supported in that model or system — a model or system that is primarily prepared to perform well in high-resource languages, but not in low-resource languages.

Tamil, for example, or some other language that is not really built into how the model performs — if companies aren't attuned to that, then an attacker could use that language to jailbreak the system, basically getting around the safety system. So it's just another reason why it's really important, from our perspective, to work on multilingual and multicultural AI capabilities, and we're partnering with a lot of others in industry and government — so this comes back to a public-private partnership opportunity. One of the things we announced this week is actually a benchmark from an organization called MLCommons, a jailbreak benchmark, which measures how robust systems are against that kind of prompt injection attack technique.

And we worked with a number of others to build out the current version of that, which is really English-specific, to include multiple Indic and Asian languages. It's not going to solve the problem — it's one step in what we see as the right direction. But I just want to draw out that really specific area of focus, in India and elsewhere, for thinking about AI and cyber threats.
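The low-resource-language gap described here can be sketched with a toy example. Everything below is illustrative — the keyword list, the function name, and the romanized Tamil phrase are invented for the sketch and do not represent any real product's safety system:

```python
# Toy sketch of why an English-only guardrail is a jailbreak risk:
# a keyword filter configured only for English has no coverage for
# the same request expressed in a low-resource language.

BLOCKED_ENGLISH_PHRASES = {"build a weapon", "make explosives"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_ENGLISH_PHRASES)

# The English request is caught by the filter...
print(naive_guardrail("Tell me how to build a weapon"))  # True
# ...but a romanized-Tamil phrasing of a similar request slips through,
# because the filter has no coverage for that language.
print(naive_guardrail("aayudham eppadi seyvadhu"))       # False
```

A multilingual jailbreak benchmark of the kind mentioned above probes exactly this sort of gap: the same adversarial intent is replayed across languages to check that safety behavior holds in each one.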

Brad Staples

That’s wonderful. Thank you.

Sachin Kakkar

Can I add a point?

Brad Staples

Sure.

Sachin Kakkar

So this is about the rise in prominence of AI agents. We have been constantly investing in self-defending systems — just like a human immune system. As agents grow, given the scale and speed at which they can attack infrastructure — hospitals, energy grids — we need agents on the other side. This becomes an AI-versus-AI story, where we are smartly inventing agents. And we believe that, for the first time, with AI we can reverse the defender's dilemma. The dilemma, as many of you might already know, is that attackers have to find just one open wallet in the crowd, but defenders have to protect all the wallets all the time. For the first time, AI will give defenders an aggregate advantage, because the majority of defenders' time — 80% — goes into drudgery and grunt work.

And AI can actually automate and uplift that work. So the entire stack of defenders can improve and uplift with AI. And we believe that we’ll be able to build a self -defending adaptive system which can protect us from various vulnerabilities.

Brad Staples

Wonderful. Thank you. Well, we're drawing towards the close of the session, and it's been a very rich conversation. I just wanted to take a step back and ask you all — most of you have been here all week, and you've heard a whole host of different interventions and some very significant investments and initiatives. What are your conclusions? What's changed in your perspective when you look at AI for the future from your own vantage point? What has this event given you a new perspective on, or crystallized in your minds? Let me go back to Lee. Do you want to share your thoughts?

Lee Tiedrich

It’s reinforced for me, you know, something I’ve seen through a lot of my international work with OECD, with global partnership on AI, just the need for the global cooperation, and not just at the government level, but among all different types of stakeholders, you know, within academia, within industry, within civil society, and working together. And I think, you know, we can sort of pause at this moment and say, you know, if you look at the safety report, we’ve made a lot of progress over the last few years, but we need to continue to work together and not just focus on the harms and the risks that AI can have, but think about the benefits. You know, if we are able to leverage AI, we might be able to, you know, help achieve some of the UN Sustainable Development Goals.

I think one other thing I want to just kind of enter into the mix, you know, the customization of AI for different regions also depends upon data. And a lot of my work has focused on, you know, how do we create voluntary foundations so we can exchange data more easily? Like right now, we don’t have data standardization. So if I want to exchange my data with any of you, my data may be in a different format. As a former lawyer, a lot of my work is also focused on we don’t even have standard agreements. So if we want to exchange data, how can we easily transact and not have all that friction and transaction costs?

You know, we don’t have the Creative Commons licenses right now for data. And if we’re ever going to get to that localization and that ideal point where we’re customizing for different cultures, we’re going to have to have a lot of different tools. we’re going to have to figure out ways where we can voluntarily and responsibly share data. And this has been part of the discussion, but hearing the conversations over the past week kind of underscore the need to continue to advance that work while we work on some of the other topics that we’ve been discussing.

Brad Staples

Great. Julian?

Julian Waits

More than anything, what this week has taught me is I’m old and this industry is moving.

Brad Staples

Okay, so stop saying you’re old. You don’t look old. You look great.

Julian Waits

This industry is moving so quickly. Again, skills that are needed and considered important today will no longer be necessary in five years. And if the workforce, and the users of the technologies, aren't evolving with it, we all fall behind. So while there's a great advantage and opportunity in using AI, the danger is it can also make us obsolete at the same time. We need to be very careful of that — of how we use it — and of how we help, hopefully, to promote this throughout the world in a way that makes it equitable for everyone.

Brad Staples

Great. Thank you. Amit, Sachin, any reflections?

Sachin Kakkar

Yeah, I think one of my big takeaways from this week was that some parts of the world are focused on AI as an influence, some parts are focused on governance of AI, and I think India is focused on the impact of AI at the grassroots level. Thinking about how AI will impact a farmer or a small school or an NGO or a small hospital has been the focus. And it resonates with me, because the mission of my team is to keep everyone safe at scale. And when I say everyone, it's not just about Google or Alphabet, or just about our billions of users, but the entire society — everyone at scale. How to make sure we become the architects, and not just the consumers, of AI, and make sure it reaches the grassroots level, is one area to think about.

Amit Chadha

I agree with that. So, of course, outside of the traffic bit, right? What I've learned, if you ask me, in the whole week — and I've been in this business for, I don't want to date myself, so say a couple of decades and we'll leave it there — is this. People used to say India is a back office; that's how it started in the '90s. Then Y2K happened and they said the IT industry would be over, because Y2K was all there is. Today, the IT and engineering industries together are $600 billion. We moved forward. Then people asked: are you going to take data?

Is data going to get leaked? And then COVID came, and India proved yet again that not a single data leak happened from India Inc anywhere. There are some draconian rules — we don't allow our employees to use USBs, and so on — but the net result is zero data leakage, absolute privacy, and the government comes down very heavily if they find something like this. So they've been able to create a safe environment. Move forward again: people used to ask, is India a market? This last week — and forget technology companies — if you just walk the floors, you see companies like Schneider, companies like Vertiv, and others, and they are developing products for India, in India. You're developing products for the world from India, and it's no longer just a cost base.

So if I were to say there's one thing that I've learned in the last week, it is that India is no longer the back office for AI — it is actually the front office for AI for the world. And that's the net summary that I would draw from the entire week that I've been here.

Brad Staples

Thank you, that’s very funny Bill

Amanda Craig Deckard

And, zooming out to the highest level, one of the things that I really genuinely felt this week, which has been very exciting to me, is that there is a lot of energy around how to deploy this technology and how to have impact. It's been actually really fun to be in a lot of sessions with students and entrepreneurs — you can really feel the energy. And I feel that the conversation around governance has come along and felt integrated in a really genuine way as well. If we look at the summit series that kicked off a few years ago at Bletchley, I think it's fair to say that early on, the emphasis of the conversation felt very safety- and security-heavy. Last year in France, there was a big pivot to trying to think about the opportunity.

And what I see in India this week is a genuine integration and deepening of those conversations. So really: what do we mean when we say impact? What do we really want to see in deploying this technology? And then not taking for granted that, of course, governance has to come along with that. You have to really do the deep, hard work around things like multilingual AI, and there's a real need for partnership in moving those things forward. There's also a real need to think about governance steps so that you can have trust in this technology — India actually just passed a law last week on how to mark AI-generated content.

There’s a real sort of recognition that some of those steps are going to be important. And you don’t want to stop or have those steps sort of prevent deployment of the technology or realization of the benefits. But, like, you know, we have to do the deep work together to sort of move. Forward across a dime. A dime. and impact and governance together.

Brad Staples

Thank you. Thanks, Amanda. We've got a few minutes if anyone would like to chip in. Great — hands are going up. The room has filled, by the way, while we've been going along, and it's been a great conversation. Let's hand one or two mics out to colleagues around the room, if we can — to the lady here in the front.

Audience

Hello? Hello? Okay. Right. Thanks — and I appreciate the comments on the traffic; I think we've all got a traffic story. Now, I hear a lot of talk about upskilling and co-creation, which are all very important things, and I agree. But what I'm also hearing a lot about, and I'm sure you all are too, is the speed of this technology, which could potentially outpace some of these efforts. So my question — and this goes to anyone who might want to answer or has some real thoughts on it — is: what do you think the gaps might be that we would need to address in the transition between upskilling and real economic displacement?

Brad Staples

Who can grab that? Yeah — you've got the mic, Julian, you're going to give it a go.

Julian Waits

It’s a real problem right meaning technology is moving so quickly as I said years ago I would tell young people in technology learn to be the best programmer you can. Now with agentic AI especially with the usage of MCP where you can have multiple agents talking to each other sharing information it’s now learning to be the best user and prompter of the technology understanding the outcomes but there’s gonna be some displacement. It’s you know right now I would tell you AI especially in the security context I can probably eliminate 60 % of the things that humans have to look at today. but there’s still the 40 % where a human has to be involved to make a determination around risk to an entity, whether it’s a government, whether it’s defense, whether it’s a business.

And so it’s really helping them evolve to this next level of user, this next level of programmer, if you want to call it that. And there probably will be some displacement that we just can’t get around.

Brad Staples

Gentleman in the front.

Audience

I actually have an extension of the same concern that the lady shared. The speed is one aspect, but I also think there's a whole information arbitrage between the people who are creating and pioneering in the AI space and the others the information is reaching — and I sense the possibility of an impact on power polarization and even on democracies. A lot of the conversation I hear today assumes that AI is moving linearly, but I see it moving exponentially — with a polarizing effect. Yes, both: the polarizing effect and the displacement effect, like the 40% that Julian just spoke about. For me, that 40% is not really 40%.

It’s just that we want to be very, very careful. But if we were to not care so much about how accurate and how much data standards we have. It could be 100%. You know, it’s very large. You know, I think the displacement can happen very fast. So I’m really concerned about how things are moving. I’m not sure if my concern is being shared by people in the panel.

Brad Staples

Anyone want to respond?

Lee Tiedrich

I mean, I think we need to focus on AI literacy, because, again, the technology is moving so fast. How do we make sure people in their everyday lives, people in the workforce, have access to education so they can continue to upskill? And — being in academia after having been in the private sector for, we won't go into how many, decades — I also think about teaching students how to think. When you're looking at your career trajectory, it's not just coming out of college with a set of skills; it's learning how to think and how to problem-solve. And I think the public-private partnerships with academia that Amanda mentioned are really important, because a lot of times the tenured faculty don't know how to teach that to students, and bringing people in to tell them — this is how you adapt, this is what you're going to expect in your career — matters. I say this not only from the perspective of being in academia but from having two children of my own in their 20s who are just starting their careers: expect the unexpected, and learn how to be on your toes. A lot of it is just having good analytical skills and good communication skills. If you have those core skills, you're going to be able to adapt, and it will carry forward into the future.

Brad Staples

Great. I think we’ve got time for one more question. Okay, gentle. Oh, the lady who, sorry, the lady who has the mic. She has the mic.

Audience

Thank you so much. My name is Rita Soni. I work with a company that's operating in small-town India, delivering all the tech services that many of these companies are doing. And my question is actually for Amanda, because I think she was the only one who really brought up the digital divide that continues to exist, both in India and across the globe. I actually didn't feel I heard very much about how to actually bridge it. Yesterday I didn't have one of those special passes to go to the events on the 19th, so instead I visited a local nonprofit called the Digital Empowerment Foundation, which has been around for more than 20 years, doing incredible work in rural India. And they're simply talking about last-mile Internet connectivity — let alone the enablement or ease of use,

And they’re simply talking about last -mile Internet connectivity, let alone the enablement or ease. in the critical thinking that Lee just mentioned. So just a few more words on how it is that we can bridge this digital divide and make it more equitable, because I think the more folks are going to be excluded, the more different kinds of problems that we’re going to have.

Amanda Craig Deckard

Yeah — and I think you may have come in after we talked briefly about some of the work that we're doing to address the digital divide. For a lot more words on it, I would point you to the blog we published on Wednesday, where we talked about investments in five areas to close the gaps that we see. We actually point to the work that we've done using our own telemetry to track these gaps and their trajectory, and we really lifted up our own concerns about that trajectory. Among the areas of investment, infrastructure is really foundational, and we do talk in the blog about infrastructure in terms of AI

compute capacity, but also the fundamentals beyond that — connectivity and energy access are really important as well. Then we talked about scaling multilingual and multicultural AI capabilities, and really working with local partners on local use cases — the kind of deep work we can do to help bring the technology to people. Even in agriculture, for example, we at Microsoft Research have done a lot of projects in close collaboration with local communities, trying to see how the technology could serve them, and also learning how the technology needs to evolve in order to do so better. And then, taking a step back, we continue to study diffusion so we understand: are our interventions working?

Are they not? If not, what can we learn, and how can we improve how we're intervening?

Brad Staples

Okay, so time’s up, everyone. Thank you so much for your contributions and for joining us at different points during the conversation. Thanks to the panelists for a really rich and diverse conversation. It’s been a real pleasure to have you with us. And I think we end with a sense of optimism that no matter what the challenges of the digital divide and those other elements, there’s probably an AI solution to the AI challenges that we’re creating. Thanks. Thank you. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Lee Tiedrich
4 arguments · 190 words per minute · 1193 words · 374 seconds
Argument 1
Need for accelerated international standards development while recognizing cultural customization requirements
EXPLANATION
Lee argues that while international standards organizations like ISO are making progress, there’s still a significant gap that needs to be filled more rapidly. He emphasizes the tension between creating standards that can apply across borders while also being customizable for different cultures, languages, and norms.
EVIDENCE
ISO has already released one standard, ISO/IEC 42001, which is a good start, but the pace needs to accelerate. NIST is working on a zero draft for the ISO process and collecting stakeholder input. Examples include the Hiroshima AI Process and coordination between AI safety institutes across different countries.
MAJOR DISCUSSION POINT
International AI Standards and Collaboration
AGREED WITH
Sachin Kakkar
Argument 2
Bottom-up approach starting with technical evaluation frameworks before regulation to avoid premature regulatory constraints
EXPLANATION
Lee advocates for developing scientific foundations and evaluation techniques first before rushing into regulation. He warns that when regulators move too quickly without understanding the rapidly evolving technology, they may create well-intentioned but ineffective regulations.
EVIDENCE
Drawing from 30 years of experience working with private sector, government, and academia. References the international AI safety report showing progress in evaluations and developing evidence over the past year.
MAJOR DISCUSSION POINT
International AI Standards and Collaboration
DISAGREED WITH
Amit Chadha
Argument 3
Need for data standardization and voluntary sharing frameworks to enable AI customization for different regions
EXPLANATION
Lee emphasizes that customizing AI for different regions depends heavily on data availability and standardization. He points out current barriers including lack of data format standardization and absence of standard agreements for data exchange.
EVIDENCE
Currently no data standardization exists: parties wanting to exchange data may find their formats differ. No standard agreements exist for data transactions, and there is no Creative Commons-style licensing for data. References his work on voluntary foundations for easier data exchange.
MAJOR DISCUSSION POINT
AI Impact and Economic Transformation
Argument 4
Importance of AI literacy and teaching students how to think and problem-solve rather than just specific skills
EXPLANATION
Lee argues that given the rapid pace of technological change, education should focus on developing analytical thinking and problem-solving abilities rather than just teaching specific technical skills. He emphasizes the importance of public-private partnerships with academia to achieve this.
EVIDENCE
Drawing from experience in academia after decades in private sector, and having two children in their 20s starting careers. Notes that tenured faculty often don’t know how to teach adaptability skills to students.
MAJOR DISCUSSION POINT
Technology Displacement and Future Challenges
AGREED WITH
Amit Chadha, Julian Waits
Sachin Kakkar
6 arguments · 149 words per minute · 1152 words · 462 seconds
Argument 1
Importance of localizing global standards rather than copy-pasting regulations, with continuous auditing as AI evolves
EXPLANATION
Sachin argues that simply copying international regulations to local markets doesn’t work effectively. He emphasizes the need to understand local needs and constraints, and implement continuous monitoring rather than one-time audits as AI technology evolves.
EVIDENCE
Google launched IndicGenBench, a test bench for fine-tuning and assessing LLMs in local languages, supporting 29 Indian languages, 12 scripts, and 4 language families. This demonstrates the need for localization.
MAJOR DISCUSSION POINT
International AI Standards and Collaboration
AGREED WITH
Lee Tiedrich
Argument 2
Standards should be enablers and equalizers, not barriers, requiring co-creation between developers and governments
EXPLANATION
Sachin advocates for moving away from traditional technology transfer approaches toward collaborative co-creation between developers and governments. The goal is to ensure standards and regulations facilitate rather than hinder innovation and equality.
EVIDENCE
Three specific collaboration dimensions: open source frameworks and interoperability standards, capacity building, and workforce upskilling and research. Google has been working on AI for decades and shared foundational work like the transformer paper.
MAJOR DISCUSSION POINT
International AI Standards and Collaboration
AGREED WITH
Amanda Craig Deckard, Amit Chadha
Argument 3
Open source frameworks, capacity building, and workforce upskilling as key collaboration dimensions between developers and governments
EXPLANATION
Sachin outlines three specific areas where developers and governments can collaborate effectively: sharing open source frameworks and standards, building local capacity through tools and infrastructure, and investing in workforce development and research.
EVIDENCE
Google open-sourced SAIF (Secure AI Framework) for supply chain risk management and helped build CoSAI (Coalition for Secure AI) with industry partners, which is expanding in APAC including India. Google is also sharing tools like SynthID watermarking technology and providing grants to IITs for research in areas like post-quantum cryptography.
MAJOR DISCUSSION POINT
Public-Private Partnership and Capacity Building
Argument 4
Sharing of tools like SynthID for AI-generated content detection and secure AI frameworks with industry partners
EXPLANATION
Sachin describes Google’s commitment to sharing practical tools and threat intelligence with the broader community to ensure AI safety and security. This includes both detection tools and preventive frameworks.
EVIDENCE
SynthID is a watermarking technique that works across text, image, video, and audio to identify AI-generated content. Google proactively shares threat intelligence and has built the CoSAI coalition with various industry partners.
MAJOR DISCUSSION POINT
Public-Private Partnership and Capacity Building
Argument 5
Self-defending systems using AI agents to reverse the defender’s dilemma and automate 80% of defensive drudgery work
EXPLANATION
Sachin argues that AI agents can fundamentally change cybersecurity by creating self-defending systems that give defenders an advantage for the first time. He describes this as an ‘AI versus AI’ scenario where defensive AI can automate routine security work.
EVIDENCE
The defender’s dilemma: attackers only need to find one vulnerability while defenders must protect everything all the time. AI can automate 80% of defenders’ drudgery and routine work, allowing the entire defensive stack to improve and uplift.
MAJOR DISCUSSION POINT
AI Security and Trust
AGREED WITH
Julian Waits, Amanda Craig Deckard
Argument 6
India focused on grassroots AI impact for farmers, schools, NGOs, and hospitals rather than just governance or influence
EXPLANATION
Sachin observes that while some parts of the world focus on AI as influence or governance, India’s approach centers on practical, ground-level impact for ordinary citizens and institutions. This reflects Google’s mission to keep everyone safe at scale.
EVIDENCE
Examples include impact on farmers, small schools, NGOs, and small hospitals. Google’s mission is to keep everyone safe at scale, not just Google users but entire society, focusing on becoming architects rather than just consumers of AI.
MAJOR DISCUSSION POINT
AI Impact and Economic Transformation
Amanda Craig Deckard
5 arguments · 180 words per minute · 1537 words · 509 seconds
Argument 1
Holistic approach needed including infrastructure, multilingual AI capabilities, and local partnerships to scale skills development
EXPLANATION
Amanda argues that addressing the skills gap requires a comprehensive strategy that goes beyond just training programs. This includes supporting all enabling infrastructure from basic connectivity to AI compute capacity, and working with diverse local partners.
EVIDENCE
Microsoft published a blog outlining five investment areas: hard infrastructure, AI compute capacity scaling, multilingual/multicultural AI capability, local AI deployment partnerships, and measuring diffusion for informed interventions.
MAJOR DISCUSSION POINT
Skills Gap and Workforce Development
AGREED WITH
Sachin Kakkar, Amit Chadha
Argument 2
Microsoft’s commitment to upskill 20 million Indians by 2030 through educator programs and government partnerships
EXPLANATION
Amanda describes Microsoft’s ambitious scaling of their skills development program in India, doubling their original commitment. The approach focuses on training educators to achieve scale and working with various government ministries.
EVIDENCE
Originally committed to upskill 10 million Indians by 2030, achieved 5.6 million in the first year, then doubled commitment to 20 million. Launched Microsoft Elevate for Educators program partnering with schools, vocational institutes, and higher education institutions.
MAJOR DISCUSSION POINT
Skills Gap and Workforce Development
Argument 3
Carrot-based approaches more effective than mandates, using weekly tips, hackathons, and integrated day-to-day work applications
EXPLANATION
Amanda advocates for incentive-based rather than mandate-based approaches to workforce AI adoption. She emphasizes making AI training accessible and integrated into daily work rather than requiring separate intensive training programs.
EVIDENCE
Microsoft uses weekly tips showing how colleagues use Copilot, hackathon exercises for non-engineering groups like legal affairs, and integration into day-to-day work rather than separate deep training programs that are hard to find time for.
MAJOR DISCUSSION POINT
Public-Private Partnership and Capacity Building
AGREED WITH
Amit Chadha
DISAGREED WITH
Amit Chadha
Argument 4
Multilingual AI robustness critical for security, as attackers exploit low-resource language vulnerabilities for prompt injection attacks
EXPLANATION
Amanda explains that AI systems with poor multilingual capabilities have additional security weaknesses. Attackers can use languages that aren’t well-supported in AI models to bypass safety systems through prompt injection attacks.
EVIDENCE
Worked with ML Commons and others to expand jailbreak benchmarks from English-specific to include multiple Indic and Asian languages. If a model performs well in high-resource languages but not low-resource languages like Tamil, attackers can exploit this gap.
MAJOR DISCUSSION POINT
AI Security and Trust
AGREED WITH
Sachin Kakkar, Julian Waits
Argument 5
Digital divide remains a critical challenge requiring infrastructure investment and local community collaboration
EXPLANATION
Amanda acknowledges that despite technological advances, fundamental digital access issues persist globally. She emphasizes the need for basic infrastructure investment and deep collaboration with local communities to address these gaps.
EVIDENCE
Microsoft’s five-area investment plan includes infrastructure (connectivity, energy access), working with local communities on use cases in agriculture and other sectors, and continuing to study diffusion to understand if interventions are working.
MAJOR DISCUSSION POINT
Technology Displacement and Future Challenges
Amit Chadha
4 arguments · 164 words per minute · 1854 words · 675 seconds
Argument 1
40-50% of current engineering work is new from last five years, with 60% of today’s work becoming obsolete in 3-5 years
EXPLANATION
Amit emphasizes the unprecedented pace of change in engineering and technology work, highlighting how rapidly skills become obsolete and new capabilities emerge. This creates an urgent need for continuous adaptation and learning.
EVIDENCE
As CEO of L&T Technology Services for five years, growing from 15,000 to 25,000 employees, he has observed this transformation firsthand in engineering consulting work.
MAJOR DISCUSSION POINT
Skills Gap and Workforce Development
AGREED WITH
Julian Waits, Lee Tiedrich
Argument 2
Three-pronged approach: college curriculum updates, upskilling current workforce while billable, and encouraging personal technology development time
EXPLANATION
Amit outlines a comprehensive strategy for addressing skills gaps that includes working with educational institutions, training employees during productive work time rather than downtime, and motivating personal investment in technology development.
EVIDENCE
Company sends employees to teach in colleges using CSR hours, partners with NASSCOM for skill development. Improved productivity from 73% to 83% utilization while upskilling. 52% of workforce now spends personal time on technology development (up from 19%), resulting in increased patent filings from 50 to 200 per year.
MAJOR DISCUSSION POINT
Skills Gap and Workforce Development
AGREED WITH
Sachin Kakkar, Amanda Craig Deckard
Argument 3
Combination of individual recognition (patents, papers) and budget-based productivity improvements to motivate workforce development
EXPLANATION
Amit describes using both personal incentives (allowing employees to own patents and gain recognition) and business metrics (productivity targets and budget constraints) to drive workforce development and AI adoption.
EVIDENCE
Employees own their patents and get recognition for papers and speaking engagements. Company improved productivity from 73% to 83% utilization using AI, with segment heads given both training budgets and headcount constraints to manage.
MAJOR DISCUSSION POINT
Public-Private Partnership and Capacity Building
AGREED WITH
Amanda Craig Deckard
DISAGREED WITH
Amanda Craig Deckard
Argument 4
India transitioning from back office to front office for AI development, creating products for global markets
EXPLANATION
Amit argues that India has evolved from being merely a cost-effective service provider to becoming a center for AI innovation and product development for global markets. He traces this evolution through various technological transitions.
EVIDENCE
Historical progression: India started as ‘back office’ in 1990s, survived Y2K predictions of industry collapse to reach $600 billion IT/engineering industry, proved data security during COVID with zero leakages, and now sees companies like Schneider and Vertiv developing products in India for global markets.
MAJOR DISCUSSION POINT
AI Impact and Economic Transformation
Julian Waits
2 arguments · 172 words per minute · 437 words · 152 seconds
Argument 1
Mandatory use of agentic technologies by employees, especially in developing countries, with focus on STEM education integration
EXPLANATION
Julian describes Rapid7’s approach of requiring employees to use AI agent technologies, particularly emphasizing this for workers in developing countries. He also highlights the importance of STEM education being embedded in learning systems.
EVIDENCE
Rapid7 has mandated use of agentic technologies by employees. Notes that the US relies heavily on foreign workers because STEM technology is embedded in how other countries learn, while the US education system lags behind.
MAJOR DISCUSSION POINT
Skills Gap and Workforce Development
Argument 2
Industry moving too quickly for traditional security approaches, requiring evolution to next-level users and prompters of technology
EXPLANATION
Julian argues that the rapid pace of AI development is fundamentally changing cybersecurity work, requiring professionals to evolve from traditional programming to becoming expert users and prompters of AI systems.
EVIDENCE
Previously advised young people to become the best programmers possible, but now, with agentic AI and MCP (Model Context Protocol) enabling multiple agents to share information, the focus shifts to being the best user and prompter. AI can eliminate 60% of security tasks, but 40% still requires human risk determination.
MAJOR DISCUSSION POINT
AI Security and Trust
AGREED WITH
Sachin Kakkar, Amanda Craig Deckard
Brad Staples
1 argument · 151 words per minute · 1335 words · 529 seconds
Argument 1
Risk of 70% of AI economic value concentrating in Western economies and China without intentional democratization efforts
EXPLANATION
Brad opens the discussion by highlighting the risk that AI’s economic benefits could become concentrated in already developed economies unless deliberate action is taken. He argues this outcome is not inevitable but requires intentional design and international collaboration.
EVIDENCE
Some estimates suggest that 70% of AI’s economic value could be created and reside in Western economies and China if current trends continue.
MAJOR DISCUSSION POINT
AI Impact and Economic Transformation
Audience
1 argument · 157 words per minute · 519 words · 198 seconds
Argument 1
Concern about exponential rather than linear AI development creating rapid economic displacement and power polarization
EXPLANATION
Audience members express concern that AI is developing exponentially rather than linearly, creating information arbitrage between AI creators and others. They worry about rapid economic displacement and the polarizing effects on power structures and democracies.
EVIDENCE
Questions the assumption that AI is moving linearly when it appears to be moving exponentially. Suggests that the 40% of work requiring human involvement could actually be much smaller if accuracy standards were relaxed.
MAJOR DISCUSSION POINT
Technology Displacement and Future Challenges
Agreements
Agreement Points
Need for localized rather than universal AI standards and regulations
Speakers: Lee Tiedrich, Sachin Kakkar
Need for accelerated international standards development while recognizing cultural customization requirements
Importance of localizing global standards rather than copy-pasting regulations, with continuous auditing as AI evolves
Both speakers agree that while international standards are important, they must be customized for local cultures, languages, and constraints rather than simply copying global frameworks
Importance of public-private partnerships for AI development and deployment
Speakers: Sachin Kakkar, Amanda Craig Deckard, Amit Chadha
Standards should be enablers and equalizers, not barriers, requiring co-creation between developers and governments
Holistic approach needed including infrastructure, multilingual AI capabilities, and local partnerships to scale skills development
Three-pronged approach: college curriculum updates, upskilling current workforce while billable, and encouraging personal technology development time
All three speakers emphasize the critical need for collaboration between government, industry, and educational institutions to effectively develop and deploy AI technologies
Rapid pace of technological change requiring continuous adaptation and learning
Speakers: Amit Chadha, Julian Waits, Lee Tiedrich
40-50% of current engineering work is new from the last five years, with 60% of today's work becoming obsolete in 3-5 years
Industry moving too quickly for traditional security approaches, requiring evolution to next-level users and prompters of technology
Importance of AI literacy and teaching students how to think and problem-solve rather than just specific skills
All speakers acknowledge the unprecedented speed of technological change and the need for continuous skill development and adaptation rather than one-time training
Preference for incentive-based rather than mandate-based approaches to AI adoption
Speakers: Amanda Craig Deckard, Amit Chadha
Carrot-based approaches more effective than mandates, using weekly tips, hackathons, and integrated day-to-day work applications
Combination of individual recognition (patents, papers) and budget-based productivity improvements to motivate workforce development
Both speakers advocate for positive incentives and recognition rather than punitive measures to encourage AI adoption and skill development
AI security requires proactive, AI-powered defense systems
Speakers: Sachin Kakkar, Julian Waits, Amanda Craig Deckard
Self-defending systems using AI agents to reverse the defender's dilemma and automate 80% of defensive drudgery work
Industry moving too quickly for traditional security approaches, requiring evolution to next-level users and prompters of technology
Multilingual AI robustness critical for security, as attackers exploit low-resource language vulnerabilities for prompt injection attacks
All speakers agree that traditional cybersecurity approaches are insufficient and that AI-powered defensive systems are necessary to counter AI-enabled threats
Similar Viewpoints
Both emphasize the importance of building technical foundations and understanding local needs before implementing top-down solutions, whether in regulation or digital access
Speakers: Lee Tiedrich, Amanda Craig Deckard
Bottom-up approach starting with technical evaluation frameworks before regulation to avoid premature regulatory constraints
Digital divide remains a critical challenge requiring infrastructure investment and local community collaboration
Both speakers view India as moving beyond being a service provider to becoming an innovation center focused on practical, ground-level AI applications
Speakers: Sachin Kakkar, Amit Chadha
India focused on grassroots AI impact for farmers, schools, NGOs, and hospitals rather than just governance or influence
India transitioning from back office to front office for AI development, creating products for global markets
Both emphasize the critical importance of multilingual AI capabilities and sharing security tools to address vulnerabilities and ensure inclusive AI development
Speakers: Amanda Craig Deckard, Sachin Kakkar
Multilingual AI robustness critical for security, as attackers exploit low-resource language vulnerabilities for prompt injection attacks
Sharing of tools like SynthID for AI-generated content detection and secure AI frameworks with industry partners
Unexpected Consensus
Regulation should follow rather than precede technical understanding
Speakers: Lee Tiedrich, Amit Chadha
Bottom-up approach starting with technical evaluation frameworks before regulation to avoid premature regulatory constraints
Three-pronged approach: college curriculum updates, upskilling current workforce while billable, and encouraging personal technology development time
Unexpected consensus between an academic/government advisor and a business CEO that regulation can stifle innovation if implemented too quickly without proper technical foundation
AI will fundamentally change rather than just augment human work
Speakers: Julian Waits, Amit Chadha, Audience
Industry moving too quickly for traditional security approaches, requiring evolution to next-level users and prompters of technology
40-50% of current engineering work is new from the last five years, with 60% of today's work becoming obsolete in 3-5 years
Concern about exponential rather than linear AI development creating rapid economic displacement and power polarization
Broad consensus across industry practitioners and audience that AI represents a fundamental transformation rather than incremental change, with significant implications for workforce displacement
Overall Assessment

Strong consensus on need for localized AI approaches, public-private partnerships, continuous learning, incentive-based adoption, and AI-powered security. Unexpected agreement on regulation timing and transformational nature of AI change.

High level of consensus across diverse stakeholders (industry, academia, government) suggests mature understanding of AI challenges and practical approaches to address them. This alignment could facilitate more effective policy development and implementation strategies.

Differences
Different Viewpoints
Timing and approach to AI regulation
Speakers: Lee Tiedrich, Amit Chadha
Bottom-up approach starting with technical evaluation frameworks before regulation to avoid premature regulatory constraints
"Too much of regulation can stifle innovation as well. So we've got to be careful on how much we do and where we take it."
Lee advocates for developing scientific foundations and evaluation techniques first before regulation, while Amit warns that excessive regulation can stifle innovation. Both are cautious about regulation but Lee emphasizes building technical frameworks first, while Amit focuses on limiting regulatory scope.
Workforce development approach – mandates vs incentives
Speakers: Amanda Craig Deckard, Amit Chadha
Carrot-based approaches more effective than mandates, using weekly tips, hackathons, and integrated day-to-day work applications
Combination of individual recognition (patents, papers) and budget-based productivity improvements to motivate workforce development
Amanda emphasizes purely carrot-based approaches with integrated learning, while Amit uses a combination of carrots (recognition) and budget constraints/productivity targets as motivational tools. Amit explicitly states ‘stick is out of the window’ but still uses budget pressures.
Speed and scale of AI displacement
Speakers: Julian Waits, Audience
AI can eliminate 60% of security tasks but 40% still requires human risk determination
Suggests that the 40% of work requiring human involvement could actually be much smaller if accuracy standards were relaxed
Julian maintains that 40% of work will still require human involvement for risk determination, while audience members argue this percentage could be much smaller if accuracy standards were relaxed, suggesting more rapid and complete displacement is possible.
Unexpected Differences
Fundamental nature of AI development trajectory
Speakers: Multiple panelists, Audience
Various speakers discuss linear progression and manageable transitions
Concern about exponential rather than linear AI development creating rapid economic displacement and power polarization
While panelists generally discussed AI development as manageable with proper planning and gradual transitions, audience members challenged this assumption by arguing AI is developing exponentially rather than linearly, creating more urgent displacement concerns. This represents a fundamental disagreement about the pace and nature of AI advancement.
Overall Assessment

The discussion revealed relatively low levels of fundamental disagreement among panelists, with most conflicts centered on implementation approaches rather than core objectives. Key areas of disagreement included the timing and scope of AI regulation, specific methods for workforce development, and assessments of displacement speed.

Low to moderate disagreement level among panelists, but more significant tension between panelist optimism and audience concerns about exponential AI development. The implications suggest a need for more robust dialogue between AI industry leaders and broader stakeholders about the pace and scale of AI transformation.

Partial Agreements
Both agree on the need for international standards that can be customized locally, but Lee focuses on the tension between cross-border applicability and cultural customization, while Sachin emphasizes the inadequacy of copy-pasting and the need for continuous adaptation.
Speakers: Lee Tiedrich, Sachin Kakkar
Need for accelerated international standards development while recognizing cultural customization requirements
Importance of localizing global standards rather than copy-pasting regulations, with continuous auditing as AI evolves
Both recognize the need for comprehensive, multi-faceted approaches to skills development, but Amanda focuses on infrastructure and partnerships while Amit emphasizes direct workforce intervention and personal motivation.
Speakers: Amanda Craig Deckard, Amit Chadha
Holistic approach needed including infrastructure, multilingual AI capabilities, and local partnerships to scale skills development
Three-pronged approach: college curriculum updates, upskilling current workforce while billable, and encouraging personal technology development time
Both recognize infrastructure and collaboration challenges for AI deployment, but Lee focuses specifically on data standardization and sharing frameworks, while Amanda emphasizes broader infrastructure needs including connectivity and energy access.
Speakers: Lee Tiedrich, Amanda Craig Deckard
Need for data standardization and voluntary sharing frameworks to enable AI customization for different regions
Digital divide remains a critical challenge requiring infrastructure investment and local community collaboration
Takeaways
Key takeaways
International AI standards must balance global consistency with local customization for different cultures, languages, and market constraints
The AI skills gap requires urgent attention through public-private partnerships, with workforce development needing to be continuous rather than one-time due to rapid technology evolution
India is positioning itself as a front office for global AI development rather than just a back office, focusing on grassroots impact for farmers, schools, and healthcare
AI security requires self-defending systems using AI agents to counter AI-powered attacks, with multilingual robustness being critical to prevent exploitation of language vulnerabilities
Carrot-based approaches (recognition, hackathons, integrated learning) are more effective than mandates for workforce AI adoption and upskilling
The risk of AI economic value concentrating in developed markets can be mitigated through intentional democratization efforts, co-creation approaches, and open-source frameworks
AI development is moving exponentially rather than linearly, creating concerns about rapid economic displacement and the need for enhanced AI literacy
Data standardization and voluntary sharing frameworks are essential for enabling AI customization across different regions and cultures
Resolutions and action items
Microsoft committed to upskilling 20 million Indians by 2030 through its Elevate for Educators program
Google launched IndicGenBench, supporting 29 Indian languages for LLM assessment and fine-tuning
Industry partners agreed to expand the Coalition for Secure AI (CoSAI) in APAC, including India
Development of multilingual jailbreak benchmarks including Indic and Asian languages through ML Commons collaboration
NIST to continue collecting stakeholder input for the zero draft feeding into the ISO standards process
Companies to mandate use of agentic technologies by employees, especially in developing countries
L&T Technology Services to continue its three-pronged approach: college curriculum updates, workforce upskilling during billable hours, and encouraging personal technology development time
Unresolved issues
How to address the speed of AI development potentially outpacing upskilling and transition processes
Managing economic displacement as AI could potentially automate much more than the conservative 60% estimate
Bridging the fundamental digital divide, including last-mile internet connectivity and energy access in rural areas
Information arbitrage between AI pioneers and the general population leading to power polarization
Lack of data standardization and standard agreements for voluntary data sharing across borders
Balancing innovation with appropriate levels of regulation without stifling technological advancement
Addressing the gap between AI governance frameworks and actual grassroots implementation
Suggested compromises
Start with global standards but adapt them to local constraints and capabilities rather than direct copy-pasting
Use bottom-up approach beginning with technical evaluation frameworks before implementing regulation
Combine carrot-based incentives with budget constraints to motivate workforce development without punitive measures
Focus on teaching core analytical and communication skills alongside AI literacy to enable adaptation
Implement continuous auditing and scanning rather than one-time certification as AI systems evolve
Balance global AI safety standards with local customization needs for different languages and cultures
Integrate governance conversations with deployment and impact discussions rather than treating them separately
Thought Provoking Comments
Whatever work we’re doing in engineering consulting today, I want to say 40 to 50% of that is new and built in the last five years, did not exist. I also want to say that whatever we are doing today, 60% will be gone in about three to five years time. That’s the rate and pace of change.
This comment provides a stark quantification of the unprecedented pace of technological change, making abstract concepts of disruption concrete and immediate. It challenges the traditional approach to workforce development and highlights why conventional training methods are insufficient.
This observation fundamentally reframed the skills discussion from incremental upskilling to radical workforce transformation. It led other panelists to acknowledge the urgency of the challenge, with Julian later emphasizing how quickly skills become obsolete and the need for continuous evolution.
Speaker: Amit Chadha
AI that is not robust in its multilingual and multicultural capabilities does have additional security weaknesses… if a model or system is primarily prepared to perform well in high resource languages, but not in low resource languages… an attacker could use that language and jailbreak the system.
This insight brilliantly connects two seemingly separate issues – digital inclusion and cybersecurity – revealing how inequality in AI development creates systemic vulnerabilities. It demonstrates that diversity isn’t just about fairness but about fundamental system security.
This comment elevated the conversation beyond social equity to strategic necessity, showing how multilingual AI capabilities are essential for security. It prompted Sachin to discuss AI-versus-AI defense systems and reinforced the technical imperative for inclusive AI development.
Speaker: Amanda Craig Deckard
First time, with AI, we can reverse the defender’s dilemma… majority of defenders’ time, 80%, goes in drudgery and skunk work. And AI can actually automate and uplift that work.
This reframes AI from a source of new security threats to a potential solution for a fundamental asymmetry in cybersecurity. The ‘defender’s dilemma’ concept provides a clear framework for understanding why AI could be transformative for security rather than just disruptive.
This shifted the security discussion from defensive concerns about AI risks to offensive opportunities for AI solutions. It introduced the concept of ‘self-defending systems’ and positioned AI as potentially advantageous to defenders for the first time in cybersecurity history.
Speaker: Sachin Kakkar
India is no longer the back office for AI. It is actually the front office for AI for the world.
This powerful reframing challenges decades of perception about India’s role in global technology, moving from cost-based services to innovation leadership. It encapsulates a fundamental shift in global AI development dynamics.
This comment served as a capstone to the entire discussion, synthesizing themes about India’s unique approach to grassroots AI implementation. It reinforced the conference’s central theme that developing countries can lead rather than follow in AI development.
Speaker: Amit Chadha
Copy pasting the regulations or standards from international markets to local markets may not always work. So localizing them, understanding the needs and constraints of the local area… We need a continuous scanning and auditing to make sure we avoid any temporal drift.
This challenges the assumption that global standards can be universally applied, introducing the critical concepts of localization and temporal drift in AI governance. It highlights the dynamic nature of AI systems that makes one-time compliance insufficient.
This observation shaped the entire regulatory discussion, leading Lee to emphasize the tension between global standards and local customization. It established the framework for discussing adaptive, culturally-sensitive AI governance throughout the panel.
Speaker: Sachin Kakkar
If we didn’t have foreign workers in the U.S., we would fall behind the rest of the world. You don’t hear that too often.
This candid admission challenges American technological exceptionalism and acknowledges the critical dependence on global talent, particularly from countries like India. It’s a rare moment of vulnerability from a US perspective.
This comment added authenticity to the discussion about global AI collaboration and reinforced arguments about the importance of international partnerships. It validated other panelists’ points about the global nature of AI development and the need for inclusive approaches.
Speaker: Julian Waits
Overall Assessment

These key comments fundamentally shaped the discussion by challenging conventional assumptions and introducing new frameworks for understanding AI development challenges. Amit’s quantification of technological change velocity established urgency that permeated subsequent discussions. Amanda’s connection between multilingual AI and security transformed the inclusion conversation from moral imperative to strategic necessity. Sachin’s insights on localization and the defender’s dilemma provided new conceptual frameworks that other panelists built upon. The cumulative effect was a discussion that moved beyond surface-level policy recommendations to deeper structural insights about AI development, security, and global collaboration. The conversation evolved from addressing AI challenges to reimagining AI opportunities, particularly positioning developing countries as potential leaders rather than followers in responsible AI development.

Follow-up Questions
How can we create voluntary foundations and standardized agreements for data exchange to enable AI customization for different regions?
Lee emphasized that customization of AI for different regions depends on data, but currently there’s no data standardization, no standard agreements for data exchange, and no Creative Commons licenses for data, creating friction and transaction costs
Speaker: Lee Tiedrich
How can we develop Creative Commons-style licenses specifically for data sharing in AI development?
This was identified as a critical gap needed to enable voluntary and responsible data sharing for AI localization across different cultures and regions
Speaker: Lee Tiedrich
What are the specific gaps and transition processes needed to address economic displacement caused by AI’s rapid advancement?
An audience member raised concerns about the speed of AI technology potentially outpacing upskilling efforts and creating economic displacement, asking what gaps need to be addressed in the transition process
Speaker: Audience member
How can we address the information arbitrage and power polarization effects of AI’s exponential growth on democracies?
An audience member expressed concern about the information gap between AI pioneers and others, and the potential polarizing effects on democratic institutions as AI moves exponentially rather than linearly
Speaker: Audience member
What specific strategies can effectively bridge the digital divide beyond infrastructure, particularly for last-mile connectivity in rural areas?
Rita Soni highlighted that while digital divide was mentioned, there wasn’t enough discussion on practical solutions for bridging it, especially regarding basic internet connectivity in rural areas before even considering AI enablement
Speaker: Rita Soni (Audience member)
How can we better measure and track AI diffusion to ensure interventions are working effectively?
Amanda mentioned the need for measuring diffusion to understand if interventions are working and how they can be improved, but this area requires further development and research
Speaker: Amanda Craig Deckard
How can continuous scanning and auditing systems be developed to prevent temporal drift in AI standards and applications?
Sachin noted that one-time audits or certifications may not work as AI evolves, requiring research into continuous monitoring systems
Speaker: Sachin Kakkar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Policymaker’s Guide to International AI Safety Coordination

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel in New Delhi highlighted that the rapid race toward artificial general intelligence is outpacing safety measures, creating significant risks that require coordinated action [1-3]. AI Safety Connect was founded to shape frontier AI safety through global convenings, capacity-building, and trust-building activities such as bi-annual summits and closed-door workshops [6-15]. Stuart Russell emphasized that ensuring safe and ethical AI is both a technical and a governance challenge that demands international coordination because harms cross borders [40-45].


Eileen Donahoe argued that middle-power and global-majority states can leverage pooled resources, market influence, and regulatory innovation to steer AI safety, turning them into “leaders from the middle” [56-64]. Mathias Cormann identified inclusion, objective evidence, and international consistency as key lessons from past consensus work, and pointed to coordinated transparency and incident reporting, such as the Hiroshima Code of Conduct and the emerging Global Partnership on AI incident-reporting framework, as the most critical frontier-AI safety infrastructure [77-84][90-96]. He also noted that the OECD’s principles, now adopted by 50 countries, have informed major policy frameworks and that an open-source safety-tool catalog is being launched to support practical implementation [88-99].


Singapore’s Minister Josephine Teo stressed the need to translate scientific understanding into effective policy, invest in testing and standards, and cooperate through OECD and AI Safety Connect to avoid fragmented governance [104-112][115-122][142-148]. Malaysia’s Gobind Singh Deo highlighted the importance of enforceable agencies and regional institutions, arguing that ASEAN must build capacity, enforce standards, and sustain expertise to manage AI risks across member states [158-166][167-173]. World Bank Vice-President Sangbu Kim said the Bank can help Global South countries design safety architecture from the outset and facilitate knowledge transfer from advanced economies to mitigate emerging AI threats [176-182].


Jann Tallinn warned that the most pressing danger lies in the unchecked race for superintelligence within labs, calling for a slowdown, greater transparency, and public pressure to achieve a de facto prohibition until safety consensus is reached [210-218][223-227]. He further suggested that investors now have limited influence over leading AI firms, which are moving toward IPOs and a level playing field that diminishes private-sector leverage [231-235]. The panelists agreed that a narrow 12- to 24-month window exists before frontier AI outpaces governance, and they prioritized refreshing safety research priorities, developing robust testing tools, and institutionalizing AI-safety governance structures [236-250][251]. Nicolas Miailhe concluded that the coordination gap in AI safety is real and urgent but can be closed through continued global collaboration, inviting participants to the next UN General Assembly session and the fourth AI Safety Connect summit [260-264].


Keypoints


Major discussion points


AI is advancing far faster than safety and governance mechanisms, creating an urgent coordination gap.


Nicolas opened by stressing that “the race towards artificial intelligence is no longer a theoretical pursuit… safety is not keeping pace with it” [1-4]. Eileen later framed the problem as “technology is advancing rapidly and being deployed with minimal guardrails… risk-management processes… are fragmented or insufficiently binding” [56-59]. Stuart added that ensuring safe AI is both a technical and a governance challenge, and that “global coordination is essential because the harms… cross borders” [40-44].


Middle-power and “global-majority” states can and must lead the international AI governance effort.


Eileen argued that “middle powers and global-majority states… can shape the direction of global AI practices” [62-65]. Minister Josephine Teo emphasized that smaller states must translate scientific knowledge into policy and that “the key… is to think about what it takes to translate what we know from science into policy” [102-110]. Gobind Singh Deo highlighted ASEAN-wide coordination, stressing the need for “institutions that can enforce… standards, regulation and legislation” across the region [158-166].


Concrete, shared infrastructure (transparent incident reporting, an international response centre, and open-source safety tools) is essential.


Mathias Cormann identified “coordinated transparency and incident reporting” as the most critical frontier-AI safety infrastructure, citing the Hiroshima Code of Conduct and the emerging GPI AI Common Framework for Incident Reporting, which could evolve into an international incident-response centre [91-98].


Building trust requires deliberate pauses, rigorous testing, and robust institutional capacity.


Cormann called for “pause, test, monitor, audit, share information” to build confidence that systems respect fundamental rights [84-87]. Josephine Teo used the aviation-safety analogy to illustrate the need for extensive research, testing, and interoperable standards before policies can be trusted [124-132]. Jann Tallinn warned that “the cut-throat race… makes it hard to slow down” and urged “transparency… so that more people know what the leaders of AI companies know” [210-218][256].


Funding and investor engagement are currently insufficient but crucial for safety.


Tallinn noted that “investors… are largely absent from the governance conversation” and that their influence has waned as AI firms become IPO-ready [231-236]. Sangbu Kim stressed that “AI safety measures are currently under-invested” and called for dedicated resources to embed safety architecture from the start [178-185].


Overall purpose / goal of the discussion


The panel was convened to “shape the frontier AI safety and secure agenda” and to “identify present-day coordination gaps in the global AI practice… and highlight practical steps policymakers can take” [6][53-57]. It aimed to mobilise governments, industry, academia, and civil-society around concrete governance mechanisms and collaborative actions to mitigate frontier-AI risks.


Overall tone and its evolution


Opening tone: urgent and cautionary, emphasizing the speed of AI progress and the lag in safety [1-5].


Middle segment: collaborative and solution-focused, with speakers presenting frameworks, consensus-building efforts, and concrete proposals [53-57][77-88].


Later segment: more admonitory and urgent, calling for pauses, stricter testing, and institutional reforms [84-87][210-218].


Closing remarks: hopeful and forward-looking, reaffirming the possibility of closing the coordination gap and inviting continued engagement at future summits [260-264].


Thus, the discussion moved from highlighting the problem, through proposing collaborative structures, to urging decisive actions and ending on a constructive, forward-looking note.


Speakers

Nicolas Miailhe – Expertise: AI safety, policy coordination, convening international AI summits; Role/Title: Founder/lead of AI Safety Connect, organizer of AI Safety Connect panels and global convenings.


Stuart Russell – Expertise: Artificial intelligence safety research, AI ethics; Role/Title: Professor of Computer Science, University of California, Berkeley; Director of the International Association for Safe and Ethical AI[S13].


Eileen Donahoe – Expertise: Digital rights, AI governance, venture investing; Role/Title: Founder and Managing Partner of Sympathico Ventures; former U.S. Special Envoy and Coordinator for Digital Freedom; former Ambassador to the UNHCR[S10][S11].


Mathias Cormann – Expertise: International policy, economic cooperation, AI governance frameworks; Role/Title: Secretary-General of the Organisation for Economic Co-operation and Development (OECD)[S8][S9].


Josephine Teo – Expertise: Digital policy, AI safety research priorities; Role/Title: Minister for Digital Development and Information, Government of Singapore[S16].


Gobind Singh Deo – Expertise: Digital regulation, regional AI coordination; Role/Title: Malaysian Minister (Minister for Digital Development and Information)[S21].


Jann Tallinn – Expertise: AI safety advocacy, technology investment, futurism; Role/Title: Founding engineer of Skype; early investor in DeepMind and Anthropic; Co-founder of the Future of Life Institute[S7].


Sangbu Kim – Expertise: Digital public infrastructure, AI resilience for development economies; Role/Title: Vice President for Digital and AI, World Bank[S1].


Osama Manzar – Expertise: Grassroots digital empowerment, AI safety outreach; Role/Title: Co-organizer of AI Safety Connect, representative of the Digital Empowerment Foundation.


Additional speakers:


Prime Minister Dick Schuh – Role/Title: Prime Minister of the Netherlands (guest speaker mentioned in the opening remarks).


Full session report: Comprehensive analysis and detailed insights

The panel opened with Nicolas Miailhe warning that “the race toward artificial intelligence is no longer a theoretical pursuit” and that “billions and maybe trillions now of dollars are getting deployed to push the frontier of artificial intelligence” while “safety is not keeping pace with it” [1-4]. He explained that AI Safety Connect was created to “shape the frontier AI safety and secure agenda” and to “encourage global majority engagement into frontier AI safety” [6-9]. To achieve this, the organisation convenes bi-annual global gatherings – at AI summits in Paris, India and the upcoming meeting in Switzerland, as well as at the UN General Assembly – and supplements these with capacity-building workshops [10-15]. In addition to capacity-building, AI Safety Connect runs trust-building workshops and has convened a closed-door scientific dialogue whose findings will be published shortly [16-19].


The event was co-hosted by the International Association for Safe and Ethical AI (led by Professor Stuart Russell) and the Digital Empowerment Foundation (represented by Osama Manzar). Sponsors included Sympathico Ventures, the Future of Life Institute, and the Mindero Foundation [20-30].


Stuart Russell framed the problem as both a technical and a governance challenge, asking “how do we even build systems that have that property?” and “how do we ensure that those are the systems and only those systems get built?” [40-42]. He stressed that the harms of unsafe AI – “psychological damage to the next generation or loss of human control altogether” – cross borders, making “global coordination … essential” [44-46]. Russell announced the second annual ICI conference in Paris, already attracting over 1,300 participants and hosted at UNESCO headquarters [36-39].


Eileen Donahoe, moderator and founder of Sympathico Ventures, described the policy opportunity as a situation where “technology is advancing rapidly and being deployed with minimal guardrails” while existing risk-management processes are “fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators” [56-59]. She argued that “middle-power and global-majority states … can shape the direction of global AI practices” and that “leading from the middle may turn out to be a more powerful approach than previously anticipated” [62-65]. Her opening set the agenda to identify coordination gaps and practical steps for policymakers.


Mathias Cormann highlighted two key lessons from past consensus work. First, trust is built through “inclusion and on the basis of objective evidence” by bringing together governments, industry, civil society and technical experts [77-80]. Second, the most critical frontier-AI safety infrastructure is “coordinated transparency and incident reporting”, exemplified by the Hiroshima International Code of Conduct and the emerging Global Partnership on AI (GPI) Common Framework for Incident Reporting – a model that could evolve into an “international AI Incident Response Centre” [84-86][91-98]. He noted that the OECD’s principles, now adopted by 50 countries, have informed major policy frameworks such as the EU AI Act and U.S. executive orders, and that the OECD AI Policy Observatory is launching an open-source safety-tool catalogue to make trustworthy AI easier to implement [115-118][88-99].


Minister Josephine Teo of Singapore stressed the difficulty of translating scientific knowledge into effective policy. She argued that “the key … is to think about what it takes to translate what we know from science into policy” and that policies must be judged on “effectiveness” and on the trade-offs they entail [102-112]. Using an aviation-safety analogy, she illustrated the need for extensive research, testing, simulations and interoperable standards before policies can be trusted [119-132], and called for continued cooperation through the OECD, the AI Safety Connect effort and the International AI Safety Report [142-148].


Gobind Singh Deo, Malaysian Minister, focused on the institutional side of governance. He warned that standards and regulations are “useless without an agency that can enforce them” and that “without enforcement mechanisms, policies remain strong on paper but have no real impact” [158-166]. He advocated a dual-track approach – building national capacity first and then scaling cooperation across ASEAN – to ensure that “the conversation persists at an ASEAN level” and that expertise is maintained to meet future AI risks [167-173].


Sangbu Kim, Vice-President for Digital and AI at the World Bank, described how the Bank can help Global South countries embed safety from the design stage. He called for “design the safety architecture within the system” and for close collaboration with advanced-economy firms that run red-team exercises, allowing developing-country clients to learn how to prevent AI-driven attacks [176-182]. He warned that “AI attacks are like a spear that can pierce any shield” but that “we can also build strong protective systems by fully utilizing AI”, underscoring the need for knowledge transfer and investment in safety measures [184-185][196-199].


Jann Tallinn, co-founder of the Future of Life Institute, warned that the most pressing danger lies in the “cut-throat race … inside labs to build superintelligence” [210-214]. He argued that a “global prohibition … until two conditions are met – broad scientific consensus that it can be done safely and controllably, and strong public buy-in” – is the only way to curb this risk, noting that a petition had already gathered over 130,000 signatures [223-227]. Tallinn also observed that “investors … are largely absent from the governance conversation” and that their influence has waned as leading AI firms become IPO-ready, making traditional investor leverage ineffective [231-235]. He later called for “transparency … so that more people know what the leaders of AI companies know” to support a slowdown [256].


When asked what should be prioritised in the next 12-24 months, each panelist offered a distinct focus [240-242].


Josephine Teo said the AI safety research agenda must be refreshed – the “Singapore consensus identified a set, but … will be out of date” – and that “better testing tools” are required to give developers practical assurance [241-250].


Mathias Cormann added that the response must be both rapid (“we have to go as fast as we can to play catch up”) and comprehensive, stressing that there is “no one thing that will make us all safe” and that effort must be “right across the board” [251].


Gobind Singh Deo called for the “institutionalisation” of AI-safety governance structures to keep pace with fast-moving technology [253-256].


Sangbu Kim warned that “AI safety measures are currently under-invested” and urged that resources be allocated to embed safety from the outset [254-255].


In his closing, Nicolas Miailhe reiterated that “the coordination gap frontier in AI safety is real, and it is urgent” but also “closable” [262-264]. He invited participants to the next UN General Assembly session in New York, where the fourth edition of AI Safety Connect will be held, hoping to bring together the policymakers and leaders heard that day [265-267]. Osama Manzar framed AI safety as a fundamental human-protection issue, urging that “we have to save people before you teach people how to think” and calling for strong safeguards to “save human intelligence from artificial intelligence” [272-277].


Overall, the discussion displayed strong consensus that rapid AI advances have outstripped safety and governance, creating a narrow 12-24 month window for decisive action. All speakers agreed on the necessity of global coordination, inclusive multi-stakeholder processes, and the development of practical infrastructure such as transparent incident-reporting frameworks and open-source safety tools. Points of divergence emerged around the role of investors (Donahoe vs. Tallinn), the relative priority of coordinated incident reporting versus extensive testing and standards (Cormann vs. Teo), and whether enforcement agencies or scientific testing should lead implementation (Gobind Singh Deo vs. Teo). Nevertheless, the panel concluded on a hopeful note: closing the coordination gap will require simultaneous investment in transparency, testing, institutional capacity, and political will at both national and international levels, with regional initiatives like the ASEAN AI Safety Network and semi-annual global convenings serving as catalysts for progress [262-277].


Session transcript: Complete transcript of the session
Nicolas Miailhe

that the race towards artificial intelligence is no longer a theoretical pursuit. As billions and maybe trillions now of dollars are getting deployed to push the frontier of artificial intelligence, the technology is now advancing rapidly. And safety is not keeping pace with it. There are wonderful opportunities on the other side of this quest. There are also big risks. And so that’s the purpose, that’s the reason AI Safety Connect was founded. AI Safety Connect is there to help shape the frontier AI safety and secure agenda towards what I would frame as commonsensical AI risk management. AI Safety Connect has been founded to encourage global majority engagement into frontier AI safety. And AI Safety Connect has been created to showcase concrete governance coordination mechanisms, tools, and solutions.

So how do we do this? We convene at each AI summit. So last year we started in Paris, this year in India, next year we’re going to be in Switzerland. But we also convene at the UN General Assembly, right? We need a faster tempo for these safety discussions, so every six months we have this global convening. We also do capacity building, and we also do trust building exercises, at times behind closed doors. Well, this week in New Delhi has been an intense one, an impactful one. On Tuesday we had a full day of panels, conference, solution demonstrations, and closed-door workshop discussions on some specific nuts to crack to advance AI safety. We, for example, had the privilege of hosting Prime Minister Dick Schuh from the Netherlands on stage to deliver a special address on the role of top leadership in advancing AI safety.

We also engage with industry, engage with academia of India and abroad. So it’s been an extremely busy week beside our main event. We had this closed-door discussion that I was mentioning yesterday and today, these closed-door scientific dialogues, which brought together senior industry leaders to discuss shared responsibility for AI safety. We’re going to publish the results soon. Well, obviously, none of this would happen without partnership. And we want to thank our co-hosts, the International Association for Safe and Ethical AI and its director, Professor Stuart Russell, to whom I will hand over the floor in a few minutes, and the Digital Empowerment Foundation, who is anchoring us at the grassroots here with Osama Manzar. We’ll close the session later on.

And we obviously want to thank our sponsors and supporters, starting with Sympathico Ventures. Eileen Donahoe will moderate that panel and we’re thankful for that. The Future of Life Institute, Ima and Yann, who’s been supporting this effort, and the Mindero Foundation, whose team is here as well. And it’s great to have your support and we are thankful for that. So today we’re about to hear from His Excellency Matthias Korman, who’s the Secretary General of the OECD. We’re going to hear from Her Excellency Minister Josephine Teo, who’s the Minister for Digital Development and Information at the Government of Singapore. Thank you for your continuous support, really appreciate that. Same for Jann Tallinn, who’s the AI investor but also a founding engineer at Skype and the co-founder of the Future of Life Institute. And last but not least, we also have Minister Teo who’s going to be with us from Malaysia, Minister for Digital Development and Information. Thank you, Minister, as well as Vice President Kim for Digital and AI at the World Bank. So an extremely important conversation to have. And before we welcome you to the stage, I would like to hand over the floor to Professor Stuart Russell to say a few words and to speak about also what’s happening next week in Paris. Thank you so much.

Stuart Russell

Thank you very much, Cyrus and Nico. So as Nico mentioned, the International Association for Safe and Ethical AI, or ICI, the world’s worst acronym, is a global, democratic, scientific and professional society. We have several thousand members and approaching 200 affiliate organizations. Our mission is to ensure that AI systems operate safely and ethically for the benefit of humanity. And as Nico mentioned, our second annual conference will take place in Paris starting on Tuesday. It’s still, I think, possible to register, but we’re already up over 1,300 people coming. It’s at UNESCO headquarters in Paris. Thank you. So achieving this mission of ensuring that AI systems operate safely and ethically is partly a technical challenge. How do we even build systems that have that property?

But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this panel is mainly about this second challenge. And I think it’s one on which global coordination is essential because the harms, whether it’s psychological damage to the next generation or loss of human control altogether, those harms cross borders. And we must coordinate to make sure that they don’t happen or they don’t originate anywhere. And it’s, I think, fitting that we are having this summit here in India, which has really, among other things, championed the idea that everyone on Earth should have a say. And so with that, I will hand over to Eileen. Thank you very much.

Nicolas Miailhe

Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She's also the former U.S. Special Envoy and Coordinator for Digital Freedom and Ambassador to the UN Human Rights Council. Eileen? Please welcome the speakers: Your Excellency Mr. Mathias Cormann, Mr. Gobind Singh Deo, Ms. Josephine Teo, Mr. Jann Tallinn, and Mr. Sangbu Kim, please join us on stage.

Eileen Donahoe

Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get right to our speakers. So we're here to share views on the opportunity for policymakers to impact international AI governance. As the race towards AGI and superintelligence intensifies, AI safety advocates face a compounding challenge. The technology is advancing rapidly and being deployed with minimal guardrails, while the risk management processes that do exist are either ill-adapted to the magnitude of the risk, fragmented across jurisdictions, or insufficiently binding on developers, deployers, investors, and regulators. The result is an unharmonized governance landscape that fails to shape the behavioral incentives of those building and funding frontier AI. Economies, governments, and societies do not respond well to such mixed signals.

While much of the discourse on frontier AI safety has focused on AI superpowers, there's an urgent need for deeper international diplomacy on the most extreme risks. At this juncture, middle powers and global majority states can't be seen as peripheral actors in this landscape. Through pooled resources, market leverage, normative influence, and regulatory innovation, they can shape the direction of global AI practices and safety. Leading from the middle may turn out to be a more powerful approach than previously anticipated. Whether or not that collective power is exercised now will determine whether international AI governance moves from the rhetorical level to real-world impact on safety. This panel will aim to identify present-day coordination gaps in global AI practice and the global market.

We will also look at the role of global AI in international AI safety and highlight practical steps policymakers can take in the coming months to close those gaps. So to our panel. I'll start with Secretary-General Cormann. The OECD has done remarkable work over the past decade, developing consensus on the OECD principles, providing a definition of AI systems that has resonated internationally, and playing an international role in operationalizing the Hiroshima International Code of Conduct. Along with those foundations, we now have the International AI Safety Report and the Singapore Consensus on Global AI Safety Research Priorities. With these principles, definitions, and frameworks in mind, a two-part question for you. First, what are the key lessons learned from the process of building consensus and then implementing these frameworks?

And then second, looking ahead: what's the most critical piece of coordinated frontier AI safety infrastructure we should be building now? Some have called for an international incident response center, and we're all curious whether you think that should be a priority and is achievable. Just some small, easy questions.

Mathias Cormann

In terms of what is the key to success, the most important lesson looking back: trust is built through inclusion and on the basis of objective evidence. And, you know, I think what we've learned over the last few years is that bringing together all the relevant actors, governments, companies, civil society, technical experts, is what we need to do. Each has a different perspective and different imperatives. Markets reward the private sector for speed, scale, and innovation, while governments must manage risk and protect the public interest without stifling progress. But a challenge, and it's been mentioned in some of the opening remarks, a challenge for policymakers in this context is that AI is moving much faster than policy cycles have traditionally moved, which easily creates gaps between innovation, progress and opportunity on the one hand, and the necessary oversight, mitigation and management of risk on the other.

But all sides in this conversation do share an essential common interest, and that is to ensure that the systems being developed are trustworthy, because without public trust, in the end, even the most powerful AI tools will struggle to gain broad adoption. So that means that occasionally, and it's not always popular with everyone, but occasionally we should slow down. Occasionally we should actually pause. Pause, test, monitor, audit, share information, and take the time and invest in building confidence that these systems can work as intended and respect fundamental rights. So that's, I guess, the first point. Another critical lesson involves international consistency, and this is part of the reason why these sorts of summits are so important: to really facilitate these conversations among countries and among different jurisdictions, because national priorities can vary quite widely, and there are of course fragmentation and compliance-cost-related risks. At the OECD, what we've been doing for six decades now, across different policy areas, is to try and reduce fragmentation by achieving alignment around key principles, building shared evidence and facilitating the necessary conversations to develop a more coherent, better coordinated approach moving forward. And on AI, we've developed the OECD principles, which were first adopted in 2019 and which are now adhered to by 50 countries around the world, and that was really the first globally recognized baseline for trustworthy AI. The OECD's lifecycle definition of an AI

system has since shaped policy frameworks from the EU AI Act to U.S. executive orders. And we've just had, earlier, the meeting of the Global Partnership on AI, co-chaired by Korea and Singapore. We've got the OECD AI Policy Observatory, which covers essentially the broad gamut of the different policy approaches around the world, to provide countries and industries with data and evidence on what's being done, facilitating peer learning, and trying to take some of the politics and the rhetoric out of it and really look at the facts. Now, looking ahead, you asked a question here about what to do about the risk. The most critical piece of frontier AI

safety infrastructure is coordinated transparency and incident reporting. The Hiroshima AI Process Code of Conduct and its reporting framework launched at the AI Action Summit in Paris last year. That's a promising step, and we've got to continue to develop it. Since their publication, 25 organizations across nine countries have already submitted detailed reports on how they manage AI risks, offering for the first time a comparable view of developer practices across jurisdictions. The next stage is to strengthen information sharing on AI failures and near misses. The GPAI common framework for incident reporting aims to help us collectively learn from mistakes before they scale globally, and over time this could evolve into an international AI

Incident Response Center, coordinating alerts between governments and labs without exposing companies to commercial or legal penalties for reporting in good faith. Finally, we do need to scale access to practical safety tools. With global partners, the OECD recently launched an open call for open-source safety and evaluation tools, hosted in the OECD.ai catalogue of tools and metrics, to make trustworthy AI easier to implement in practice. These are some of the initiatives that form the foundation of a more transparent, data-driven, and interoperable AI governance ecosystem.

Eileen Donahoe

Excellent. Minister Teo, a number of questions for you, but let me start with the fact that Singapore occupies a very distinctive position in the global geostrategic landscape as a pro-innovation, advanced knowledge economy with deep commercial and diplomatic ties to both the U.S. and China. As the race to AGI intensifies and bilateral tensions mount, is there a role for Singapore and other middle powers to play in bridging the coordination gap to keep scientific and safety channels open? And also, what's the most important step middle powers can take in the next 12 months to help establish a shared minimum understanding of frontier safety?

Josephine Teo

Well, thank you very much for that question. I think there is no running away from the fact that for smaller states, and that includes Singapore, the technology that our companies and our citizens are going to rely on does not originate from our shores. So it doesn't necessarily come within our jurisdictions, and we don't always get to set the rules. Having said that, I do believe that we're not without agency. It doesn't mean that we take a step back and just let things happen to us. There are still things that we can do. One of the most important things, I think, as policymakers is for us to think about what it takes to translate what we know from science into policy.

And I wanted to just say why this is so important. In our case, as policymakers, the key questions will always be: are the policies that we make effective? And also, policies always come with trade-offs. With the question of effectiveness, there is always a need to understand what actually works, as opposed to what looks good on paper. With the question of trade-offs, it's about understanding what we lose as a result of whatever safety measures we choose to put in place, and whether we can minimize or mitigate those losses. Now, in areas where safety is the objective, we can't just go with gut feeling. We can't just go with speculation. Take, for example, my previous life, when I was working on promoting Singapore's air hub.

And we had to deal with the question of aviation safety. We were expanding our airport. It was going to carry many more passengers in and out of the country. But we were limited by the number of runways, and in land-scarce Singapore, you can't just snap your fingers and say, let's build a new one. It's a long runway; it's very expensive anyway. Then there is the question of what you do when you have these jumbo jets like A380s. Because each time an A380 hits the runway, it creates so much of a blast that you really need to create more distance between the A380 taking off and the next aircraft that is scheduled to take off.

Now, this is not a question that the transport minister can just decide on a whim. Air traffic management has to decide on its policy of how much distance is considered safe between landings, or rather between takeoffs. And to answer this question, you really need to invest in the research. You need to invest in understanding the tests. So the science is one part of it. But to get from science to policy, you are actually going to need a lot of time. You are going to need a lot of tests. You are going to need a lot of simulations. You need to understand whether the distances that you decide are safe work well in a thunderstorm, a tropical thunderstorm.

Do they work just as well in a snowstorm? Well, we don't have snow in Singapore. But think about the airlines that operate these aircraft. If each country that they fly into has a different safety distance, that creates some difficulty. So we therefore think that not only is there a need to invest in understanding the science, and a need to understand what testing, what good testing, looks like, there is also a need for us to think about what standards that will eventually be interoperable look like. Which is why we think that international efforts, the collaboration being carried forward by the OECD through the Global Partnership on AI, the AI Safety Connect effort, and also IASEAI, matter.

Where is Stuart now? Those kinds of efforts, you can't do without. At the outset, there is likely to be a bit of fragmentation. But the trade-off of not having these conversations is that we are not even going to make advances in AI safety. And I don't think that that's a very good place for us to be in. It doesn't give us the assurance that we can deliver to our citizens, and it does not create a foundation of trust that will eventually help us to push ahead with the use of this technology on a wider scale. So that's how we are thinking about it, Eileen. Thank you.

Eileen Donahoe

So let me turn to Minister Gobind from Malaysia, and note that under your leadership and Malaysia's 2025 ASEAN chairmanship, Malaysia succeeded in placing AI at the center of ASEAN's agenda by establishing the ASEAN AI Safety Network. Malaysia is now finalizing its own AI National Action Plan, and Malaysia's AI Governance Bill is expected in Parliament in 2026. So this dual-track approach of building national capacity while leading regional coordination represents a model of middle-power agency that other countries are watching closely. So what lessons do you think other middle powers can draw from Malaysia's experience? And on the ASEAN AI Safety Network, we have to note that it still has to be operationalized, and that will require sustained political will, technical capacity and resources.

So what concrete steps must ASEAN take in the next 12 to 18 months to ensure that this isn’t just aspirational?

Gobind Singh Deo

Online fraud, for example, scams; you have deepfakes today; you have huge concerns about certain vulnerable groups that are going to be impacted, children, older folk and so on and so forth. So this is something that stretches across the region. How do we deal with it in a coordinated way and ensure that the conversation doesn't just stop with the government of the day, but is a conversation that extends over a period of time, with clear policies that we can actually execute? The second layer that I think we need to think about is what happens when there's a need for execution. When we speak about risks in AI and we speak about how we're going to govern these risks, we often talk about standards.

We often talk about regulation. We even speak about legislation at times, for areas that pose higher risks. But ultimately, it really comes back down to making sure you have an agency that can enforce it. Because you can have the best standards, regulations and legislation, but if there is no institution that is really able to implement those standards, to ensure that they are properly implemented, and also to ensure that the rules for failure to implement are enforced, then those standards, regulations and policies are really just going to be strong on paper; they're not going to have the impact that you need. So again, how do you build this mechanism across ASEAN, where every country strengthens itself domestically first, then reaches across to the other ASEAN member states and hopes to learn from their experiences, so that together we can move ahead in this new world of AI and, I think, the threats that we anticipate in future?

Now the third part, which is really important, is ensuring that while all this goes on, you create those policies, you have institutions that enforce them, and the discussions persist at the ASEAN level, you also have the expertise looking at what comes next. We must make sure that our countries are prepared for the risks that are to come with the next generation of technology. This is important because you don't want a situation where new technology is adopted, there are risks that come with it, and you're not prepared. I think that's something we want to avoid, and that's the reason why I come back to where I started: we really need to look at building institutions that have the expertise and, of course, are able to sustain themselves as we go along, and to build and deliver something that's impactful.

Sorry, but that’s in short what we’re doing in Malaysia today.

Eileen Donahoe

Excellent. Thank you so much. Okay. Let me turn to Vice President Kim and talk about the World Bank, which has been at the forefront of digital public infrastructure, helping countries leapfrog legacy systems. We note, though, that frontier AI systems are arriving in the Global South under very different conditions from previous waves of technology, and governments are under pressure to deploy AI systems quickly, often using models that haven't been adequately tested, let alone certified for their contexts, languages, or risk tolerances. So how can the World Bank help Global South countries move from being passive recipients of frontier AI to active shapers of safety and reliability requirements before these systems are deployed at scale?

Sangbu Kim

Thank you. In one word: we definitely need to make our clients well prepared from scratch. When they design AI systems, they definitely need to design the safety architecture within the system. In general, that's correct. But the real challenge is that nobody can really anticipate every new type of threat, and especially for some countries with low capacity, it is really hard to figure out what that threat will be. So in order to tackle that kind of irony and dilemma, we need to work very closely with highly developed economies, companies and governments, and with high-end examples, so that we can connect those good examples to the developing world. Partnership is one good example of how we are helping our countries: some big tech companies, for instance, run red teams that try very hard to attack their own systems in advance, fully utilizing AI.

So through that type of practice and experiment, they can learn how to prevent AI attacks in the future, which is quite possible. So it is inevitable that our developing countries must keep track of new trends and new innovation, even in this safety and protection area. It is the only way. So I have to admit this constraint. But think about this anecdotal story from East Asia, from China and Korea. There are two vendors. One merchant is selling a spear, and he keeps saying that this spear is so strong that it can get through any kind of shield. So that is one vendor. The other vendor is selling a shield.

And he is saying that this shield is one of the safest and strongest shields: no spear can get through it. This is exactly an ironic situation. If you think about AI, the AI attack is the spear. AI is so strong, smart and capable that it can get through and hack any system with high-end intelligence and knowledge. But the good news is that, on the other hand, we can also build strong protective systems by fully utilizing AI. So that is the good news, but the constraint is that we do not clearly know how AI will really evolve, and whether it can fully protect against those big attacks in the future. So in order to solve this kind of ironic situation, from the developing world's point of view and from the World Bank's point of view, the only way is to work very closely, collaborate, and learn from the advanced technology, advanced companies and advanced countries.

Eileen Donahoe

Thank you so much. Last but not least, Mr. Jann Tallinn, you occupy a very rare position in this landscape as a founding engineer of Skype, an early investor in DeepMind and Anthropic, and the co-founder of the Future of Life Institute, which last October released a statement on superintelligence calling for a prohibition on superintelligence development until two conditions are met: number one, broad scientific consensus that it can be done safely and controllably, and second, strong public buy-in. Let's just ask the hard question. What would an effective prohibition look like in practice? How could that work?

Jann Tallinn

Thank you very much. So I think I'm a little bit different from the other people on this panel, in that my main threat vector, my main worries about the future, are less about how AI is being deployed and diffused and taken into practice. I'm way more worried about what is happening in the labs, in the top AI companies. I'm not sure what the future is going to look like, because they are now in a cutthroat race to build something that is smarter than they are. They are in a cutthroat race to build superintelligence. And, I mean, we just saw yesterday the photo where Narendra Modi, Dario Amodei, and Sam Altman refused to link hands.

I mean, this is indicative. We also saw both Dario and Demis Hassabis call for a slowdown in Davos last month. They just can't do it alone. And I think there are two reasons why it's an unfortunate situation. One is that the U.S. as a country is conflicted: it basically relies on AI for its economic and competitive power, so it is very hesitant to meddle with the now cutthroat situation in the AI companies. And the rest of the world really doesn't understand how big the danger now is. So part of the reason why we did the superintelligence statement is to create awareness that there is increasing political demand to do something about this situation.

We now have more than 130,000 signatures, which is many times more than our original six-month pause letter had in 2023. So if there were enough pressure, I think clearly the rest of the world is still more powerful than the leading AI countries: there are more people, there's more economic power, et cetera. So if there were enough pressure, this could be solved. The way I put it is that it's super hard to do a $10 billion project; it's impossible to do it if it's illegal. So having these trillions flow into AI actually makes it easier to govern, not harder.

Eileen Donahoe

So I’m tempted to follow up with a question about investors and their potential role in this. They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation. So what would it take to bring investors meaningfully into the safety conversation?

Jann Tallinn

So, yeah, I think the answer is fairly simple. I don't think investors play much of a role anymore, because the leading AI companies are now above the level where private investors can influence them. They will IPO soon. And in an IPO market, there is a level playing field, which means that if somebody's not funding, somebody else will. So I don't think investors can affect things now; investors could have affected things five or ten years ago.

Eileen Donahoe

Great. Okay, so since we're running short on time, I'm going to ask one question and ask you all to answer it, each very briefly, which is about the 12-month window. Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months, before frontier AI capabilities advance beyond our ability to evaluate and govern them. So what would each of you recommend be prioritized in basically the next year to two years to enhance safety and security?

Josephine Teo

I think there are two, really. I think the AI safety research priorities need to be refreshed, because the field has moved so quickly. The Singapore Consensus identified a set, but as soon as they were published, we recognized that they would be out of date. So we need to refresh them; that's why we're going to have a second edition worked on, hopefully in a few months. The second thing, I think, is that we can't just keep thinking about frameworks and guidelines. At some point, we need to be able to introduce better testing tools. And until we are able to do so, the companies that are developing and deploying AI models also don't have a very practical way of giving assurance.

So I'd like to see some further advancement in those two areas in the next 12 months.

Mathias Cormann

I'll be really quick. I know there's always a temptation in these sorts of conversations to ask, what is the one thing that can fix it all? And the truth is, there's not one thing. We've got to go as fast as we can, to play catch-up to a degree, but we've also got to go as comprehensive and as deep as we can. There's just no alternative. There's catch-up to be played; we've got to put in a real effort, and it's got to be right across the board. And I don't think you can just say there's one thing that will make us all safe and it's going to be okay.

Eileen Donahoe

Minister Gobind?

Gobind Singh Deo

I think, as I said earlier, we need to start thinking about how we can build structures and perhaps institutionalize this entire conversation about building security around AI and its governance. In this regard, we have to understand that things are going to move very quickly, and you're going to see new technology develop very fast, which brings new risks as well. So you've got to build something that's sustainable, and I think, in order to do that, institutionalizing it should be a priority.

Sangbu Kim

Everyone is really rushing into AI system development and AI solution development. That means AI safety measures are currently under-invested. So I really would like to urge all of us to remember that this is not free; we need to spend some money to protect the system in advance, from scratch, when we design the system. That means we should allocate some money to fully invest in safety.

Eileen Donahoe

Jann Tallinn?

Jann Tallinn

Slow down. We really need to slow down; the companies are asking for it. And instrumental to that would be transparency: more people should know what the leaders of AI companies know, in order to understand how crucial the slowdown now is.

Eileen Donahoe

Okay, great. Well, I believe we have a close coming, and thank you all so much. I wish we had had a day to talk about all of these issues. But thank you so much. Thank you very much.

Nicolas Miailhe

Thank you very much, Eileen, and this fantastic panel, excellencies, colleagues, friends. What we've heard today confirms something important: the coordination gap in frontier AI safety is real, and it is urgent. And, as we've discussed today, it is closable. Before I hand over the floor to Osama Manzar to close with a few minutes of remarks and reflection, I'd like to invite you all to the next edition of the United Nations General Assembly in New York, where we hope to organize the fourth edition of AI Safety Connect, hopefully with many of the great policymakers and leaders we have heard from today, to carry forward this collective effort. Osama, the floor is yours.

Osama Manzar

Well, thank you very much. And we are one of the absentee co-organizers of this one, you know, being a local. But apart from thanking each one of you who didn't get up and, you know, walk out of the room, and every one of you who gave all the safety remarks about the use of AI, I want to speak on behalf of the 40 million people we have reached in the last 23 years, and the billions of other people we are going to work for. I want to suggest that the entire safety aspect of AI should be more about: please save people from AI. Right? Because that's what safety means; it's like a car on the road.

You know, we have to save people before you teach people how to think. So we also have to hold on to one very, very strong question: how do we save human intelligence from artificial intelligence? And how do we build the safeguards and all the ethics into all the, you know, policy playbooks? Thank you very much.

Hard power of AI — The analysis comprises multiple arguments related to technology, politics, and AI. One argument suggests that the rapid …
S32
WS #97 Interoperability of AI Governance: Scope and Mechanism — Rapid technological advancement poses challenges for governance frameworks to keep pace
S33
Keynote-Nikesh Arora — The central thesis of Arora’s presentation revolves around a critical imbalance in AI development priorities. He argues …
S34
AI for Democracy_ Reimagining Governance in the Age of Intelligence — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S35
Future of International Cyber Diplomacy: Comprehensive Discussion Report — Cybersecurity | Development There is a need for practical tools that help with cooperation and incident response, espec…
S36
Successes & challenges: cyber capacity building coordination | IGF 2023 — Claire Stoffels:Thank you, Enrico. Hello, everyone. My name is Claire Stoffels. I’m the Digital for Development focal po…
S37
Building Trust through Transparency — Furthermore, the corruption index is critiqued for its failure to incorporate the element of trust in its assessment of …
S38
HIGH LEVEL LEADERS SESSION I — Capacity building for policy oversight and management of partnerships is considered crucial. Government institutions nee…
S39
FIRST SECTION — 13. As a result of these changes, the potential exposure nowadays of a vast range of communications and other online ac…
S40
TABLE OF CONTENTS — Ensuring sufficient investment and funding in for the policies and strategies outlined in the Policy will be critical to…
S41
Limiter l’utilisation à des fins malveillantes des menaces et vulnérabilités dans les TIC — Recent trends in ICT threats and vulnerabilities reveal significant risks to national and international security. Attack…
S42
Policymaker’s Guide to International AI Safety Coordination — Russell argues that global coordination on AI safety is essential because the potential harms, whether psychological dam…
S43
Advancing Scientific AI with Safety Ethics and Responsibility — Summary:The speakers demonstrated strong consensus on several key areas: the need for context-specific governance framew…
S44
Policymaker’s Guide to International AI Safety Coordination — But also a governance challenge. How do we ensure that those are the systems and only those systems get built? And this …
S45
Setting the Rules_ Global AI Standards for Growth and Governance — Various jurisdictions acknowledge new risks with frontier AI and express concern for citizen safety, but they delegate t…
S46
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Owen Later:Fantastic. Hello, is this on? Can people hear me? Excellent, thank you. Good morning, everyone. My name is Ow…
S47
AI That Empowers Safety Growth and Social Inclusion in Action — “investors should ask whether there is clear board level responsibility on AI risk whether executive incentives are alig…
S48
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Minister Teo outlines the three key components necessary for a robust AI assurance ecosystem. Technical testing ensures …
S49
World Economic Forum® — (Figure 4.4). This risk captures the inability to efficiently govern a nation, which is caused by or results in factors …
S50
HIGH LEVEL LEADERS SESSION I — Another key point highlighted in the discussions was the need for dialogue and consensus on data flow. Data has become t…
S51
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Capital flows are driven by opportunities and returns rather than geopolitical constraints The Prime Minister argued th…
S52
Use Case: Flow Management — Considering the sensitivity of biometric data, the use of facial recognition is intrinsically risky. This is true even w…
S53
Blended Finance’s Broken Promise and How to Fix It / Davos 2025 — Need for clear policy direction and investment frameworks Renaud-Basso emphasizes the importance of clear policy direct…
S54
How Trust and Safety Drive Innovation and Sustainable Growth — Disagreement level:The level of disagreement is moderate but significant for policy implications. While all speakers agr…
S55
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — Modern regulation requires innovative approaches including data-driven regulation and regulatory sandboxes for experimen…
S56
Future of International Cyber Diplomacy: Comprehensive Discussion Report — Chat comments emphasizing role of regional organizations in uniting fragmented positions Coordination with other UN pro…
S57
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/3/OEWG 2025 — Mauritius: Distinguished Chair and colleagues, good morning. The effective implementation of confidence-building measu…
S58
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S59
AI as critical infrastructure for continuity in public services — Inclusive multi‑stakeholder governance & trust Inclusivity of all affected stakeholders creates legitimacy and trust. T…
S60
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S61
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — In conclusion, the analysis highlights the importance of multi-stakeholder engagement in policy processes, with specific…
S62
Keynote-Nikesh Arora — The central thesis of Arora’s presentation revolves around a critical imbalance in AI development priorities. He argues …
S63
WS #97 Interoperability of AI Governance: Scope and Mechanism — Rapid technological advancement poses challenges for governance frameworks to keep pace
S64
Laying the foundations for AI governance — Technology moves very fast while governance moves much slower, creating a fundamental pacing problem
S65
Policymaker’s Guide to International AI Safety Coordination — Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get righ…
S66
Policymaker’s Guide to International AI Safety Coordination — Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get righ…
S67
AI for Democracy_ Reimagining Governance in the Age of Intelligence — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S68
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Tomiwa Ilori:Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiat…
S69
Global South pushes for digital inclusion — At the2025 Internet Governance Forumin Lillestrøm, Norway, global leaders, youth delegates, and digital policymakers con…
S70
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240/ OEWG 2025 — Mozambique: Mr. Chair, at the outset, Mozambique delegation would like to commend you, Mr. Chair, and your dedicated t…
S71
WS #190 Securing critical infrastructure in cyber: Who and how? — Regional and international cooperation was proposed as a way to address these challenges. Participants suggested creatin…
S72
Successes & challenges: cyber capacity building coordination | IGF 2023 — However, building trust is challenging due to the presence of different policy fields and institutions. Luxembourg, perc…
S73
Networking Session #37 Mapping the DPI stakeholders? — Mwape argues that building trust between government and civil society requires transparent processes, open dialogue, and…
S74
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Speaker 1:Yeah, thank you. Thank you, Lea, and thank you for the opportunity to be here. And I believe I will actually s…
S75
HIGH LEVEL LEADERS SESSION I — Capacity building for policy oversight and management of partnerships is considered crucial. Government institutions nee…
S76
Safe Surfing: Understanding Child Online Activity — Funding is crucial for making progress in child online safety. It can be utilized to raise awareness through campaigns a…
S77
(Interactive Dialogue 1) Summit of the Future – General Assembly, 79th session — Civil Society 3 Morningstar Sustainalytics: Excellencies, private sector and CSOs sitting behind me, and these United N…
S78
Towards a Safer South Launching the Global South AI Safety Research Network — Safety is not adequately integrated into companies’ financial planning and cost structures, creating insufficient incent…
S79
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Multi-stakeholder engagement including civil society, academia, governments, and external experts is crucial for effecti…
S80
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S81
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S82
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S83
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S84
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S85
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S86
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — The discussion maintained a formal, academic tone throughout, characteristic of a research presentation or conference se…
S87
Media and Education for All: Bridging Female Academic Leaders and Society towards Impactful Results — The discussion maintained a consistently professional and collaborative tone throughout. It was informative and solution…
S88
Dynamic Coalition Collaborative Session — The speakers’ emphasis on urgency, particularly regarding post-quantum cryptography migration, reflects the reality that…
S89
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — The tone was consistently optimistic and forward-looking throughout the conversation. Speakers expressed excitement abou…
S90
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S91
The sTaTe of The — Humanitarian evaluations, at least those on policy and system issues, tend to be unflinchingly self-critical, which has …
S92
Executive summary — The challenges parliaments faced were not matters of simply adopting technology. Many were strategic and needed to be ad…
S93
HIGH LEVEL LEADERS SESSION IV — This indicates the recognition that companies have a role to play in shaping policies and providing examples of good pra…
S94
Any other business /Adoption of the report/ Closure of the session — Despite the hard work acknowledged, the Dominican Republic’s representative noted that their delegation’s expectations w…
S95
Closing remarks — Summit Organization and Future Planning
S96
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S97
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S98
World Economic Forum Annual Meeting Closing Remarks: Summary — Fink strategically chose this quote to close the entire forum, making it the final thought participants would carry with…
S99
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S100
9821st meeting — 2. Creation of an International Scientific Panel on Artificial Intelligence President, Excellencies, in today’s race fo…
S101
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S102
Alignment Project to tackle safety risks of advanced AI systems — The UK’s Department for Science, Innovation and Technology (DSIT) hasannounced a new international research initiativeai…
S103
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Thank you for inviting me to this important summit. It is an honor to be here in India at this pivotal moment for global…
S104
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Tigran Karapetyan: at work. And as we know, the AI is here to stay. It’s not going to go away. It’s there already. So it…
S105
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — 1. Chris Martin: Head of policy innovation at Access Partnership 4. Thiago Moraes: Facilitator of the interactive exerc…
S106
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Collaborations between the public, private, and academic sectors in research and drug discovery are cited as successful …
S107
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Moraes Thiago: Yeah. Hello, everyone. And thanks, Sophie, for the invitation, the invitation. It has been very nice to b…
S108
Webinar session — Catalina Vera Toro: Well, let’s be hopeful. So in March, we shall all reconvene. There will be election of chair, right?…
S109
https://app.faicon.ai/ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — Thank you, Stuart. So Dr. Eileen Donahoe is the founder and managing partner of Sympathico Ventures. She’s also the form…
S110
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S111
How African knowledge and wisdom can inspire the development and governance of AI — H.E Muhammadou M.O. Kah:Thank you so much, and good afternoon. And apologies, I was somewhere else, being pulled in anot…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Nicolas Miailhe
3 arguments | 149 words per minute | 812 words | 325 seconds
Argument 1
AI safety is lagging behind rapid AI development; coordinated convenings and capacity‑building are essential – Nicolas Miailhe
EXPLANATION
Nicolas stresses that the rapid deployment of AI resources has outpaced safety efforts, creating significant risks. He argues that regular global gatherings and capacity‑building activities are needed to keep safety discussions moving at the same speed as technology development.
EVIDENCE
He notes that billions and possibly trillions of dollars are being invested in AI while safety is not keeping pace, describing the race towards AI as no longer theoretical and highlighting the need for faster safety discussions through semi-annual global convenings held at AI summits and the UN General Assembly [1-4][10-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The guide notes that the AI race is no longer theoretical, with billions invested and safety lagging, calling for faster, coordinated convenings at AI summits and the UN General Assembly to close the gap [S3][S6].
MAJOR DISCUSSION POINT
Need for accelerated, recurring global safety forums
AGREED WITH
Eileen Donahoe, Josephine Teo, Mathias Cormann
Argument 2
AI Safety Connect will continue rapid, semi‑annual global convenings (Paris, India, Switzerland, UNGA) to keep the safety agenda moving forward – Nicolas Miailhe
EXPLANATION
Nicolas outlines the schedule of AI Safety Connect events, emphasizing that the initiative will maintain momentum by meeting twice a year in different locations, including major international venues. This continuity is presented as a way to ensure ongoing coordination among stakeholders.
EVIDENCE
He describes past and upcoming convenings: last year in Paris, this year in India, next year in Switzerland, and additional meetings at the UN General Assembly, with a six-month cadence for global convenings [10-15][12-14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The coordination guide describes a six-month cadence of global convenings held in Paris, India, Switzerland and at the UN General Assembly, matching the proposed schedule [S3][S6].
MAJOR DISCUSSION POINT
Ongoing semi‑annual AI safety convenings
AGREED WITH
Mathias Cormann, Stuart Russell, Eileen Donahoe, Josephine Teo, Gobind Singh Deo
Argument 3
The next UN General Assembly session will host the fourth AI Safety Connect edition, extending the collaborative effort to a broader set of policymakers – Nicolas Miailhe
EXPLANATION
Nicolas invites participants to the upcoming UN General Assembly where the fourth edition of AI Safety Connect will be held, aiming to broaden participation and deepen policy coordination. He frames this as a continuation of the effort to close the coordination gap.
EVIDENCE
He mentions the invitation to the UN General Assembly in New York for the fourth AI Safety Connect, highlighting the intention to bring together many policymakers and leaders for collective action [260-265].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The fourth edition of AI Safety Connect is slated for the UN General Assembly, as highlighted in the policy briefing [S6].
MAJOR DISCUSSION POINT
UNGA as platform for AI safety collaboration
Stuart Russell
2 arguments | 119 words per minute | 250 words | 125 seconds
Argument 1
AI safety requires both technical solutions and governance; harms cross borders, making global coordination indispensable – Stuart Russell
EXPLANATION
Stuart points out that ensuring AI safety is not only a technical problem but also a governance challenge, because the potential harms affect people worldwide. He argues that coordinated international action is essential to prevent or mitigate these cross‑border risks.
EVIDENCE
He describes safety as a technical challenge (building safe systems) and a governance challenge (ensuring only safe systems are built) and stresses that harms such as psychological damage or loss of human control cross borders, requiring global coordination [39-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Russell stresses that AI harms such as psychological damage or loss of human control cross national borders, making global coordination essential [S6][S3].
MAJOR DISCUSSION POINT
Technical and governance dimensions of AI safety
AGREED WITH
Mathias Cormann, Nicolas Miailhe, Eileen Donahoe, Josephine Teo, Gobind Singh Deo
Argument 2
The second annual IASEAI conference in Paris (at UNESCO) will gather over 1,300 participants to advance safe and ethical AI practices – Stuart Russell
EXPLANATION
Stuart promotes the upcoming ICI conference, noting its large expected attendance and prestigious venue, as a key event for advancing the mission of safe and ethical AI. He positions the conference as a platform for sharing knowledge and fostering coordination.
EVIDENCE
He mentions that the second annual conference will take place in Paris at UNESCO headquarters, with registration still open and already over 1,300 participants expected [36-39].
MAJOR DISCUSSION POINT
Large‑scale AI safety conference
Eileen Donahoe
2 arguments | 122 words per minute | 1101 words | 539 seconds
Argument 1
Current governance is fragmented and ill‑adapted; middle powers must help close coordination gaps and shape real‑world impact – Eileen Donahoe
EXPLANATION
Eileen describes the existing AI governance landscape as disjointed, with risk‑management processes that are either unsuitable for the scale of risk or scattered across jurisdictions. She calls on middle powers to use their pooled resources and influence to bridge these gaps and translate policy into tangible safety outcomes.
EVIDENCE
She notes that AI is advancing rapidly with minimal guardrails, while risk-management is fragmented, ill-adapted, or insufficiently binding, leading to an unharmonized governance landscape that fails to shape incentives for developers and funders [56-60]. She also emphasizes the urgent need for deeper international diplomacy and the role of middle powers in shaping global AI practices through pooled resources and normative influence [61-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The briefing highlights an urgent coordination gap and fragmented governance, emphasizing the need for middle-power collaboration to bridge these gaps [S6][S28].
MAJOR DISCUSSION POINT
Middle powers as bridge builders in AI governance
AGREED WITH
Josephine Teo, Gobind Singh Deo, Mathias Cormann
Argument 2
Future panels and follow‑up reports (e.g., Singapore Consensus, OECD AI Safety Report) will guide policymakers on immediate steps within the next 12‑24 months – Eileen Donahoe
EXPLANATION
Eileen highlights that upcoming reports and consensus documents will provide concrete guidance for policymakers, helping them to act within a short time horizon. She frames these outputs as essential for moving from rhetoric to real‑world safety impact.
EVIDENCE
She outlines the panel’s aim to identify present-day coordination gaps and to highlight practical steps policymakers can take in the coming months, referencing the Singapore Consensus and the International AI Safety Report as examples of such guidance [66-71].
MAJOR DISCUSSION POINT
Guidance documents for near‑term policy action
AGREED WITH
Mathias Cormann, Josephine Teo, Sangbu Kim
Mathias Cormann
4 arguments | 145 words per minute | 864 words | 356 seconds
Argument 1
Trust is built through inclusive, evidence‑based processes; international consistency and shared incident‑reporting frameworks are the most critical frontier infrastructure – Mathias Cormann
EXPLANATION
Mathias argues that trust among stakeholders emerges when inclusion is paired with objective evidence, and that the most urgent infrastructure need is coordinated transparency and incident reporting. He stresses that shared data and consistent standards are essential for trustworthy AI.
EVIDENCE
He states that trust is built through inclusion and objective evidence, which means bringing together governments, companies, civil society, and technical experts. International consistency reduces fragmentation, as shown by the OECD principles adopted by 50 countries and the lifecycle definition that shaped policies such as the EU AI Act and US executive orders [77-84][87-88]. He further identifies coordinated transparency and incident reporting as critical, referencing the Hiroshima AI Code of Conduct, its reporting framework, and the GPI AI Common Framework for Incident Reporting, which already has 25 organizations submitting reports and could evolve into an international incident-response centre [91-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cormann explains that trust arises from inclusive, evidence-based processes and that coordinated transparency and incident-reporting frameworks are the most critical safety infrastructure [S6][S8].
MAJOR DISCUSSION POINT
Incident reporting as core trust mechanism
AGREED WITH
Nicolas Miailhe, Stuart Russell, Eileen Donahoe, Josephine Teo, Gobind Singh Deo
DISAGREED WITH
Josephine Teo
Argument 2
Coordinated transparency and incident reporting (e.g., Hiroshima AI Code of Conduct) are foundational; an international incident‑response centre should evolve from these mechanisms – Mathias Cormann
EXPLANATION
Mathias reiterates that transparent reporting of AI incidents is a foundational step toward global safety, and proposes that an international incident‑response centre could be built on existing reporting frameworks. He sees this as a way to share lessons and prevent scaling of failures.
EVIDENCE
He describes the Hiroshima AI Process Code of Conduct and its reporting framework launched at the AI Action Summit in Paris, noting that 25 organizations across nine countries have already submitted detailed risk-management reports, and outlines the next stage of strengthening information sharing on failures and near-misses through the GPI AI Common Framework, which could evolve into an international incident-response centre [91-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Hiroshima AI Code of Conduct and the GPI incident-reporting framework are cited as foundational steps toward an international AI incident-response centre [S6].
MAJOR DISCUSSION POINT
Evolution toward an AI incident‑response centre
AGREED WITH
Josephine Teo, Eileen Donahoe, Sangbu Kim
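The reporting mechanism described above, in which organizations file structured risk-management reports that a central body aggregates to spot recurring failure modes across jurisdictions, can be sketched minimally. Every field name and value below is a hypothetical illustration, not the actual schema of the Hiroshima reporting framework or the GPI common framework:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class IncidentReport:
    """One structured report, a hypothetical stand-in for the kind of
    record a common incident-reporting framework might collect."""
    organization: str
    country: str
    system: str
    severity: str      # e.g. "near-miss", "harm", "critical" (invented labels)
    description: str

def aggregate_by_severity(reports):
    """Tally reports by severity so an aggregating body can see which
    failure modes recur across submitting organizations."""
    return Counter(r.severity for r in reports)

reports = [
    IncidentReport("LabA", "JP", "chatbot-v2", "near-miss",
                   "jailbreak found during red-teaming"),
    IncidentReport("LabB", "FR", "agent-1", "harm",
                   "automated action taken on wrong account"),
    IncidentReport("LabC", "JP", "chatbot-v2", "near-miss",
                   "prompt injection via retrieved web content"),
]

print(aggregate_by_severity(reports))  # Counter({'near-miss': 2, 'harm': 1})
```

The value of the shared schema is exactly this aggregation step: once reports from many jurisdictions use the same fields, an international centre can compare like with like instead of reconciling incompatible national formats.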
Argument 3
The OECD’s long‑standing work on principles, definitions, and policy observatories demonstrates how regional bodies can reduce fragmentation and provide shared evidence – Mathias Cormann
EXPLANATION
Mathias highlights the OECD’s decades‑long experience in creating AI principles, definitions, and policy observatories that have been adopted by many countries, showing how a regional organization can harmonize standards and supply evidence‑based guidance. He presents this as a model for reducing fragmentation.
EVIDENCE
He explains that the OECD developed principles first adopted in 2019 and now adhered to by 50 countries, that its lifecycle definition shaped the EU AI Act and US executive orders, and that the OECD AI Policy Observatory aggregates global policy approaches to provide data, evidence, and peer-learning, thereby reducing fragmentation and compliance costs [87-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The OECD’s AI principles adopted by 50 countries, its lifecycle definition influencing the EU AI Act, and its AI Policy Observatory illustrate how regional bodies can harmonise standards and supply evidence-based guidance [S8][S9].
MAJOR DISCUSSION POINT
OECD as model for regional AI governance
AGREED WITH
Eileen Donahoe, Josephine Teo, Gobind Singh Deo
Argument 4
Open‑source safety and evaluation tools, catalogued by the OECD, help make trustworthy AI practicable for a wider audience – Mathias Cormann
EXPLANATION
Mathias points out that the OECD has launched an open call for open‑source safety and evaluation tools, which are hosted in an OECD.ai catalog, making it easier for developers and regulators to implement trustworthy AI. He frames this as a practical step toward broader adoption of safety measures.
EVIDENCE
He mentions the OECD’s recent open call for open-source safety and evaluation tools, which are now hosted in the OECD.ai catalog of tools and metrics, intended to make trustworthy AI easier to implement in practice [98-99].
MAJOR DISCUSSION POINT
Open‑source tools to democratize AI safety
Josephine Teo
2 arguments | 143 words per minute | 889 words | 371 seconds
Argument 1
Translating scientific knowledge into effective policy and interoperable standards demands international collaboration and avoids fragmentation – Josephine Teo
EXPLANATION
Josephine stresses that policymakers must bridge the gap between scientific research and policy, ensuring that standards are interoperable and that fragmented national rules do not hinder safety. She argues that international collaboration is essential to achieve this translation.
EVIDENCE
She explains that smaller states like Singapore rely on technology developed elsewhere and must therefore translate scientific findings into policy. She emphasizes the need for effective, evidence-based policies, testing, simulations, and interoperable standards, and cites the OECD's work and the Global Partnership on AI as examples of the international collaboration required [110-118][142-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Teo stresses the need to translate scientific findings into policy through international collaboration, referencing OECD work and the Global Partnership on AI as mechanisms to avoid fragmented national rules [S6][S8].
MAJOR DISCUSSION POINT
Science‑policy translation for AI safety
AGREED WITH
Eileen Donahoe, Gobind Singh Deo, Mathias Cormann
Argument 2
Effective policy requires rigorous testing, simulations, and standards that work across diverse environments; investment in research and interoperable metrics is needed – Josephine Teo
EXPLANATION
Josephine argues that policies must be grounded in thorough testing and simulation across varied conditions (e.g., different weather scenarios) and that investment in research and interoperable metrics is crucial. She uses aviation safety analogies to illustrate the complexity of establishing safe standards.
EVIDENCE
She recounts her experience with Singapore’s air hub, describing how safety distances for A380 take-offs required extensive research, testing, and simulations across different weather conditions, and notes that differing national safety distances create challenges, underscoring the need for interoperable standards and international collaboration through bodies like the OECD and AI Safety Connect [119-138][142-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Teo uses an aviation safety analogy to illustrate that robust AI policy must be grounded in extensive testing, simulations across varied conditions, and interoperable standards, underscoring the need for research investment [S6].
MAJOR DISCUSSION POINT
Testing and standards for robust AI policy
AGREED WITH
Mathias Cormann, Eileen Donahoe, Sangbu Kim
DISAGREED WITH
Gobind Singh Deo
Gobind Singh Deo
2 arguments · 174 words per minute · 535 words · 183 seconds
Argument 1
ASEAN’s AI Safety Network illustrates a dual‑track approach (national capacity + regional coordination); sustained political will and enforcement institutions are vital – Gobind Singh Deo
EXPLANATION
Gobind describes ASEAN’s strategy of building national AI capacity while simultaneously coordinating regionally through the ASEAN AI Safety Network. He emphasizes that lasting political commitment and strong enforcement agencies are necessary for the network to move beyond rhetoric.
EVIDENCE
He notes that under Malaysia’s leadership ASEAN placed AI at the centre of its agenda, creating the ASEAN AI Safety Network and a national AI Action Plan, and stresses that operationalising the network will require sustained political will, technical capacity, and resources, as well as institutions capable of enforcing standards [158-166]. He further stresses the need for agencies that can enforce policies and maintain ongoing discussions at the ASEAN level [167-173].
MAJOR DISCUSSION POINT
Dual‑track ASEAN AI safety model
AGREED WITH
Eileen Donahoe, Josephine Teo, Mathias Cormann
Argument 2
Building enforceable agencies and institutional mechanisms is essential; without them, standards and regulations remain ineffective on paper – Gobind Singh Deo
EXPLANATION
Gobind argues that having standards, regulations, or legislation is insufficient unless there are agencies with the authority and capacity to enforce them. He calls for the creation of institutional mechanisms that can sustain AI safety governance over time.
EVIDENCE
He explains that effective governance requires agencies that can enforce standards and regulations, warning that without enforcement the rules will remain merely on paper, and calls for building institutions with expertise that can sustain and deliver impactful AI safety measures [162-166][167-173].
MAJOR DISCUSSION POINT
Need for enforcement institutions
AGREED WITH
Mathias Cormann, Nicolas Miailhe, Stuart Russell, Eileen Donahoe, Josephine Teo
DISAGREED WITH
Josephine Teo
Sangbu Kim
1 argument · 112 words per minute · 525 words · 280 seconds
Argument 1
Embedding safety architecture from the design stage and partnering with advanced economies/companies enables developing nations to adopt robust protection measures – Sangbu Kim
EXPLANATION
Sangbu stresses that AI safety must be built into systems from the outset and that low‑capacity countries should collaborate with advanced economies and tech firms to learn best practices. He suggests that such partnerships can help developing nations keep pace with emerging threats.
EVIDENCE
He states that safety architecture should be designed from the start, cites collaboration with big-tech companies running red-team exercises to anticipate attacks, and uses an analogy of a spear (AI attack) and shield (defensive system) to illustrate the need for strong protective measures built with AI itself, emphasizing the importance of close cooperation with advanced partners [178-185][186-200].
MAJOR DISCUSSION POINT
Design‑time safety and advanced‑partner collaboration
AGREED WITH
Mathias Cormann, Josephine Teo, Eileen Donahoe
Jann Tallinn
3 arguments · 143 words per minute · 517 words · 216 seconds
Argument 1
The massive flow of capital into frontier AI creates both risk and leverage; strong public pressure and consensus are prerequisites for any effective prohibition of superintelligence development – Jann Tallinn
EXPLANATION
Jann argues that the huge investment in AI both amplifies risk and provides a lever for regulation, but any prohibition of superintelligence must be backed by broad scientific consensus and widespread public support. He sees public pressure as essential to compel action.
EVIDENCE
He references the Future of Life Institute’s statement calling for a prohibition on superintelligence until there is scientific consensus on safe controllability and strong public buy-in, noting over 130,000 signatures supporting the call and emphasizing that political demand is growing [207-226].
MAJOR DISCUSSION POINT
Public pressure as condition for AI prohibition
AGREED WITH
Nicolas Miailhe, Stuart Russell, Eileen Donahoe, Mathias Cormann, Josephine Teo, Gobind Singh Deo, Sangbu Kim
Argument 2
Investors’ influence has waned as leading AI firms become too large and IPO‑driven; traditional investor leverage is insufficient for shaping safety incentives today – Jann Tallinn
EXPLANATION
Jann observes that the biggest AI companies are now beyond the reach of typical private investors, especially as they move toward IPOs, reducing the ability of investors to affect safety decisions. He suggests that investor influence was more effective a decade ago.
EVIDENCE
He explains that leading AI companies are now above the level where private investors can influence them, that they will soon IPO, and that the level playing field of the IPO market diminishes investor impact, noting that investors could have mattered five to ten years ago but not now [231-235].
MAJOR DISCUSSION POINT
Diminished investor leverage in AI
DISAGREED WITH
Eileen Donahoe
Argument 3
Prohibition would require broad scientific consensus on safe controllability and widespread public buy‑in; transparency about lab‑level risks is key to generating that pressure – Jann Tallinn
EXPLANATION
Jann reiterates that any effective ban on superintelligent AI must rest on a solid scientific foundation and mass public endorsement, which can only be achieved through transparent disclosure of risks within AI labs. He links transparency to building the necessary consensus.
EVIDENCE
He notes that the Future of Life Institute’s statement calls for prohibition until there is broad scientific consensus that superintelligence can be developed safely and that there is strong public buy-in, and stresses that increasing political demand stems from awareness of lab-level dangers [207-226].
MAJOR DISCUSSION POINT
Transparency as prerequisite for AI prohibition
Osama Manzar
1 argument · 72 words per minute · 193 words · 159 seconds
Argument 1
Closing emphasis on prioritising human protection from AI and embedding ethics and safeguards as foundational elements – Osama Manzar
EXPLANATION
Osama concludes the session by urging that AI safety should prioritize protecting people before any other considerations, likening AI safety to a car’s safety features that protect passengers before teaching them to drive. He calls for embedding ethical safeguards directly into AI systems.
EVIDENCE
He uses the analogy of a car needing safety mechanisms to protect people before teaching them how to think, stresses the need to “save human intelligence from artificial intelligence,” and calls for strong safety guards and ethics to be built into AI from the start [272-277].
MAJOR DISCUSSION POINT
Human‑first AI safety framing
Agreements
Agreement Points
Global coordination is essential to address AI safety risks that cross borders
Speakers: Nicolas Miailhe, Stuart Russell, Eileen Donahoe, Mathias Cormann, Josephine Teo, Gobind Singh Deo, Sangbu Kim, Jann Tallinn
AI safety is lagging behind rapid AI development; coordinated convenings and capacity‑building are essential – Nicolas Miailhe
AI safety requires both technical solutions and governance; harms cross borders, making global coordination indispensable – Stuart Russell
Current governance is fragmented and ill‑adapted; middle powers must help close coordination gaps and shape real‑world impact – Eileen Donahoe
Trust is built through inclusive, evidence‑based processes; international consistency and shared incident‑reporting frameworks are the most critical frontier infrastructure – Mathias Cormann
Translating scientific knowledge into effective policy and interoperable standards demands international collaboration and avoids fragmentation – Josephine Teo
ASEAN’s AI Safety Network illustrates a dual‑track approach (national capacity + regional coordination); sustained political will and enforcement institutions are vital – Gobind Singh Deo
Embedding safety architecture from the design stage and partnering with advanced economies/companies enables developing nations to adopt robust protection measures – Sangbu Kim
The massive flow of capital into frontier AI creates both risk and leverage; strong public pressure and consensus are prerequisites for any effective prohibition of superintelligence development – Jann Tallinn
All speakers stress that AI safety challenges are trans-national and can only be managed through coordinated international effort, regular global convenings, and multistakeholder collaboration [10-15][44-46][61-64][86-87][142-148][158-166][178-185][219-226].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors arguments by Russell that AI harms transcend national boundaries and therefore require coordinated international action [S42][S44].
Inclusive, evidence‑based multi‑stakeholder processes are needed to build trust in AI systems
Speakers: Mathias Cormann, Nicolas Miailhe, Stuart Russell, Eileen Donahoe, Josephine Teo, Gobind Singh Deo
Trust is built through inclusive, evidence‑based processes; international consistency and shared incident‑reporting frameworks are the most critical frontier infrastructure – Mathias Cormann
AI Safety Connect will continue rapid, semi‑annual global convenings (Paris, India, Switzerland, UNGA) to keep the safety agenda moving forward – Nicolas Miailhe
AI safety requires both technical solutions and governance; harms cross borders, making global coordination indispensable – Stuart Russell
Current governance is fragmented and ill‑adapted; middle powers must help close coordination gaps and shape real‑world impact – Eileen Donahoe
Translating scientific knowledge into effective policy and interoperable standards demands international collaboration and avoids fragmentation – Josephine Teo
Building enforceable agencies and institutional mechanisms is essential; without them, standards and regulations remain ineffective on paper – Gobind Singh Deo
Speakers converge on the need for broad inclusion of governments, industry, civil society and technical experts, backed by objective evidence, to generate trust and effective governance of AI [77-84][10-15][44-46][61-64][110-118][162-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder governance is highlighted as essential for legitimacy and trust in AI policy frameworks, as noted in the IGF multi-stakeholder analysis and the inclusive AI governance model for public services [S59][S61].
Development of practical safety tools, testing, transparency and incident‑reporting mechanisms is a priority
Speakers: Mathias Cormann, Josephine Teo, Eileen Donahoe, Sangbu Kim
Coordinated transparency and incident reporting (e.g., Hiroshima AI Code of Conduct) are foundational; an international incident‑response centre should evolve from these mechanisms – Mathias Cormann
Effective policy requires rigorous testing, simulations, and standards that work across diverse environments; investment in research and interoperable metrics is needed – Josephine Teo
Future panels and follow‑up reports (e.g., Singapore Consensus, OECD AI Safety Report) will guide policymakers on immediate steps within the next 12‑24 months – Eileen Donahoe
Embedding safety architecture from the design stage and partnering with advanced economies/companies enables developing nations to adopt robust protection measures – Sangbu Kim
All highlighted the necessity of concrete technical measures such as open-source safety tools, systematic testing, incident reporting frameworks and early-stage safety architecture to make AI trustworthy [91-96][119-138][66-71][178-185].
POLICY CONTEXT (KNOWLEDGE BASE)
Standard-setting bodies are urged to create concrete risk-management approaches, and technical testing, standards and independent assurance are identified as core components of a robust AI safety ecosystem [S45][S48].
Middle powers and regional organisations play a pivotal role in bridging coordination gaps
Speakers: Eileen Donahoe, Josephine Teo, Gobind Singh Deo, Mathias Cormann
Current governance is fragmented and ill‑adapted; middle powers must help close coordination gaps and shape real‑world impact – Eileen Donahoe
Translating scientific knowledge into effective policy and interoperable standards demands international collaboration and avoids fragmentation – Josephine Teo
ASEAN’s AI Safety Network illustrates a dual‑track approach (national capacity + regional coordination); sustained political will and enforcement institutions are vital – Gobind Singh Deo
The OECD’s long‑standing work on principles, definitions, and policy observatories demonstrates how regional bodies can reduce fragmentation and provide shared evidence – Mathias Cormann
Speakers agree that countries not in the AI ‘super-power’ group can leverage pooled resources, normative influence and regional mechanisms (OECD, ASEAN) to close gaps and drive coordinated safety efforts [61-64][142-148][158-166][87-90].
POLICY CONTEXT (KNOWLEDGE BASE)
Regional organisations are recognised as key actors for aggregating fragmented positions and fostering consensus in AI and cyber-security diplomacy [S56].
There is an urgent, narrow window to act on frontier AI safety
Speakers: Nicolas Miailhe, Eileen Donahoe, Josephine Teo, Mathias Cormann
AI safety is lagging behind rapid AI development; coordinated convenings and capacity‑building are essential – Nicolas Miailhe
Many in the AI safety community believe we have a narrow window, perhaps 12 to 24 months before frontier AI capabilities advance beyond our ability to evaluate and govern them – Eileen Donahoe
What would you recommend is prioritized between now and the next year to two years, each of you to enhance safety? – Josephine Teo
I’ll be really quick … we have to go as fast as we can to play catch up … there’s no alternative, there’s catch up to be played … – Mathias Cormann
All speakers underscore the immediacy of the challenge, calling for rapid action within the next 12-24 months to prevent a gap between AI capabilities and governance capacity [2][238-239][236-240][251].
POLICY CONTEXT (KNOWLEDGE BASE)
Several jurisdictions have flagged new risks associated with frontier AI and call for rapid development of risk-management and governance measures, underscoring the limited time to act [S45].
Similar Viewpoints
Both emphasize that technical safety cannot be separated from governance and that inclusive, internationally consistent mechanisms are required to manage cross‑border AI risks [44-46][86-87].
Speakers: Mathias Cormann, Stuart Russell
Trust is built through inclusive, evidence‑based processes; international consistency and shared incident‑reporting frameworks are the most critical frontier infrastructure – Mathias Cormann
AI safety requires both technical solutions and governance; harms cross borders, making global coordination indispensable – Stuart Russell
Both argue that fragmented governance must be remedied through international collaboration that translates scientific insight into actionable, interoperable policy frameworks [61-64][110-118].
Speakers: Eileen Donahoe, Josephine Teo
Current governance is fragmented and ill‑adapted; middle powers must help close coordination gaps and shape real‑world impact – Eileen Donahoe
Translating scientific knowledge into effective policy and interoperable standards demands international collaboration and avoids fragmentation – Josephine Teo
Both highlight the necessity of institutional structures (regional bodies, enforcement agencies) to turn standards into effective, enforceable governance [87-90][162-166].
Speakers: Gobind Singh Deo, Mathias Cormann
Building enforceable agencies and institutional mechanisms is essential; without them, standards and regulations remain ineffective on paper – Gobind Singh Deo
The OECD’s long‑standing work on principles, definitions, and policy observatories demonstrates how regional bodies can reduce fragmentation and provide shared evidence – Mathias Cormann
Unexpected Consensus
Large capital flows into AI can be leveraged to improve governance rather than only increase risk
Speakers: Nicolas Miailhe, Jann Tallinn
AI safety is lagging behind rapid AI development; billions and maybe trillions now of dollars are getting deployed to push the frontier of artificial intelligence – Nicolas Miailhe
Having these trillions flow into AI actually makes it easier to govern than harder – Jann Tallinn
While many participants focus on the risks of massive investment, both Nicolas and Jann acknowledge that the sheer scale of funding also provides an opportunity to shape safety governance, an alignment not explicitly drawn elsewhere in the discussion [2][226-227].
POLICY CONTEXT (KNOWLEDGE BASE)
Investors are urged to adopt board-level AI risk responsibilities and align incentives with long-term safety, turning capital into a governance lever; this view is supported by analyses of capital dynamics in emerging markets [S47][S51][S53].
Overall Assessment

The panel shows strong convergence on the need for accelerated global coordination, inclusive multi‑stakeholder processes, practical safety tools (testing, incident reporting, open‑source resources), and the pivotal role of middle powers and regional bodies. There is also a shared sense of urgency to act within the next 12‑24 months.

High consensus across most speakers, indicating a solid foundation for coordinated policy initiatives and the likelihood of collective action in upcoming forums such as the UNGA AI Safety Connect and the ICI Paris conference.

Differences
Different Viewpoints
Role of investors in AI safety governance
Speakers: Eileen Donahoe, Jann Tallinn
They are obviously playing a decisive role in shaping the incentives, but they’re largely absent from the governance conversation. What would it take to bring investors meaningfully into the safety conversation? – Eileen Donahoe
Investors’ influence has waned as leading AI firms become too large and IPO‑driven; traditional investor leverage is insufficient for shaping safety incentives today – Jann Tallinn
Eileen argues that investors could be a decisive lever for AI safety and asks how to involve them [228-230], while Jann counters that investors no longer have meaningful influence over the biggest AI firms and therefore cannot shape safety outcomes [231-235].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate centers on whether investors can meaningfully influence AI risk mitigation, with proposals for board-level oversight and incentive alignment as a way to embed safety in investment decisions [S47][S53].
What constitutes the most critical frontier AI safety infrastructure
Speakers: Mathias Cormann, Josephine Teo
Trust is built through inclusive, evidence‑based processes; international consistency and shared incident‑reporting frameworks are the most critical frontier infrastructure – Mathias Cormann
Translating scientific knowledge into effective policy and interoperable standards demands international collaboration; effective policy requires rigorous testing, simulations, and standards – Josephine Teo
Mathias emphasizes coordinated transparency and incident-reporting (e.g., Hiroshima Code, GPI framework) as the core infrastructure needed for trustworthy AI [91-96], whereas Josephine stresses the need for robust testing, simulations, and interoperable standards derived from scientific research before policies can be effective [110-118][119-138].
POLICY CONTEXT (KNOWLEDGE BASE)
The core elements of an AI assurance ecosystem-technical testing, standards defining ‘good enough’, and independent third-party assurance-are identified as essential infrastructure, yet consensus on prioritisation remains unsettled [S48].
How to ensure AI safety policies are effective – enforcement agencies vs testing and standards
Speakers: Gobind Singh Deo, Josephine Teo
Building enforceable agencies and institutional mechanisms is essential; without them, standards and regulations remain ineffective on paper – Gobind Singh Deo
Effective policy requires rigorous testing, simulations, and standards that work across diverse environments; investment in research and interoperable metrics is needed – Josephine Teo
Gobind argues that the creation of strong enforcement institutions is the prerequisite for any AI safety regulation to have impact [162-166][167-173], while Josephine focuses on the scientific side – extensive testing, simulations, and interoperable standards – as the foundation for effective policy [110-118][119-138].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions contrast enforcement-centric models with standards- and testing-driven approaches; standard-setting processes are preferred for flexibility, while some argue for stronger enforcement mechanisms [S45][S48][S54].
Unexpected Differences
Capital flow as a lever for governance versus a source of risk
Speakers: Jann Tallinn, Eileen Donahoe
Having these trillions flow into AI actually makes it easier to govern than harder – Jann Tallinn
The technology is advancing rapidly with minimal guardrails; fragmented risk‑management processes fail to shape incentives for developers and funders – Eileen Donahoe
Jann views the massive investment in frontier AI as an opportunity that can be leveraged to enforce governance, whereas Eileen sees the same capital influx as exacerbating fragmented, ill-adapted risk-management and increasing incentives for unsafe development [226-227][56-60].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses highlight a tension: capital can be directed toward safety governance through investor stewardship, yet unchecked flows may amplify systemic risk, reflecting divergent views on the net impact of AI financing [S47][S51][S53].
Need for more safety investment versus belief that investors cannot influence outcomes
Speakers: Sangbu Kim, Jann Tallinn
AI safety measures are currently under‑invested; we must allocate money to protect systems from the start – Sangbu Kim
Investors’ influence has waned as leading AI firms become too large and IPO‑driven; traditional investor leverage is insufficient – Jann Tallinn
Sangbu stresses that substantial financial resources must be dedicated to AI safety from the design phase [254-255], while Jann argues that even if more money were allocated, investors no longer have the power to steer safety decisions in large AI firms [231-235].
POLICY CONTEXT (KNOWLEDGE BASE)
While some argue that additional safety funding is essential, others doubt investors’ ability to shape outcomes without clear policy mandates, a debate echoed in recent investor-governance literature [S47][S53][S51].
Overall Assessment

The panel shows broad consensus that AI safety requires coordinated global action, but the participants diverge on the concrete mechanisms: investors’ role, the priority of incident‑reporting versus rigorous testing, and whether enforcement institutions or scientific standards should lead implementation. Unexpected tensions arise around the perception of massive AI capital – seen as both a governance lever and a risk amplifier – and between calls for more safety investment and the claim that investors can no longer influence outcomes.

Moderate to high. While all speakers agree on the urgency of AI safety, the substantive disagreements on governance levers, infrastructure priorities, and the influence of capital create significant strategic divergence, implying that achieving coordinated AI safety will require reconciling these differing pathways before effective policies can be implemented.

Partial Agreements
Both agree that a slowdown or pause in AI development is necessary to ensure safety, but Jann pushes for a broader prohibition backed by public pressure, whereas Mathias recommends occasional, targeted pauses combined with testing and monitoring [256][84-86].
Speakers: Jann Tallinn, Mathias Cormann
Prohibition would require broad scientific consensus on safe controllability and strong public buy‑in; transparency about lab‑level risks is key – Jann Tallinn
Occasionally we should slow down, pause, test, monitor, audit, share information and invest in building confidence that systems work as intended – Mathias Cormann
Both aim to close the global coordination gap in AI safety, but Nicolas emphasizes regular multi‑regional convenings as the primary tool, while Mathias stresses building technical infrastructure for transparency and incident reporting as the core mechanism [10-15][12-14][91-96].
Speakers: Nicolas Miailhe, Mathias Cormann
AI Safety Connect will continue rapid, semi‑annual global convenings (Paris, India, Switzerland, UNGA) to keep the safety agenda moving forward – Nicolas Miailhe
Coordinated transparency and incident‑reporting (e.g., Hiroshima AI Code, GPI framework) are foundational; an international incident‑response centre should evolve from these mechanisms – Mathias Cormann
Takeaways
Key takeaways
AI safety is falling behind the rapid development of frontier AI, creating an urgent coordination gap that must be closed.
Effective AI safety requires both technical solutions and robust governance mechanisms; harms are trans‑national, so global coordination is essential.
Middle‑power and regional organisations (e.g., Singapore, ASEAN, OECD) can leverage pooled resources, market influence, and regulatory innovation to shape global AI practices.
Inclusive, evidence‑based processes build trust; international consistency and shared incident‑reporting frameworks are identified as the most critical frontier infrastructure.
Transparency, incident reporting, and open‑source safety/evaluation tools are practical levers for improving safety across jurisdictions.
Embedding safety architecture from the design stage and investing in rigorous testing, simulations, and interoperable standards are necessary for trustworthy deployment.
Strong public pressure, broad scientific consensus, and widespread buy‑in are prerequisites for any effective prohibition of superintelligence development; traditional investor influence is now limited.
AI Safety Connect will continue semi‑annual global convenings and will expand to the UN General Assembly to keep the safety agenda moving forward.
Resolutions and action items
Continue semi‑annual AI Safety Connect convenings (Paris, India, Switzerland, UNGA) and produce follow‑up reports on identified coordination gaps.
OECD to further develop and scale the Global Partnership on AI Incident Reporting framework and move toward an international AI incident‑response centre.
Singapore to publish a refreshed AI safety research priority list (second edition of the Singapore Consensus) within the next few months.
ASEAN to operationalise the ASEAN AI Safety Network by establishing enforcement agencies, allocating resources for capacity‑building, and setting a concrete work plan for the next 12‑18 months.
OECD to expand the open‑source safety‑tool catalogue (OECD.ai) and promote adoption of interoperable metrics.
World Bank to facilitate partnerships between developing‑country clients and advanced‑economy firms for red‑team exercises and safety‑by‑design training.
Stakeholders to explore mechanisms for bringing investors into safety discussions, acknowledging the reduced leverage of traditional investors.
Unresolved issues
Whether and how an international AI incident‑response centre can be created without exposing companies to legal or commercial penalties.
Specific mechanisms for enforcing AI safety standards and regulations across jurisdictions, especially where enforcement agencies are weak or absent.
How to achieve a globally binding prohibition on superintelligence development, including the process for attaining scientific consensus and public buy‑in.
The role and influence of private investors in shaping AI safety incentives in the current IPO‑driven landscape.
Funding models for sustained investment in safety research, testing infrastructure, and institutional capacity in low‑resource countries.
Details on how middle powers can concretely translate pooled market leverage into normative influence on frontier AI developers.
Suggested compromises
Adopt a balanced approach that occasionally pauses or slows AI development to allow for testing, auditing, and confidence‑building, while not stalling overall progress.
Combine rapid technical innovation with inclusive, evidence‑based policy processes to maintain trust and avoid fragmentation.
Use open‑source safety tools and shared incident‑reporting data as a common baseline, allowing jurisdictions to adopt higher standards voluntarily.
Encourage voluntary transparency from AI labs as a step toward broader public awareness and eventual regulatory frameworks.
Thought Provoking Comments
Middle powers and global majority states can shape the direction of global AI practices and safeties through pooled resources, market leverage, normative influence, and regulatory innovation – leading from the middle may be a more powerful approach than previously anticipated.
She reframes the narrative from a binary superpower vs. rest to a proactive role for middle powers, introducing the concept of ‘leading from the middle’ as a strategic lever in AI governance.
Shifted the discussion toward the agency of non‑superpower nations, prompting subsequent speakers (e.g., Singapore’s Minister Teo and Malaysia’s Minister Gobind) to discuss concrete ways their countries can influence standards and coordination, thereby broadening the focus from global to regional and national actions.
Speaker: Eileen Donahoe
The most critical piece of frontier AI safety infrastructure is coordinated transparency and incident reporting, potentially evolving into an international AI Incident Response Center that shares alerts without penalising companies for good‑faith reporting.
He identifies a concrete, actionable infrastructure gap and proposes a novel solution—a global incident response hub—that balances transparency with protection for firms.
Guided the panel toward discussing practical mechanisms (e.g., OECD’s AI Policy Observatory, GPI AI Common Framework) and set the stage for later remarks about the need for enforcement bodies and the feasibility of an international response center.
Speaker: Mathias Cormann
Policy effectiveness must be judged by real‑world outcomes, not just paper promises; translating scientific knowledge into policy requires rigorous testing, simulations, and interoperable standards—much like aviation safety decisions about runway separation distances.
She uses a vivid aviation analogy to illustrate the gap between scientific understanding and policy implementation, emphasizing evidence‑based regulation and the need for interoperable standards across jurisdictions.
Deepened the conversation about the science‑policy interface, leading other panelists to stress the importance of standards, testing, and institutional capacity (e.g., Gobind Singh Deo’s focus on enforcement agencies).
Speaker: Josephine Teo
Standards and regulations are useless without an agency that can enforce them; without enforcement mechanisms, policies remain strong on paper but have no real impact.
He highlights a critical missing piece in AI governance—implementation capacity—shifting the focus from rule‑making to the practicalities of enforcement.
Prompted the panel to consider institutional design and sustainability, reinforcing earlier points about coordinated incident reporting and leading to calls for building dedicated AI governance bodies at both national and regional levels.
Speaker: Gobind Singh Deo
The biggest danger lies in the race inside labs to create superintelligence; we need an effective prohibition, and massive public pressure (e.g., 130,000 signatures) is essential to make it politically viable.
He brings attention to the internal competitive dynamics of AI labs, arguing that external regulation alone may be insufficient without a coordinated global moratorium backed by public demand.
Introduced the idea of a global prohibition as a policy lever, influencing the final round of comments where panelists discussed slowing down development, transparency, and the limited role of investors.
Speaker: Jaan Tallinn
AI attacks are like a sphere that can pierce any shield; yet we can build stronger shields using AI itself. The solution is close collaboration with advanced economies and companies to learn defensive techniques.
He uses a compelling metaphor to illustrate the dual nature of AI as both threat and defence, emphasizing the necessity of knowledge transfer from high‑capacity actors to the Global South.
Reinforced the theme of capacity‑building and partnership, supporting earlier remarks about middle‑power leadership and leading to a consensus that collaboration across development levels is vital.
Speaker: Sangbu Kim
Ensuring AI systems operate safely and ethically is partly a technical challenge and partly a governance challenge; the harms cross borders, so global coordination is essential.
He succinctly frames the dual nature of the problem and the need for international coordination, setting the conceptual foundation for the entire panel discussion.
Established the overarching lens through which all subsequent comments were interpreted, aligning participants around the necessity of both technical solutions and coordinated policy frameworks.
Speaker: Stuart Russell
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from a high‑level acknowledgment of AI risks to concrete, actionable pathways for governance. Stuart Russell’s framing of the dual technical‑governance challenge set the stage, while Eileen Donahoe’s call for middle‑power leadership broadened the arena of responsibility beyond superpowers. Mathias Cormann’s proposal for coordinated incident reporting and an international response center offered a tangible infrastructure solution, which was reinforced by Gobind Singh Deo’s emphasis on enforcement capacity. Josephine Teo’s aviation analogy and Sangbu Kim’s sphere‑shield metaphor deepened the analysis of translating science into policy and the need for collaborative defence capabilities. Jaan Tallinn’s stark warning about the internal lab race and the call for a global prohibition injected urgency and highlighted the limits of traditional regulatory levers. Collectively, these comments redirected the panel toward actionable coordination mechanisms, the importance of enforcement institutions, and the strategic role of middle powers, ultimately framing AI safety as a multi‑layered, globally coordinated effort.

Follow-up Questions
What are the key lessons learned from building consensus on AI safety frameworks, and what is the most critical piece of coordinated frontier AI safety infrastructure to build now (e.g., an international incident response center)?
Understanding past successes and prioritizing essential infrastructure is vital for closing coordination gaps and improving global AI risk management.
Speaker: Eileen Donahoe (to Mathias Cormann)
What role can Singapore and other middle powers play in bridging the coordination gap to keep scientific and safety channels open, and what is the most important step they can take in the next 12 months to establish a shared minimum understanding of frontier safety?
Middle powers could act as bridges between superpowers; identifying concrete short‑term actions will help create a more inclusive global governance architecture.
Speaker: Eileen Donahoe (to Josephine Teo)
What lessons can other middle powers draw from Malaysia’s experience with the ASEAN AI Safety Network, and what concrete steps must ASEAN take in the next 12‑18 months to move from aspirational goals to operational outcomes?
Sharing practical experiences and defining actionable regional steps are essential for building sustainable AI safety capacity across Southeast Asia.
Speaker: Eileen Donahoe (to Gobind Singh Deo)
How can the World Bank help Global South countries transition from passive recipients of frontier AI to active shapers of safety and reliability requirements before large‑scale deployment?
The World Bank’s support could enable low‑capacity nations to embed safety by design, reducing global risk exposure.
Speaker: Eileen Donahoe (to Sangbu Kim)
What would an effective prohibition on the development of superintelligent AI look like in practice, and how could such a prohibition be implemented and enforced?
Designing a workable ban is crucial if the community decides that development should pause until safety conditions are met.
Speaker: Eileen Donahoe (to Jaan Tallinn)
What would it take to bring investors meaningfully into the AI safety conversation and align their incentives with safety goals?
Investors shape funding flows; understanding how to engage them could create stronger safety incentives across the AI ecosystem.
Speaker: Eileen Donahoe (to Jaan Tallinn)
What should be prioritized in the next 12‑24 months to enhance AI safety and security worldwide?
Identifying short‑term priority actions helps focus limited resources on the most impactful safety measures before capabilities outpace governance.
Speaker: Eileen Donahoe (to all panelists)
Research area: Developing coordinated transparency and incident‑reporting frameworks, including legal protections for companies that disclose AI failures or near‑misses, to enable an international AI incident response center.
A robust, trusted reporting system is needed to learn from incidents globally without penalising reporters, thereby reducing systemic risk.
Speaker: Mathias Cormann
Research area: Translating scientific AI safety findings into effective policy, including the design of testing, simulation, and interoperable standards that can be applied across jurisdictions.
Bridging the science‑policy gap ensures that regulations are evidence‑based and practically enforceable.
Speaker: Josephine Teo
Research area: Designing and institutionalizing enforcement mechanisms for AI standards and regulations across ASEAN member states, ensuring agencies have the authority and capacity to act.
Without enforceable institutions, standards remain on paper; research can identify governance models that work in diverse political contexts.
Speaker: Gobind Singh Deo
Research area: Understanding evolving AI threats and developing AI‑driven defensive (red‑team) capabilities that can protect critical systems, especially for low‑capacity countries.
Anticipating future attack vectors and building defensive AI tools are essential for proactive security.
Speaker: Sangbu Kim
Research area: Assessing the feasibility, economic impact, and legal design of a global prohibition on superintelligent AI development, including mechanisms for verification and compliance.
A prohibition could be a powerful safety lever, but its practicality and enforceability need rigorous study.
Speaker: Jaan Tallinn
Research area: Updating and refreshing AI safety research priorities (e.g., the Singapore Consensus) to keep pace with rapid advances in AI capabilities.
Continual revision ensures that research agendas remain relevant and address emerging risks.
Speaker: Josephine Teo
Research area: Expanding open‑source safety and evaluation tools (e.g., the OECD.ai catalog) and creating metrics that make trustworthy AI easier to implement in practice.
Accessible tools lower barriers for developers to adopt safety practices, promoting wider compliance.
Speaker: Mathias Cormann
Research area: Publishing and analyzing the outcomes of closed‑door scientific dialogues on shared responsibility for AI safety to inform broader stakeholder engagement.
Transparent dissemination of expert discussions can accelerate consensus building and policy formulation.
Speaker: Nicolas Miailhe
Research area: Investigating the role of middle powers and global‑majority states in shaping AI governance through pooled resources, market leverage, and normative influence.
Understanding how non‑superpower actors can drive global norms may unlock new pathways for coordinated safety efforts.
Speaker: Eileen Donahoe

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Partnering on American AI Exports Powering the Future India AI Impact Summit 2026


Session at a glanceSummary, keypoints, and speakers overview

Summary

This discussion centered on deepening U.S.-India technology collaboration, particularly in AI and semiconductor supply chains, featuring government officials and industry leaders at an India AI Summit. The conversation highlighted the launch of “Pax Silica,” a new partnership initiative between the two nations aimed at securing critical technology supply chains and advancing AI cooperation.


Ambassador Sergio Gor emphasized the “limitless potential” of the U.S.-India partnership, noting the strong personal relationship between President Trump and Prime Minister Modi as a key enabler for expanded collaboration over the next three years. Secretary Krishnan stressed the importance of building resilient supply chains with trusted partners who share common values, particularly in light of lessons learned during the pandemic and recent geopolitical upheavals.


Industry representatives provided concrete examples of this collaboration in action. Sanjay Mehrotra from Micron described the company’s $2.75 billion investment in India for semiconductor assembly and testing operations, complementing their U.S. manufacturing facilities. Dr. Randhir Thakur highlighted India’s significant role in global semiconductor design, with 20% of worldwide chip design conducted by Indian engineers, and noted over $25 billion in current semiconductor investments across 10 factories in India.


The discussion then shifted to the Trump administration’s American AI Export Program, presented by Michael Kratsios and further detailed by Commerce Department officials. This initiative aims to facilitate the export of America’s “AI stack” – including chips, models, and applications – to partner countries while supporting their development of sovereign AI capabilities. The program will work through industry-led consortia offering full-stack solutions to international buyers, with applications spanning healthcare, education, agriculture, and manufacturing. Officials emphasized that the program is designed to provide flexibility and choice, allowing countries to select which components of the AI stack best meet their specific needs and sovereignty requirements.


Keypoints

Major Discussion Points:

U.S.-India Technology Partnership and “Pax Silica” Initiative: The discussion centers around deepening technology collaboration between the United States and India, highlighted by the signing of the “Pax Silica” agreement aimed at creating resilient and secure supply chains in critical technology areas, particularly semiconductors and AI infrastructure.


AI Export Program and Technology Stack Sharing: A significant focus on the newly announced American AI Export Program, which aims to share the “American AI stack” (including chips, models, and applications) with partner countries through industry-led consortia, enabling countries to build sovereign AI capabilities while maintaining strategic partnerships.


Semiconductor Manufacturing and Supply Chain Resilience: Extensive discussion of semiconductor investments and manufacturing partnerships, including Micron’s $2.75 billion investment in India’s Gujarat facility and the broader goal of diversifying global supply chains away from single-point dependencies.


AI Revolution and Economic Transformation: Speakers emphasized that the AI revolution is inevitable and transformative, comparing it to historical shifts like the transition from horse-and-buggy to automobiles, with particular focus on how AI will reshape industries from healthcare and education to manufacturing and agriculture.


Sovereign AI Capabilities and National Champions: Discussion of how countries can develop their own AI capabilities while leveraging American technology foundations, allowing nations to build domestic champions and maintain control over their data and AI infrastructure.


Overall Purpose:

The discussion serves as a high-level diplomatic and business forum to announce and promote new U.S.-India technology partnerships, particularly the AI Export Program and Pax Silica initiative. The goal is to strengthen bilateral cooperation in AI and semiconductor technologies while positioning both countries as leaders in the global AI revolution and creating alternatives to China-dependent supply chains.


Overall Tone:

The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “limitless potential,” mutual benefits, and shared democratic values. The atmosphere is celebratory regarding new partnerships and confident about technological progress. There’s a diplomatic warmth highlighting the personal relationship between President Trump and Prime Minister Modi, and the tone remains enthusiastic and partnership-focused from beginning to end, with no notable shifts in sentiment.


Speakers

Speakers from the provided list:


Secretary S. Krishnan – Secretary (specific department not mentioned)


Jacob Helberg – Moderator/Discussion facilitator


Ambassador Sergio Gor – U.S. Ambassador to India


Sanjay Mehrotra – Works with Micron (global memory and storage leader)


Dr. Randhir Thakur – Doctor/Expert in semiconductor and technology sector


Moderator – Event moderator for various sessions


Michael Kratsios – Head of delegation for the United States to the India AI Impact Summit, President Trump’s National Science and Technology Advisor, Director of the White House Office of Science and Technology Policy


Mr. Sriram Krishnan – Senior Advisor for Artificial Intelligence at the Office of Science and Technology Policy, Panel moderator


William Kimmett – Department of Commerce Undersecretary for International Trade


Brendan Remington – Department of Commerce Deputy Undersecretary for Policy at the International Trade Administration


Additional speakers:


Mr. Jeetu Patel – President and Chief Product Officer, Cisco (mentioned at the end as upcoming keynote speaker)


Full session reportComprehensive analysis and detailed insights

This discussion comprised multiple sessions focused on U.S.-India technology cooperation, featuring government officials and industry leaders discussing AI and semiconductor partnerships. The sessions included a panel discussion, a keynote by Michael Kratsios on the American AI Export Program, and a panel on AI export initiatives.


Opening Panel Discussion

The initial panel featured Ambassador Sergio Gor, Secretary S. Krishnan, and industry representatives discussing the foundation of U.S.-India technology cooperation. Ambassador Gor emphasized the “limitless potential” of the U.S.-India relationship, noting the personal relationship between President Trump and Prime Minister Modi as a significant enabler. He observed with diplomatic candour that “for those colleagues of mine from Washington to understand the difference that it makes when our president likes you or he doesn’t like you,” suggesting this creates opportunities for enhanced cooperation over the next three years.


Secretary S. Krishnan outlined three key priorities: building infrastructure, focusing on innovation, and maintaining a spirit of partnership. He emphasized lessons learned from the pandemic and recent geopolitical upheavals, arguing for the need to “align and ally on lines which really work for people who share values” and avoid becoming “enslaved or tied down to just one dependence.”


Dr. Randhir Thakur provided context on India’s semiconductor capabilities, noting that Indian engineers conduct 20% of worldwide chip design and that India produces 1.5 million engineers annually. He highlighted dramatic growth in India’s semiconductor investments, growing from zero to over $25 billion across 10 factories in three years. Dr. Thakur also referenced a quote from Undersecretary Helberg that “the 20th century ran on oil and steel. The 21st century runs on compute and the minerals that feed it.”


Industry Investment Examples

Sanjay Mehrotra from Micron detailed the company’s $2.75 billion investment in assembly and test operations in Sanand, Gujarat, India. He explained that “memory is a critical enabler of AI” and that “if AI is driving, is the growth engine of the digital economy, then memory is the fuel.” Mehrotra described how Micron’s R&D facilities in India contribute to leading-edge memory design while manufacturing operations create synergies with U.S. facilities.


The discussion included references to partnerships with companies including Analog Devices, Qualcomm, Synopsys, Intel, and investments by Tata Electronics. Dr. Thakur noted that India produced $70 billion worth of mobile phones with $30 billion exported, demonstrating the country’s manufacturing capabilities.


Policy Context and Regulatory Changes

An important development mentioned was President Trump’s decision to rescind the Biden diffusion rule in his first week in office, making it easier for countries like India to access advanced semiconductor chips. This regulatory change was presented as facilitating enhanced technology cooperation.


The discussion also referenced the signing of the “Pax Silica” agreement, though specific implementation details were not elaborated in the transcript.


American AI Export Program Keynote

Michael Kratsios delivered remarks on the American AI Export Program, describing it as an initiative to share America’s “AI stack” globally while supporting partner countries’ sovereign AI capabilities. The program is designed around industry-led consortia offering full-stack solutions to international buyers.


Kratsios emphasized the program’s approach to “real AI sovereignty” – enabling the adoption and deployment of sovereign infrastructure, sovereign data, sovereign models, and sovereign policies within national borders under national control. He noted that this approach allows countries to build domestic champions on American technology infrastructure rather than viewing sovereignty and partnership as contradictory.


The program includes the modernization of the Peace Corps into the U.S. Tech Corps, which will embed volunteer technical talent with partner countries to provide implementation support. Financing mechanisms mentioned include new programs from the U.S. International Development Finance Corporation and Export-Import Bank.


AI Export Program Implementation Panel

Commerce Department officials William Kimmett and Brendan Remington provided details on program implementation. Kimmett explained that the Department of Commerce conducted a request for information process that generated significant industry interest and provided insights shaping the program design.


Remington emphasized the program’s flexibility, noting that “we hear about AI sovereignty a lot. And there are many different versions of this.” The program accommodates diverse national priorities, from countries wanting complete control over their data to those seeking transparency and choice in their AI implementations.


The officials highlighted applications in healthcare, education, agriculture, and manufacturing, with particular focus on improving public services and economic outcomes in emerging markets through partnerships with ministries and government agencies.


Sovereign AI Capabilities and National Champions

The discussion included references to sovereign AI development, with specific mention of Sarvam’s new model launch as an example of sovereign AI capability. The program includes initiatives for AI agent standards and national champions development.


Sriram Krishnan specifically acknowledged students in the audience and their enthusiasm for AI, emphasizing the importance of the next generation in driving technological advancement.


Program Structure and Approach

The American AI Export Program is structured around industry consortia providing comprehensive solutions rather than individual technology transfers. This approach recognizes that successful AI deployment requires not just technology but also capacity building and institutional support.


The program acknowledges different types of AI sovereignty requirements, from village-level data centers to transparency needs, and is designed to accommodate these varying national priorities while maintaining integration with trusted international partners.


Conclusion

These sessions outlined a framework for U.S.-India cooperation in AI and semiconductor technologies, emphasizing shared democratic values while recognizing diverse sovereignty requirements. The combination of regulatory changes, investment commitments, and new export programs creates multiple pathways for enhanced technology cooperation.


The discussions emphasized that successful partnerships require enabling local capability building rather than creating dependency relationships. As these initiatives move forward, their success will depend on effective execution of the detailed mechanisms still being developed.


Note: The transcript appears to conclude mid-sentence with the introduction of Jeetu Patel from Cisco, suggesting additional content may have followed these recorded portions.


Session transcriptComplete transcript of the session
Secretary S. Krishnan

and resilient supply chain in these critical areas of technology which the world needs.

Jacob Helberg

And that’s actually a great segue to shift to Ambassador Gor, who just arrived in India with a bang. Ambassador Gor, could you help us understand your vision for the opportunities that you see to deepen U.S.-India technology collaboration? Thank you.

Ambassador Sergio Gor

Thank you, Jacob. Look, limitless potential, those are the two words. And I truly mean it. As I’ve started traveling around this country, and I’ve been to multiple states already, what I have seen here, it’s such a natural partnership. What the United States has, with the best technology, and with the innovation that we see here across India, this is a natural partnership. The President and the Prime Minister have a special relationship, and I mean that. And that goes a long way. You have great elements here in the sense of the technology, in the sense of the innovation.

and in the want. India wants to get involved. But also the magic touch is that special relationship between our two leaders. It’s a friendship that goes back many years. And for those colleagues of mine from Washington to understand the difference that it makes when our president likes you or he doesn’t like you. And with India, our president really, really, really likes the prime minister. And so that makes a huge difference for the next three years. Not only the administration, but the White House itself is open to engaging India. And one of those areas where we can further this is AI, the technology sector. And so that’s something that I’ll be focused on over the next three years.

Jacob Helberg

Thank you. Sanjay, could you help us understand a little bit, what does the partnership between America and India mean for the security of the supply chains of a company like Micron? Which obviously operates on a global scale.

Sanjay Mehrotra

First of all, Jacob, let me just say congratulations on this India and U.S. Pax Silica signing today. This is certainly a tremendous initiative and wonderful to see the collaboration between the two great countries on the technology front, semiconductors, and, of course, resilient, secure supply chains. Micron, as I was mentioning earlier, is a global memory and storage leader. And, of course, Micron is headquartered in the U.S., an American company and an innovation powerhouse with 60,000-plus patents. We are here in India. We have R&D facilities here in India, absolutely contributing to leading-edge memory design. I should mention that Secretary Vaishnaw earlier talked about two-nanometer designs. In memory, the most advanced designs in the world are also taking place here in India.

very much in collaboration with our teams in the U.S. So it’s a good example of how we are advancing AI forward. Memory is a critical enabler of AI. Just think of it this way: if AI is the growth engine of the digital economy, then memory is the fuel. And that fuel is being developed and manufactured through collaboration between the U.S. and India, with R&D teams here, but also manufacturing. And that’s an important piece, with Micron performing assembly and test operations here in the Sanand, Gujarat facility, with the support of the Indian government and $2.75 billion of investments, which over time will result in hundreds of millions of chips assembled and tested here.

And that complements Micron’s manufacturing plants in the U.S. Actually, as you look at our manufacturing plants in the U.S., on the silicon side as well as the advanced packaging side, the work we’ll do here will complement that. It will add to it. It will contribute to it in terms of AI in manufacturing, in terms of automation in manufacturing, refining and making manufacturing workflows more efficient. This will be a win-win partnership, with Micron’s investments in the U.S. getting the support and all the learnings of large-scale assembly and test operations here in India as well. So we are really looking forward to it. And initiatives like Pax Silica absolutely ensure that there is supply chain resiliency and security built in to continue to build the AI infrastructure and advance the technology.


Jacob Helberg

Thank you. And Dr. Thakur, could you help us understand a little bit better the special connection between heavy data center investments and edge technology like smartphones and connected vehicles, especially in emerging markets?

Dr. Randhir Thakur

Well, thank you very much. And I first want to really congratulate on Pax Silica at a personal level. It’s very exciting. We are doing this between our two countries. Truth be told, for my PhD I went to Oklahoma, of all places. You know, so I’m a Sooner, and pretty soon I realized that in football I didn’t have a chance, so I worked on silicon. But you know, the key is that the first transistor was really built on germanium, produced near Oklahoma, a germanium transistor, until we switched to silicon, and thank God we did. Shockley made the first transistor in Bell Labs and the rest is history. So our industry has always been dependent on this material engineering, the ability to work these minerals and deploy them into making the chips.

And as far as the question about data centers, I think the enablement of data centers, of AI, is hardware driven. Because AI was known a long time ago, but the hardware was not ready. Our ability to compute was just not there. And as you have said, Undersecretary Helberg, the 20th century ran on oil and steel. The 21st century runs on compute and the minerals that feed it. That is so true. So Pax Silica is just such a timely change. For us in India, the innovation and the drive we have is tremendous. 1.5 million engineers are produced every year. 20% of the global semiconductor industry’s chip design is done by Indian engineers here in India.

And we never really had any non-coercive issues in the design space. So I think this is a very, very natural fit. In terms of the progress we are making, I think three years ago, there was no investment in India on the semiconductor side. Today, we have more than $25 billion being invested in 10 different factories, including Micron and Tata Electronics. We are working on the first AI-enabled fab that will be producing the AI-specific chips in India. We are using the indigenously developed packaging technology in Northeast Assam, where we’ll be packaging all of the automotive and other chips that are at the edge, being done for U.S. companies. Partnership-wise with the U.S., because semiconductors bring us together, we are working with companies like Analog Devices,

Qualcomm, Synopsys, and Intel, where we have memoranda of understanding to work together to deploy the ecosystem. Sometimes we are the customers, sometimes they are the customers. So at a holistic level, that engagement is moving extremely well. On mobile phones, I think India produced mobile phones worth $70 billion last year, $30 billion of which were exported. So there is just tremendous push all around in terms of manufacturing. And this initiative today, I really believe, is going to bring and accelerate the momentum that we already have. Thank you.

Jacob Helberg

I want to end by zooming out and asking a question for all of our panelists that’s a little bit more macro. As we gather here in India in front of world leaders and business executives, and as the global economy undergoes this incredible change driven by the reorganization of our supply chains and the AI revolution, what is your message to this audience? And maybe we can start with Secretary Krishnan and work our way down.

Secretary S. Krishnan

The message to this audience is that we need to align and ally on lines which really work for people who share values, for countries that share values, and to ensure that we do not become enslaved or tied down to just one dependence. I think that is the critical thing. That is what we learned through the pandemic and through all the geopolitical upheavals. And therefore, we need to have trusted partners with whom we can work and trusted value chains so that technology can work for all of us. In organizing this India AI Summit, I think what we have truly managed to do is to democratize an important element of technology. The people have been let into the room, and that needs to continue through valuable partnerships.

Jacob Helberg

Thank you. Ambassador Gor, you talked about limitless potential earlier. Can you give us a little bit of color on what your main top-level message is to this audience?

Ambassador Sergio Gor

Look, the message is the AI revolution is here. People can pretend it’s not. It’s coming. And so it’s one of those things: the sooner people can adapt, to your point, the sooner people can partner with like-minded individuals, that’s a good thing. And so you find in some places of the world, not India, but in other places of the world, where they’re going to resist AI, where they’re going to resist this revolution, it’s here. It’s here to stay. Every hundred years, every so often, we see in history something that changes the world. And you always have a sector that resists. When Ford had the first Model T come off the assembly line, the first people that protested were those in a horse and buggy.

But today, nobody would want to go back to a horse and buggy and give up their cars. That revolution came, whether you like it or not. And the same thing is going to happen here over the next few years. And so India and the United States are at the leading edge, at the cutting edge, of this new technology, embracing it, using it for good, and partnering with those who share our common values. We’re the world’s oldest democracy. This year we’re celebrating 250 years. India is the world’s largest democracy. This is a natural partnership for both of our nations.

Jacob Helberg

Thank you. Sanjay?

Sanjay Mehrotra

Micron's vision statement, defined several years ago now, is transforming how the world uses information to enrich life for all. And that vision is truly coming to life today. This AI summit's message of Sarvajan Hitaya and Sarvajan Sukhaya, welfare for all, happiness for all, is very much aligned with Micron's vision. The U.S. vision for AI in terms of national and economic security, and, of course, the businesses and the global leaders around the globe working toward AGI, artificial general intelligence: all of this critically relies on memory and storage, and Micron is very proud to be at the center of it. More and more memory is needed, and Micron is making the investments in order to increase the supply.

But it's not just about the importance of memory and storage to the advancement of AI. It's not just about the investments Micron is making in the U.S. to advance the semiconductor supply chain, as well as in India and other locations. It is also absolutely about initiatives like Pax Silica, which really secure the future of the supply chain and ensure that AI infrastructure and AI capabilities will be there, ready to shape the world of the future. We are very proud to be part of this, very proud as an American company to be able to bring up advanced technology capability here in India, which will benefit our U.S. operations as well, and very thankful to the partnership between the U.S.

and India to jointly define the future of AI and shape the future of the world.

Jacob Helberg

Thank you so much. Dr. Thakur, any closing thoughts for the audience?

Dr. Randhir Thakur

Well, thank you very much. As our Tata Sons chairman, Mr. Chandrasekharan, said yesterday, under the vision of our prime minister, India has treated AI as a strategic national capability. So I see the declaration of Pax Silica as a response and an enabler, a codification of trust, and, for us, the opportunity to work together. The expectation is laid out from the nations. It is now up to us to deliver on this promise as an industry. So, Honorable Undersecretary Helberg, Ambassador Gor, I really want to thank you from the bottom of my heart for Pax Silica. We'll make it work. Thank you.

Moderator

Thank you. And this is the panel partnering on the American AI Exports Program. First, I take this opportunity to welcome Mr. Michael Kratsios for the keynote remarks to kick off this session. Michael Kratsios is the head of delegation for the United States to the India AI Impact Summit. And also, he is President Trump's National Science and Technology Advisor and the Director of the White House Office of Science and Technology Policy.

Ladies and gentlemen, please welcome Mr. Michael Kratsios.

Michael Kratsios

asked to choose between completing the stack and developing a domestic AI, we have established a national champions initiative. We recognize that partners need a chance to build their native technology industries and believe facilitating this will be a critical part of the export program. To facilitate the development of industry-led, open, and secure AI standards and to give the public confidence in this next generation of technology, we are creating an AI agent standards initiative. To empower developing partner countries to overcome financing obstacles as they import the American AI stack, the U.S. International Development Finance Corporation, the Export-Import Bank of the United States, the U.S. Trade and Development Agency, the Millennium Challenge Corporation, and a new World Bank fund have initiated new AI-focused programs.

And to further enable AI adoption in the developing world, the Trump administration is bringing America's historic Peace Corps into the 21st century with the launch of the U.S. Tech Corps. This initiative will embed volunteer technical talent with import partners to provide last-mile support in deploying powerful AI applications for enhanced public services. In everything from energy and education to manufacturing and medicine to transportation and agriculture, I'm confident that the American AI stack can be key to unlocking new economic and social benefits for your people. The hope of the United States is that the pursuit of real AI sovereignty, the adoption and deployment of sovereign infrastructure, sovereign data, sovereign models, and sovereign policies within your borders, under your control, will become an occasion for bilateral diplomacy, international development, and global economic dynamism.

The American AI Export Program exists to make that happen. The U.S. wants to share the American AI stack because this technology presents the opportunity to lead, as our nation's founders did 250 years ago, a revolution in human history to the benefit of all of mankind. These tools, used well, will unlock new knowledge for our growth and new sources of prosperity, and challenge us to grow the strength of our humanity to match our growing capabilities. American AI is settling a new frontier, but America does not seek to build this new future alone. So I ask you to join us. Thank you.

Moderator

Thank you so much, Mr. Kratsios, for your remarks, which are truly enlightening and illuminating as well. Ladies and gentlemen, next I would like to invite the speakers for a panel on partnering on AI exports. Interesting, isn't it? Well, the moderator is Mr. Sriram Krishnan, the Senior Advisor for Artificial Intelligence at the Office of Science and Technology Policy, and the panelists are Department of Commerce Undersecretary for International Trade, Mr. William Kimmett, and Department of Commerce Deputy Undersecretary for Policy at the International Trade Administration, Mr. Brendan Remington. Please welcome the panelists. Over to you, Mr. Krishnan.

Mr. Sriram Krishnan

Good morning. How is everyone doing? How is everyone doing? First off, before we get started, I just want to say what a privilege and honor it has been for us to be here the last couple of days. I want to thank all of our hosts. I want to thank the Honorable Prime Minister Narendra Modi. I want to thank the huge team which has made this possible. It has been an amazing privilege. And especially today, when I was roaming the halls, I was just struck by the honor and the privilege of being here. And I want to thank the optimism of so many of the delegates and attendees here. In particular, I was struck by the optimism of so many young people. So I'm curious: how many of you here are students? Okay, can all of you who are students just please stand up? Okay, can everyone else give them a round of applause? Because I was just so blown away by the enthusiasm they have for AI, and the hope and the potential. Thank you for coming here. I think you need to get back to studying after this, but thank you for coming here; it really blew me away. And I wanted to say that because I think that hope and optimism is what we in the Trump administration have really embraced when it comes to AI, and that's going to be a core part of what we talk about with AI exports. So first off, I want to introduce my distinguished fellow panelists: we have Under Secretary William Kimmett from the Department of Commerce, and we have Deputy Under Secretary Remington. Well, before we get into the serious stuff, you've been all over India for the last couple of days.

No pressure, but what has been your favorite part? Everyone here is judging you.

William Kimmett

My favorite part? I think it's been fabulous. We actually did a stop in Bangalore before we came to Delhi, which was really fabulous and really just amazing. I want to echo what Sriram said about the excitement and the dynamism we're seeing in the ecosystem here. It's just really remarkable, particularly the young, talented students here in India. And I'd say riding in the streets of Bangalore, that was an experience, seeing the traffic there. But what I noticed while we were driving through all the traffic around us was how, well, digitalized the country is. You know, I see people on motorcycles, on the back with their phones, and everybody's on their phone. It's just how digital the country is, and it's really remarkable.

So I’d say experiencing the streets of Bangalore on the riding side, but also seeing how integrated tech is in everybody’s everyday life here has been really remarkable to see.

Mr. Sriram Krishnan

Amazing. Anybody from Bangalore or Karnataka here? Okay, a couple of folks. Okay, you need to help show them around next time he’s there. There you go. And Deputy Undersecretary, what about you?

Brendan Remington

I’d say the energy and the pace. I mean, it’s just unreal. I’ll stick with the driving theme. I think you can see it. It’s both precise and it’s decisive. It doesn’t wait for you. It’s representative of a lot of things, and Indians keep pace. I love the energy.

Mr. Sriram Krishnan

That's true. I think the energy has been amazing. And so we're going to talk about exports, but all of this comes from what President Trump set into motion in his very first week in office, where he did two things. First, he rescinded the Biden diffusion rule, which, as Dr. Kratsios said, made it near impossible for countries like India to access advanced semiconductor chips. So I think that's a big thing. Second, he tasked all of us with coming up with an action plan to deliver on what America's priorities should be when it comes to AI.

And we did that in July, and we have come up with three priorities. First is to build infrastructure. AI needs energy. AI needs data centers. And we’ve been focused on building those in a way that works for America and works for our citizens. Second, we’re focusing on innovation. How do we make sure that we have our entrepreneurs and we have our companies building the technologies that are necessary? But third, I think, is a spirit of partnership. How do we share these technologies that are built in Silicon Valley, in America, with our allies and with the rest of the world? And that’s what we’ve been really focused on. And on that end, and Will, I’d love to start with you.

Could you talk a little bit about the AI export program that Dr. Kratsios talked about, what it is?

William Kimmett

Absolutely. So certainly, President Trump has made AI a national priority. And so what does that mean? And when you think of the United States and our great tech companies, obviously, we’re doing what we can to support them. And of course, we’re doing that for our national security, our economic security and the success of our great companies. But how do we use that to share that with the rest of the world? And so specifically on this AI exports program, the president issued an executive order last July that tasked the Department of Commerce with standing up the AI exports program. And what that is, is it’s going to call for industry led proposals of consortia that will offer full stack offerings to the world and how we can promote the exports of those full stack consortia.

So it's sort of a question of: what does that mean? What does full stack mean? And so we wanted to make sure we were as thoughtful as possible in this process. And so we issued a request for information, asking companies to give us information, tell us what might be helpful, tell us maybe what wouldn't be helpful. And we got a tremendous, tremendous response from the industry. We got hundreds of submissions, and we have spent the last several weeks digesting those and understanding the dynamics that maybe we weren't aware of and things we should think about as we craft this program. And we are putting the finishing touches on it. The next step is going to be a public call for proposals from the industry to submit these consortia, and how we're going to shape that program to do full-stack offerings and maybe other offerings as well.

Mr. Sriram Krishnan

That's awesome. And Deputy Undersecretary, if I may come to you, maybe you can just get into the details. We have guests from multiple countries here. We have companies from all over the world here. Could you break down the next level of granularity a little bit? How do these consortia work? If I'm a country attending this event, or if I'm a company, what should I be doing?

Brendan Remington

Sure. I'll start by saying you'll hear more on how it actually works, but I'll describe what we've heard so far and what people have asked of us. We've heard really two motions. One is: how can we go outbound to the world? How do we help companies find buyers? And then on the other side, for foreign buyers: how do we make it easier for you? And so as we've approached it, we've looked at a couple of different kinds of consortia. On the one hand, what we've heard is: make it easy, make it simple, like t-shirt sizes: small, medium, large.

I don't need 100 permutations. I just need to know what's available. But there are others who do want that special, very unique, niche kind of thing, and we want to accommodate both of those. And I'll say, in each of these, we're looking for simplicity. We're looking for elegant solutions. Our goal here is to make this easy for both sides: for buyers, whether they are governments, state-owned enterprises, or any sort, and also for the companies that we talk to, both the large ones and the small startups who are thinking, what should I do next? I'm in my Series A, I'm in my Series B. Should I sell abroad? We want to make that feasible for them.

Mr. Sriram Krishnan

And so if I’m a founder, should I come find you?

Brendan Remington

Yes.

Mr. Sriram Krishnan

Oh, there we go. Wow, I like putting him on the spot over there. So find him.

Brendan Remington

Through the website, not me personally.

Mr. Sriram Krishnan

He's the man. I think, over the last couple of days, one of the remarkable announcements was the launch of Sarvam's new model, which I was really blown away by. If you folks haven't checked it out, you should check out some of the technical details. It is really, really impressive. And I think that is a good segue to the theme of sovereign AI. We have countries all over the world who want to have a sovereign AI capability. What does it mean, when working with some of the programs that we are talking about today, if you're a national champion or if you're a country which wants to have sovereign capabilities?

William Kimmett

Sure. So I think the program is going to, of course, be built on the American AI tech stack. But then, like I said, what does that mean exactly? And so what it really means is we want to set the foundation for possibilities as we’re exporting to other countries. And so in the context of a national champion, you know, if there’s a great company that wants to use American tech, we provide that foundation and then allow that national champion to build on that foundation of American tech. And so it’s really providing a level of the tech stack to countries so that they can build on that with their great local domestic champions.

Mr. Sriram Krishnan

I totally agree. I think one thing, when it comes to the stack, is that there are multiple parts of it. There are the chips, the GPUs, the TPUs, whether you're using NVIDIA or AMD or Google. For example, Sarvam has done great work working with NVIDIA on training their model. Then there is the model layer. There are agents or applications on top. So I think when we talk about the program and the stack, you can really pick, as a company or as a country, what part of the stack you want to build on. And there's a whole range of possibilities.

Brendan Remington

If I could add one thing: we're trying to facilitate choice. We hear about AI sovereignty a lot, and there are many different versions of this, right? We hear: does every village need its own data center? Or does everyone need an LLM for their specific context? Some just say, I want control over my data. I want to know where it goes. I want transparency. Because there are so many permutations, we want to offer these many choices and allow each context and each buyer to make those choices.

Mr. Sriram Krishnan

And that is true. And I think I want to go back to what Dr. Kratsios said, and what I said about the first week of President Trump being in office: he wanted to make it easy for other countries to get access to our technology, and that set this in motion last January. Next up, I want to move to use cases. There are a lot of countries all over the world, as Dr. Kratsios was saying, who are trying to figure out how to adopt AI, how to provide their citizens a better quality of life and better services. What are use cases that are interesting to you, where you think we are going to see a lot of great progress and work in the next year or two?

William Kimmett

Yeah, so I think the ones that are interesting to me, certainly in emerging markets, are in the health space and the education space, and what we can do to bring AI solutions in those crucial sectors in various countries. And so working with, say, the Ministry of Health in an emerging market and coming up with a solution that would revolutionize their health industry to the benefit of their citizens, that's the part of the program that really excites me.

Mr. Sriram Krishnan

Ben?

Brendan Remington

Yeah, there are so many. I mean, it's so sweeping, but others we've also heard have been agriculture, manufacturing, maritime, you name it. There are a lot of verticals with so many new use cases and so many new applications coming out all the time. And back to the simplicity point: organizing around verticals, a one-stop shop, where if you come here, this is where you can find offerings, has been something we've heard would be very useful.

Mr. Sriram Krishnan

I think so. For me, there are obviously many, but education is something which has just blown me away. Even this morning, talking to a student, I met somebody from my alma mater, in the second year of undergrad, who's just doing amazing things with AI at an age when I was not doing anything at all. And that just fills me with hope and inspiration. Imagine every student, whether you're five years old or maybe fifty years old, having access to a teacher, a lecturer, a professor who never gets tired, who knows how to speak to you in a local language, who can answer any single question.

I think that is going to change so many people's lives. Okay, one last note. We're all working on AI. Just on a broad theme, what is something about AI, whether it's in the U.S. government or in how you're approaching your work, that fills you with optimism?

William Kimmett

I'd say for me, speaking as someone working for the government: we're talking about helping export the U.S. AI tech stack to the world, but we in the U.S. government also need to do a good job of bringing it into government in a lot of the work we do. I run the International Trade Administration. As part of that, we do a lot of analysis of supply chains, and there are certainly better ways we can do that. So that's one area where I think, as we're bringing tech to the world, we also need to do a good job of bringing it into the U.S. government and helping us become more efficient as well.

Mr. Sriram Krishnan

I love that. And I have to note that in the U.S. government, we've done a lot of work. If you look at the action plan, there's a lot of work on making things more efficient. And you?

Brendan Remington

I'd say two things. The first: it's so sweeping. There are very few technologies that change both your personal life and your work life, and they're both moving very quickly. The second: the hunger for this is so high. It's not hard. We don't have to sell AI in the sense of, do you want this? People want this. It's really, how should we best provide it to them? How do we help both sides of this? How do we help the companies and how do we help the buyers? Being in the middle of that and enabling that is very exciting.

Mr. Sriram Krishnan

I agree. I think that's a great note to end on. And I just want to close by reemphasizing something that Ambassador Gor spoke about earlier: these are two great nations, the world's oldest democracy and the world's largest democracy, and both countries I obviously have very deep ties to. A lot of this has been made possible by the special relationship between President Trump and Prime Minister Modi. And I think today, what you saw with the Pax Silica signing with Undersecretary Helberg and Dr. Kratsios, and what you saw with Dr. Kratsios's announcement, is such a remarkable moment. But for me, it is just the beginning of what is going to be an amazing, enduring technology partnership.

But thank you so much. And thank you for being an amazing audience.

Moderator

Hello everyone, welcome back once again. I'm sure you're all refreshed after this break. Now we're going to start with the next session and have some wonderful keynote speakers once again with us today. A great lineup, as I said in the morning as well. So now I'm going to invite our keynote speaker. He is Mr. Jeetu Patel, President and Chief Product Officer, Cisco. Well, Mr. Patel sits at the intersection of AI and enterprise infrastructure: it's kind of the plumbing that makes it work.

At Cisco, he's leading the company's transformation into an AI-native networking and security powerhouse. In a world obsessed with models and algorithms, his reminder that none of it works without resilient, secure infrastructure is both timely and essential. Ladies and gentlemen, please welcome Mr. Jeetu Patel.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ambassador Sergio Gor
3 arguments, 193 words per minute, 504 words, 156 seconds
Argument 1
Natural partnership exists between U.S. technology and Indian innovation, strengthened by special relationship between President Trump and Prime Minister Modi
EXPLANATION
Ambassador Gor argues that the U.S.-India partnership is natural due to America’s best technology combined with India’s innovation capabilities. He emphasizes that the personal friendship between President Trump and Prime Minister Modi, spanning many years, creates significant advantages for collaboration over the next three years.
EVIDENCE
The Ambassador notes he has traveled to multiple states in India and witnessed the innovation firsthand. He specifically mentions that when the U.S. president likes a leader, it makes a huge difference, and Trump ‘really, really, really likes the prime minister,’ making the White House open to engaging India.
MAJOR DISCUSSION POINT
U.S.-India Technology Partnership and Collaboration
AGREED WITH
Secretary S. Krishnan, Sanjay Mehrotra, Dr. Randhir Thakur, Jacob Helberg
Argument 2
AI revolution is inevitable and transformative, comparable to the automobile replacing horse and buggy, requiring adaptation and partnership with like-minded nations
EXPLANATION
Ambassador Gor contends that the AI revolution is here to stay and will fundamentally change the world, similar to major historical technological shifts. He argues that resistance is futile and that countries should embrace AI and partner with nations sharing common values.
EVIDENCE
He provides the historical analogy of Ford’s Model T and the assembly line, noting that those who protested (horse and buggy operators) eventually adapted, and today nobody would want to return to horse and buggy transportation. He emphasizes that the U.S. is the world’s oldest democracy celebrating 250 years, while India is the world’s largest democracy.
MAJOR DISCUSSION POINT
AI Revolution and Global Transformation
AGREED WITH
Dr. Randhir Thakur, Michael Kratsios, Brendan Remington
Argument 3
Collaboration between world’s oldest and largest democracies creates win-win opportunities in manufacturing, innovation, and global competitiveness
EXPLANATION
Ambassador Gor emphasizes the strategic importance of the partnership between the United States (world’s oldest democracy) and India (world’s largest democracy). He argues this collaboration positions both nations at the cutting edge of AI technology while promoting shared democratic values.
EVIDENCE
He references the U.S. celebrating 250 years as a democracy and India being the world’s largest democracy, positioning this as a natural partnership for leading AI development and using technology for good.
MAJOR DISCUSSION POINT
Economic and Strategic Benefits of Partnership
Secretary S. Krishnan
3 arguments, 139 words per minute, 145 words, 62 seconds
Argument 1
Partnership enables secure and resilient supply chains in critical technology areas through trusted allies sharing common values
EXPLANATION
Secretary Krishnan argues that countries need to align with partners who share similar values to create secure supply chains. He emphasizes the importance of avoiding dependence on single sources and building trusted partnerships for technology development.
EVIDENCE
He references lessons learned through the pandemic and geopolitical upheavals, highlighting the need for trusted partners and value chains. He mentions the India AI Summit as an example of democratizing technology by letting people into the room.
MAJOR DISCUSSION POINT
U.S.-India Technology Partnership and Collaboration
AGREED WITH
Sanjay Mehrotra, Dr. Randhir Thakur, Jacob Helberg
Argument 2
Pax Silica initiative ensures supply chain resiliency and security for AI infrastructure development between trusted partners
EXPLANATION
Secretary Krishnan highlights the Pax Silica initiative as a mechanism to ensure secure and resilient supply chains specifically for AI infrastructure. He emphasizes that this partnership between trusted allies will support continued AI development and deployment.
EVIDENCE
He references the signing of Pax Silica and its role in ensuring supply chain security for AI infrastructure, though specific details of the initiative are not elaborated in his remarks.
MAJOR DISCUSSION POINT
Supply Chain Security and Semiconductor Manufacturing
Argument 3
Partnership democratizes technology access and ensures countries don’t become dependent on single sources
EXPLANATION
Secretary Krishnan argues that the U.S.-India partnership helps democratize access to important technology elements. He emphasizes the critical need to avoid becoming enslaved or tied down to single dependencies, promoting diversified and trusted partnerships instead.
EVIDENCE
He points to the India AI Summit as an example of democratizing technology by bringing people into the room, and references lessons learned from the pandemic and geopolitical upheavals about the dangers of single-source dependencies.
MAJOR DISCUSSION POINT
Economic and Strategic Benefits of Partnership
Sanjay Mehrotra
3 arguments, 132 words per minute, 667 words, 302 seconds
Argument 1
Micron’s collaboration demonstrates how U.S.-India partnership advances AI through R&D facilities in India working on cutting-edge memory design while complementing U.S. manufacturing
EXPLANATION
Mehrotra explains how Micron’s operations in India exemplify successful U.S.-India collaboration in AI technology. The company conducts advanced memory design work in India that directly supports AI development while complementing manufacturing operations in the United States.
EVIDENCE
He provides specific details about Micron’s 60,000+ patents, R&D facilities in India contributing to leading-edge memory design, and mentions that the most advanced memory designs in the world are taking place in India in collaboration with U.S. teams. He also references two-nanometer designs mentioned earlier in the discussion.
MAJOR DISCUSSION POINT
U.S.-India Technology Partnership and Collaboration
AGREED WITH
Ambassador Sergio Gor, Secretary S. Krishnan, Dr. Randhir Thakur, Jacob Helberg
Argument 2
Memory serves as the critical fuel for AI as the growth engine of the digital economy, making semiconductor partnerships essential
EXPLANATION
Mehrotra argues that memory technology is fundamental to AI development, comparing it to fuel that powers the AI engine driving digital economic growth. He emphasizes that this critical role makes partnerships in semiconductor manufacturing essential for AI advancement.
EVIDENCE
He uses the analogy that ‘if AI is the growth engine of the digital economy, then memory is the fuel,’ and explains how memory is a critical enabler of AI technology development and deployment.
MAJOR DISCUSSION POINT
AI Revolution and Global Transformation
Argument 3
Micron’s $2.75 billion investment in India for assembly and test operations will complement U.S. manufacturing and contribute to AI advancement
EXPLANATION
Mehrotra details Micron’s significant financial commitment to India through assembly and test operations in Gujarat. He argues this investment will create a complementary relationship with U.S. operations, enhancing overall AI manufacturing capabilities and efficiency.
EVIDENCE
He provides specific investment figures of $2.75 billion in the Sanand, Gujarat facility with support from the Indian government, which will result in hundreds of millions of chips assembled and tested in India. He explains this will complement Micron’s U.S. manufacturing plants in silicon and advanced packaging, contributing to AI manufacturing automation and workflow efficiency.
MAJOR DISCUSSION POINT
Supply Chain Security and Semiconductor Manufacturing
AGREED WITH
Secretary S. Krishnan, Dr. Randhir Thakur, Jacob Helberg
Dr. Randhir Thakur
3 arguments, 144 words per minute, 609 words, 252 seconds
Argument 1
India produces 1.5 million engineers annually and handles 20% of global semiconductor chip design, making it a natural fit for partnership
EXPLANATION
Dr. Thakur emphasizes India’s massive engineering talent pipeline and significant role in global semiconductor design as key factors that make U.S.-India partnership natural and beneficial. He argues that India’s engineering capabilities and existing semiconductor expertise create ideal conditions for collaboration.
EVIDENCE
He provides specific statistics: 1.5 million engineers produced annually in India and 20% of global semiconductor chip design being done by Indian engineers in India. He also mentions that there have never been non-coercive issues in the design space, making it a natural fit for partnership.
MAJOR DISCUSSION POINT
U.S.-India Technology Partnership and Collaboration
AGREED WITH
Ambassador Sergio Gor, Secretary S. Krishnan, Sanjay Mehrotra, Jacob Helberg
Argument 2
AI represents a strategic national capability that requires nations to work together on infrastructure development and deployment
EXPLANATION
Dr. Thakur argues that AI should be treated as a strategic national capability, referencing the Indian Prime Minister’s vision. He contends that successful AI development and deployment requires collaborative efforts between nations on infrastructure and technology development.
EVIDENCE
He quotes Tata Sons chairman Mr. Chandrasekharan’s statement that under the Prime Minister’s vision, India has treated AI as a strategic national capability. He also references the Pax Silica declaration as a codification of trust and an enabler for collaborative work.
MAJOR DISCUSSION POINT
AI Revolution and Global Transformation
AGREED WITH
Ambassador Sergio Gor, Michael Kratsios, Brendan Remington
Argument 3
India’s semiconductor investments have grown from zero to $25 billion across 10 factories in three years, including AI-enabled fabs and indigenous packaging technology
EXPLANATION
Dr. Thakur highlights India’s rapid progress in semiconductor manufacturing infrastructure, demonstrating the country’s commitment and capability in this critical technology sector. He emphasizes the speed of development and the focus on AI-specific manufacturing capabilities.
EVIDENCE
He provides specific figures showing growth from no semiconductor investment three years ago to more than $25 billion being invested across 10 different factories, including partnerships with Micron and Tata Electronics. He mentions work on the first AI-enabled fab producing AI-specific chips and indigenous packaging technology development in Assam, in India’s Northeast, for automotive and edge chips for U.S. companies.
MAJOR DISCUSSION POINT
Supply Chain Security and Semiconductor Manufacturing
AGREED WITH
Secretary S. Krishnan, Sanjay Mehrotra, Jacob Helberg
William Kimmett
2 arguments, 173 words per minute, 779 words, 269 seconds
Argument 1
Program offers full-stack AI solutions through industry-led consortia while supporting national champions and sovereign AI capabilities in partner countries
EXPLANATION
Kimmett explains that the AI export program will facilitate industry-led consortia offering comprehensive AI solutions to global markets. The program is designed to provide foundational American technology while allowing countries to build their own national champions and sovereign capabilities on top of this foundation.
EVIDENCE
He mentions that the Department of Commerce issued a request for information and received hundreds of submissions from industry. The program will issue public calls for proposals from industry to submit consortia for full-stack offerings and other solutions, built on American AI tech stack but allowing national champions to build on that foundation.
MAJOR DISCUSSION POINT
American AI Export Program and Sovereignty
AGREED WITH
Brendan Remington, Michael Kratsios, Mr. Sriram Krishnan
Argument 2
AI applications in healthcare, education, agriculture, and manufacturing can unlock economic and social benefits globally
EXPLANATION
Kimmett argues that AI deployment across key sectors like healthcare and education in emerging markets can revolutionize these industries and provide significant benefits to citizens. He emphasizes the transformative potential of AI solutions when applied to crucial public services and economic sectors.
EVIDENCE
He specifically mentions working with Ministry of Health in emerging markets to develop solutions that would revolutionize their health industry for citizen benefit. He also references education, agriculture, manufacturing, and maritime as verticals with numerous new use cases and applications.
MAJOR DISCUSSION POINT
Economic and Strategic Benefits of Partnership
Brendan Remington
2 arguments, 195 words per minute, 603 words, 185 seconds
Argument 1
Initiative provides multiple choices and flexibility for countries to build sovereign AI capabilities on American technology foundation
EXPLANATION
Remington explains that the program is designed to facilitate choice and accommodate different versions of AI sovereignty. Rather than imposing a single solution, the initiative offers various options allowing each buyer to make choices based on their specific context and needs.
EVIDENCE
He describes hearing about many different versions of AI sovereignty, from villages needing their own data centers to specific LLMs for particular contexts, to simple requirements for data control and transparency. The program aims to offer ‘many choices’ and allow each context and buyer to make appropriate selections.
MAJOR DISCUSSION POINT
American AI Export Program and Sovereignty
AGREED WITH
William Kimmett, Michael Kratsios, Mr. Sriram Krishnan
Argument 2
Energy and enthusiasm in emerging markets, particularly among young people, drives demand for AI technology adoption
EXPLANATION
Remington emphasizes the high level of enthusiasm and energy he has observed, particularly among young people in emerging markets, as a driving force for AI adoption. He argues that this natural demand makes the export program’s job easier since people actively want AI technology.
EVIDENCE
He describes the ‘energy and pace’ as ‘unreal’ and notes that ‘the hunger for this is so high.’ He explains that they don’t have to sell AI in the sense of convincing people they want it, because ‘people want this’ – the challenge is how to best provide it to them.
MAJOR DISCUSSION POINT
Economic and Strategic Benefits of Partnership
AGREED WITH
Ambassador Sergio Gor, Dr. Randhir Thakur, Michael Kratsios
Michael Kratsios
1 argument, 148 words per minute, 375 words, 151 seconds
Argument 1
Program aims to democratize AI technology and share American AI stack globally while maintaining security and promoting innovation
EXPLANATION
Kratsios outlines the comprehensive American AI Export Program designed to share U.S. AI technology globally through various initiatives. The program aims to make AI technology accessible worldwide while ensuring security and fostering innovation through partnerships and support mechanisms.
EVIDENCE
He details multiple components including national champions initiative, AI agent standards initiative, financing programs through U.S. International Development Finance Corporation and Export-Import Bank, and the new U.S. Tech Corps program that will embed volunteer technical talent with import partners to provide deployment support.
MAJOR DISCUSSION POINT
American AI Export Program and Sovereignty
AGREED WITH
William Kimmett, Brendan Remington, Mr. Sriram Krishnan
Mr. Sriram Krishnan
1 argument, 117 words per minute, 1491 words, 762 seconds
Argument 1
Government approach focuses on making AI exports simple and accessible for both large companies and startups seeking international markets
EXPLANATION
Krishnan emphasizes the administration’s commitment to simplifying AI export processes and making them accessible to companies of all sizes. He highlights the government’s role in facilitating partnerships and removing barriers for American AI technology companies looking to expand globally.
EVIDENCE
He mentions President Trump’s actions in the first week of office, including rescinding the Biden diffusion rule that made it difficult for countries like India to access advanced semiconductor chips, and tasking the administration with developing an AI action plan focused on infrastructure, innovation, and partnership.
MAJOR DISCUSSION POINT
American AI Export Program and Sovereignty
AGREED WITH
William Kimmett, Brendan Remington, Michael Kratsios
Jacob Helberg
4 arguments, 156 words per minute, 240 words, 92 seconds
Argument 1
U.S.-India technology collaboration represents a natural partnership with limitless potential for deepening cooperation
EXPLANATION
Helberg facilitates discussion about opportunities to deepen U.S.-India technology collaboration, framing it as having significant potential. He guides the conversation toward understanding how this partnership can benefit both nations in critical technology areas.
EVIDENCE
He references Ambassador Gor’s arrival in India ‘with a bang’ and asks about the vision for deepening collaboration opportunities.
MAJOR DISCUSSION POINT
U.S.-India Technology Partnership and Collaboration
AGREED WITH
Ambassador Sergio Gor, Secretary S. Krishnan, Sanjay Mehrotra, Dr. Randhir Thakur
Argument 2
Partnership between America and India is crucial for securing global supply chains, particularly for companies operating at global scale
EXPLANATION
Helberg emphasizes the importance of the U.S.-India partnership for supply chain security, particularly for global technology companies. He frames this as essential for companies like Micron that operate across international markets.
EVIDENCE
He specifically asks about what the partnership means ‘for the security of the supply chains of a company like Micron’ which ‘operates on a global scale’
MAJOR DISCUSSION POINT
Supply Chain Security and Semiconductor Manufacturing
AGREED WITH
Secretary S. Krishnan, Sanjay Mehrotra, Dr. Randhir Thakur
Argument 3
Heavy data center investments have special connections to edge technology deployment in emerging markets
EXPLANATION
Helberg explores the relationship between large-scale data center infrastructure investments and edge computing technologies like smartphones and connected vehicles. He suggests this connection is particularly important for emerging market deployment.
EVIDENCE
He asks Dr. Thakur to explain ‘the special connection between heavy data center investments and edge technology like smartphones and connected vehicles, especially in emerging markets’
MAJOR DISCUSSION POINT
AI Revolution and Global Transformation
Argument 4
Global economy is undergoing incredible change driven by supply chain reorganization and AI revolution requiring new approaches
EXPLANATION
Helberg frames the current moment as one of fundamental economic transformation driven by two major forces: the reorganization of global supply chains and the AI revolution. He suggests this requires new thinking and approaches from world leaders and business executives.
EVIDENCE
He asks panelists to address ‘the global economy undergoes this incredible change driven by the reorganization of our supply chains and the AI revolution’ while gathering ‘in front of world leaders and business executives’
MAJOR DISCUSSION POINT
AI Revolution and Global Transformation
Moderator
3 arguments, 14 words per minute, 416 words, 1777 seconds
Argument 1
India AI Impact Summit represents a significant platform for showcasing U.S.-India technology collaboration and AI leadership
EXPLANATION
The moderator frames the summit as an important venue for demonstrating the partnership between the two nations in AI and technology. They emphasize the significance of having high-level U.S. officials participate in India’s AI summit.
EVIDENCE
The moderator welcomes Michael Kratsios as ‘head of delegation for the United States to the India AI Impact Summit’ and highlights his role as ‘President Trump’s National Science and Technology Advisor’
MAJOR DISCUSSION POINT
U.S.-India Technology Partnership and Collaboration
Argument 2
American AI Export Program represents a structured approach to partnering on AI technology deployment globally
EXPLANATION
The moderator introduces the concept of the American AI Export Program as a formal initiative for international AI collaboration. They frame this as a systematic approach to sharing American AI capabilities with partner nations.
EVIDENCE
The moderator introduces ‘the panel partnering on American AI Exports Program’ and welcomes speakers specifically to discuss this initiative
MAJOR DISCUSSION POINT
American AI Export Program and Sovereignty
Argument 3
Enterprise AI infrastructure requires specialized expertise at the intersection of AI and networking/security systems
EXPLANATION
The moderator highlights the critical importance of enterprise infrastructure for AI deployment, emphasizing that AI systems require robust networking and security foundations. They position this as an essential but often overlooked aspect of AI implementation.
EVIDENCE
The moderator introduces Jeetu Patel as someone who ‘sits at the intersection of AI and enterprise infrastructure’ and describes it as ‘the plumbing that makes it work,’ noting his reminder that ‘none of it works without resilient, secure infrastructure’
MAJOR DISCUSSION POINT
AI Revolution and Global Transformation
Agreements
Agreement Points
U.S.-India partnership is natural and mutually beneficial for technology collaboration
Speakers: Ambassador Sergio Gor, Secretary S. Krishnan, Sanjay Mehrotra, Dr. Randhir Thakur, Jacob Helberg
Natural partnership exists between U.S. technology and Indian innovation, strengthened by special relationship between President Trump and Prime Minister Modi
Partnership enables secure and resilient supply chains in critical technology areas through trusted allies sharing common values
Micron’s collaboration demonstrates how U.S.-India partnership advances AI through R&D facilities in India working on cutting-edge memory design while complementing U.S. manufacturing
India produces 1.5 million engineers annually and handles 20% of global semiconductor chip design, making it a natural fit for partnership
U.S.-India technology collaboration represents a natural partnership with limitless potential for deepening cooperation
All speakers agree that the U.S.-India partnership represents a natural, mutually beneficial collaboration based on complementary strengths – American technology and innovation combined with India’s engineering talent and manufacturing capabilities
AI revolution is transformative and requires global collaboration and adaptation
Speakers: Ambassador Sergio Gor, Dr. Randhir Thakur, Michael Kratsios, Brendan Remington
AI revolution is inevitable and transformative, comparable to the automobile replacing horse and buggy, requiring adaptation and partnership with like-minded nations
AI represents a strategic national capability that requires nations to work together on infrastructure development and deployment
Program aims to democratize AI technology and share American AI stack globally while maintaining security and promoting innovation
Energy and enthusiasm in emerging markets, particularly among young people, drives demand for AI technology adoption
Speakers unanimously view AI as a revolutionary technology that will fundamentally transform society and requires collaborative approaches for successful development and deployment
Supply chain security and resilience require trusted partnerships and diversification
Speakers: Secretary S. Krishnan, Sanjay Mehrotra, Dr. Randhir Thakur, Jacob Helberg
Partnership enables secure and resilient supply chains in critical technology areas through trusted allies sharing common values
Micron’s $2.75 billion investment in India for assembly and test operations will complement U.S. manufacturing and contribute to AI advancement
India’s semiconductor investments have grown from zero to $25 billion across 10 factories in three years, including AI-enabled fabs and indigenous packaging technology
Partnership between America and India is crucial for securing global supply chains, particularly for companies operating at global scale
All speakers emphasize the critical importance of building secure, resilient supply chains through trusted partnerships and avoiding single-source dependencies, particularly in semiconductor manufacturing
American AI Export Program should provide flexible, choice-driven solutions for sovereign AI capabilities
Speakers: William Kimmett, Brendan Remington, Michael Kratsios, Mr. Sriram Krishnan
Program offers full-stack AI solutions through industry-led consortia while supporting national champions and sovereign AI capabilities in partner countries
Initiative provides multiple choices and flexibility for countries to build sovereign AI capabilities on American technology foundation
Program aims to democratize AI technology and share American AI stack globally while maintaining security and promoting innovation
Government approach focuses on making AI exports simple and accessible for both large companies and startups seeking international markets
All speakers involved in the AI Export Program agree it should offer flexible, choice-driven solutions that allow countries to build sovereign capabilities while leveraging American technology foundations
Similar Viewpoints
Both emphasize the strategic importance of democratic nations collaborating on AI as a national capability, with particular focus on the U.S.-India partnership as a model for democratic cooperation in technology
Speakers: Ambassador Sergio Gor, Dr. Randhir Thakur
Collaboration between world’s oldest and largest democracies creates win-win opportunities in manufacturing, innovation, and global competitiveness
AI represents a strategic national capability that requires nations to work together on infrastructure development and deployment
Both view semiconductor partnerships and supply chain security as fundamental enablers of AI development, with Pax Silica representing a concrete mechanism for ensuring this security
Speakers: Secretary S. Krishnan, Sanjay Mehrotra
Pax Silica initiative ensures supply chain resiliency and security for AI infrastructure development between trusted partners
Memory serves as the critical fuel for AI as the growth engine of the digital economy, making semiconductor partnerships essential
Both emphasize the transformative potential of AI across multiple sectors and the high demand for AI technology, particularly in emerging markets, making the export program’s mission both feasible and impactful
Speakers: William Kimmett, Brendan Remington
AI applications in healthcare, education, agriculture, and manufacturing can unlock economic and social benefits globally
Energy and enthusiasm in emerging markets, particularly among young people, drives demand for AI technology adoption
Unexpected Consensus
Democratization of AI technology access
Speakers: Secretary S. Krishnan, Michael Kratsios, Mr. Sriram Krishnan
Partnership democratizes technology access and ensures countries don’t become dependent on single sources
Program aims to democratize AI technology and share American AI stack globally while maintaining security and promoting innovation
Government approach focuses on making AI exports simple and accessible for both large companies and startups seeking international markets
Unexpected consensus emerged around the concept of ‘democratizing’ AI technology – making it broadly accessible rather than restricting it. This represents a shift from traditional technology export controls toward more open sharing with trusted partners
Sovereign AI capabilities built on American technology foundation
Speakers: William Kimmett, Brendan Remington, Dr. Randhir Thakur
Program offers full-stack AI solutions through industry-led consortia while supporting national champions and sovereign AI capabilities in partner countries
Initiative provides multiple choices and flexibility for countries to build sovereign AI capabilities on American technology foundation
AI represents a strategic national capability that requires nations to work together on infrastructure development and deployment
Surprising alignment on supporting partner countries’ sovereign AI capabilities rather than creating dependency. This represents a collaborative approach to AI sovereignty that balances American technology leadership with partner nation autonomy
Overall Assessment

Strong consensus exists across all speakers on the strategic importance of U.S.-India AI partnership, the transformative nature of AI technology, the need for secure supply chains through trusted partnerships, and the value of flexible, choice-driven AI export programs that support sovereign capabilities

Very high level of consensus with no significant disagreements identified. This strong alignment suggests effective coordination among U.S. officials and industry leaders, and indicates potential for successful implementation of collaborative AI initiatives. The consensus spans both strategic vision and practical implementation approaches, suggesting robust policy coherence across government and industry stakeholders

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion shows remarkable consensus among all speakers regarding U.S.-India AI collaboration, with no significant disagreements identified. All participants align on the benefits of partnership, the importance of AI technology sharing, and the strategic value of the relationship between the world’s oldest and largest democracies.

Very low disagreement level. The discussion represents a highly collaborative and aligned conversation where speakers complement rather than challenge each other’s viewpoints. This consensus suggests strong institutional alignment on AI export policy and U.S.-India technology partnership, though it may also indicate limited diversity of perspectives in this particular forum. The implications are positive for policy implementation but may benefit from broader stakeholder input to identify potential challenges or alternative approaches.

Partial Agreements
Both speakers agree on the goal of enabling sovereign AI capabilities for partner countries, but they emphasize different approaches – Kimmett focuses on industry-led consortia providing full-stack solutions with national champions building on American foundations, while Remington emphasizes providing multiple choices and flexibility to accommodate different versions of AI sovereignty based on specific country contexts and needs
Speakers: William Kimmett, Brendan Remington
Program offers full-stack AI solutions through industry-led consortia while supporting national champions and sovereign AI capabilities in partner countries
Initiative provides multiple choices and flexibility for countries to build sovereign AI capabilities on American technology foundation
Takeaways
Key takeaways
The U.S.-India technology partnership represents a natural alliance between American innovation and Indian engineering talent, strengthened by the personal relationship between President Trump and Prime Minister Modi
The AI revolution is inevitable and transformative, requiring nations to adapt quickly and partner with like-minded democracies rather than resist the change
Supply chain security and diversification are critical – countries must avoid dependence on single sources and build trusted partnerships with nations sharing common values
The American AI Export Program will offer full-stack AI solutions through industry-led consortia while supporting sovereign AI capabilities in partner countries
Memory and semiconductors serve as the fundamental infrastructure enabling AI advancement, making U.S.-India collaboration in this sector strategically vital
India’s rapid growth in semiconductor manufacturing (from zero to $25 billion investment in three years) and engineering capacity (1.5 million engineers annually, 20% of global chip design) positions it as an ideal partner
AI applications in healthcare, education, agriculture, and manufacturing can unlock significant economic and social benefits globally, particularly in emerging markets
The partnership between the world’s oldest and largest democracies creates a model for technology collaboration based on shared democratic values
Resolutions and action items
Launch of Pax Silica initiative to ensure supply chain resiliency and security for AI infrastructure between U.S. and India
Department of Commerce to finalize and issue public call for proposals from industry for AI export consortia following completion of request for information process
Micron to proceed with $2.75 billion investment in assembly and test operations in Gujarat, India, complementing U.S. manufacturing capabilities
Development of AI-enabled semiconductor fabrication facilities in India using indigenous packaging technology
Creation of AI agent standards initiative to establish industry-led, open, and secure AI standards
Launch of U.S. Tech Corps to embed volunteer technical talent with partner countries for AI deployment support
Establishment of new AI-focused financing programs through U.S. International Development Finance Corporation and Export-Import Bank
Unresolved issues
Specific technical details and implementation timeline for the AI export consortia program remain to be finalized
Exact mechanisms for balancing sovereign AI capabilities with American technology stack integration need further definition
Detailed frameworks for ensuring supply chain security while maintaining open technology collaboration require development
Specific use case priorities and resource allocation across different sectors (healthcare, education, agriculture, manufacturing) need clarification
Integration challenges between U.S. government AI adoption and export promotion efforts remain to be addressed
Suggested compromises
Offering multiple consortium models to accommodate both simple ‘T-shirt size’ solutions and complex customized AI implementations
Providing flexible AI sovereignty options allowing countries to choose which parts of the technology stack to build domestically versus import
Balancing national champion support with American technology foundation to enable local innovation while maintaining strategic partnerships
Creating tiered access levels for different types of buyers including governments, state-owned enterprises, and private companies
Establishing financing mechanisms to help developing countries overcome economic barriers to AI adoption while maintaining commercial viability
Thought Provoking Comments
The 20th century ran on oil and steel. The 21st century runs on compute and the minerals that feed it.
This analogy provides a powerful historical framework for understanding the current technological shift. It elegantly captures how fundamental resources define entire centuries and positions semiconductors/minerals as the new strategic resources, similar to how oil and steel shaped the industrial age.
This comment reframed the entire discussion around supply chain security and semiconductor partnerships as not just technical cooperation, but as fundamental to 21st-century geopolitical and economic power. It elevated the conversation from tactical partnerships to strategic historical significance.
Speaker: Dr. Randhir Thakur
Memory is a critical enabler of AI. Just think of it this way, that if AI is driving, is the growth engine of the digital economy, then memory is the fuel.
This metaphor brilliantly simplifies a complex technical relationship by comparing AI infrastructure to automotive mechanics that everyone can understand. It highlights how often-overlooked components (memory/storage) are actually foundational to the entire AI revolution.
This shifted the focus from high-level AI capabilities to the critical infrastructure components that enable them. It helped ground the discussion in practical realities and emphasized why companies like Micron’s manufacturing partnerships are strategically important to the AI ecosystem.
Speaker: Sanjay Mehrotra
India wants to get involved. But also the magic touch is that special relationship between our two leaders… for those colleagues of mine from Washington to understand the difference that it makes when our president likes you or he doesn’t like you.
This comment provides unusually candid insight into how personal diplomatic relationships translate into policy outcomes. It acknowledges the often-unspoken reality that personal chemistry between leaders can be as important as formal agreements in international relations.
This comment introduced a more personal and political dimension to what had been primarily a technical and economic discussion. It helped explain why the US-India partnership has momentum and set the stage for understanding the policy announcements that followed.
Speaker: Ambassador Sergio Gor
We recognize that partners need a chance to build their native technology industries and believe facilitating this will be a critical part of the export program… To facilitate the development of industry-led, open, and secure AI standards and to give the public confidence in this next generation of technology, we are creating an AI agent standards initiative.
This represents a significant policy shift from traditional export approaches that often create dependency. Instead, it acknowledges that true partnerships require enabling local capability building while maintaining American technological leadership through standards and infrastructure.
This comment fundamentally reframed the export discussion from a traditional seller-buyer relationship to a more collaborative partnership model. It introduced the concept of ‘AI sovereignty’ as compatible with American technology leadership, which became a central theme in subsequent discussions.
Speaker: Michael Kratsios
We hear about AI sovereignty a lot. And there are many different versions of this… Some of them just say I want control over my data. I want to know where it goes. I want transparency. Because there are so many permutations, we want to offer these many choices.
This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents different concerns and needs across different contexts. It shows policy flexibility and recognition of diverse national priorities.
This nuanced view helped move the conversation away from binary thinking about technology dependence vs. independence toward a more flexible, modular approach to AI partnerships. It opened space for discussing how different countries could engage with American AI technology in ways that meet their specific sovereignty concerns.
Speaker: Brendan Remington
20% of the global semiconductor industry designed is done by Indian engineers here in India. And we never really had any non-coercive issues in the design space.
This statistic powerfully illustrates India’s existing centrality to global semiconductor design while subtly referencing geopolitical tensions (the ‘non-coercive’ comment likely refers to China). It positions India as already integral to global tech supply chains, not just an emerging market.
This comment shifted the framing of the US-India partnership from developed-developing country cooperation to a partnership between two already-integral parts of the global technology ecosystem. It strengthened the case for deeper integration and helped justify major policy initiatives like Pax Silica.
Speaker: Dr. Randhir Thakur
Overall Assessment

These key comments collectively transformed what could have been a routine diplomatic and business discussion into a more profound conversation about the historical moment we’re living through and how technological partnerships will shape the 21st century. The most impactful insights came from speakers who provided either powerful analogies (Thakur’s oil/steel comparison, Mehrotra’s memory-as-fuel metaphor) or candid political realities (Gor’s comments on personal relationships, Kratsios and Remington’s nuanced views on sovereignty). Together, these comments elevated the discussion from tactical cooperation to strategic historical significance, while also introducing important nuances about how AI partnerships can respect different national priorities and sovereignty concerns. The conversation evolved from simple export promotion to a more sophisticated dialogue about collaborative technological leadership in a multipolar world.

Follow-up Questions
How can the U.S. government better integrate AI into its own operations to become more efficient?
Kimmett noted that while they’re helping export U.S. AI tech stack to the world, the U.S. government itself needs to do a better job bringing AI into government operations, particularly in areas like supply chain analysis at the International Trade Administration
Speaker: William Kimmett
What are the specific mechanisms and processes for how AI export consortia will actually work in practice?
Remington mentioned that while they described what they’ve heard so far, audiences would ‘hear more on how it actually works,’ indicating that detailed operational mechanisms are still being developed
Speaker: Brendan Remington
How can the AI export program accommodate both standardized ‘small, medium, large’ solutions and highly customized niche requirements?
Remington identified the challenge of serving different market needs – some wanting simple standardized solutions and others requiring very unique, specialized offerings
Speaker: Brendan Remington
What are the specific financing mechanisms and programs being developed by U.S. financial institutions to support AI adoption in developing countries?
Kratsios mentioned that the U.S. International Development Finance Corporation, Export-Import Bank, Trade and Development Agency, Millennium Challenge Corporation, and a new World Bank fund have initiated new AI-focused programs, but details weren’t provided
Speaker: Michael Kratsios
How will the U.S. Tech Corps initiative work to embed volunteer technical talent with import partners?
Kratsios announced this new initiative to bring the Peace Corps into the 21st century but didn’t provide operational details on how volunteers would be selected, trained, or deployed
Speaker: Michael Kratsios
What are the specific technical capabilities and innovations of Sarvam’s new model that was launched?
Krishnan mentioned being ‘blown away’ by Sarvam’s new model and encouraged people to check out technical details, but those details weren’t discussed in the session
Speaker: Mr. Sriram Krishnan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Secure Finance Risk-Based AI Policy for the Banking Sector

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on the governance of artificial intelligence in India’s financial services sector, emphasizing the need for embedded governance frameworks rather than reactive regulatory approaches. The panel featured key policymakers and industry leaders examining how AI governance should be integrated into financial systems from inception rather than applied as an afterthought.


Ajay Kumar Chaudhary opened by highlighting India’s opportunity to lead in AI development while managing associated risks through embedded governance across the AI lifecycle. He emphasized the concept of “MANO” (humanity) as introduced by India’s Prime Minister, suggesting a shift from “responsible AI” to “human AI” that encompasses moral, ethical, and inclusive considerations. Chaudhary outlined four foundational pillars for embedded governance: proportionality, fairness and non-discrimination, explainability and transparency, and clear accountability.


Economic Advisor Sanjeev Sanyal challenged traditional risk-based regulatory approaches, arguing that AI’s emergent and evolving nature makes it impossible to predict risks accurately in advance. He advocated for a system similar to financial market regulation, emphasizing transparency, explainability, audit systems, compartmentalization, and clear accountability with “skin in the game” for those developing AI systems. Sanyal warned against the “AI of everything” approach, favoring bounded, compartmentalized AI applications that are easier to control and more energy-efficient.


The discussion highlighted India’s unique position in developing AI governance frameworks that balance innovation with prudential oversight. Participants emphasized the importance of building trust through transparent, explainable systems while maintaining the flexibility to adapt as AI technology evolves. The panel concluded that effective AI governance requires interdisciplinary collaboration and continuous monitoring rather than static regulatory frameworks.


Keypoints

Major Discussion Points:

Embedded AI Governance Framework: The central theme focused on integrating governance into AI systems from inception rather than as an afterthought. Speakers emphasized that AI governance must be built into the entire lifecycle – from design and data acquisition to deployment and monitoring – rather than applied as a compliance overlay after implementation.


Risk-Based vs. Emergent Technology Challenges: A significant debate emerged around traditional risk-based regulatory approaches versus the unpredictable, emergent nature of AI. Sanjeev Sanyal argued that AI’s evolving characteristics make it impossible to predict risks ex-ante, advocating instead for compartmentalized systems with clear accountability, transparency requirements, and “skin in the game” mechanisms similar to financial market regulation.


India’s Strategic AI Positioning: Discussion centered on how India should position itself globally in AI governance, leveraging its successful digital public infrastructure experience (UPI, digital identity) while avoiding the pitfalls of overly restrictive European models, laissez-faire American approaches, or state-controlled Chinese systems. The emphasis was on creating an “India-first” approach that is context-aware but globally coherent.


Trust and Inclusion in Financial AI: Panelists explored how AI can enhance financial inclusion through better credit assessment and fraud detection while ensuring fairness and transparency. The discussion highlighted the need for AI systems to be explainable “glass boxes” rather than opaque “black boxes,” particularly when affecting access to financial services.


Cybersecurity and Infrastructure Resilience: The conversation addressed how generative AI both enhances cybersecurity capabilities and creates new attack vectors. Emphasis was placed on the need for robust, compartmentalized infrastructure and the importance of maintaining human oversight while leveraging AI as a tool rather than a replacement.


Overall Purpose:

The discussion aimed to explore how governance frameworks can be embedded within AI systems used in financial services, ensuring that innovation proceeds responsibly without stifling progress. The goal was to chart a path for India’s AI governance approach that balances innovation with safety, inclusion, and systemic stability.


Overall Tone:

The discussion maintained a thoughtful, forward-looking tone throughout, characterized by cautious optimism about AI’s potential while acknowledging significant challenges. The tone was collaborative and solution-oriented, with panelists building on each other’s insights. There was a notable shift from theoretical frameworks in the opening keynote to more practical, implementation-focused discussions during the panel, culminating in specific suggestions for regulatory sandboxes and industry collaboration. The atmosphere remained professional and constructive, with speakers demonstrating mutual respect for different perspectives on this complex topic.


Speakers

Speakers from the provided list:


Moderator – Role: Discussion moderator for the AI governance panel


Ajay Kumar Chaudhary – Role: Keynote speaker; appears to be a senior policy official discussing AI governance and financial services


Priyanka Jain – Role: Panel moderator and discussion facilitator; mentioned as being from 5Money and having experience with RBI sandbox programs


Sanjeev Sanyal – Role: Economic Advisor to the Prime Minister; described as a macro thinker, historian, and strategic geopolitical analyst


Praveen Kamat – Role: Official from GIFT City IFSC (International Financial Services Centre); expertise in financial regulation and innovation


Vikram Kishore Bhattacharya – Role: Cloud service provider representative; expertise in cybersecurity and cloud infrastructure


Murlidhar Manchala – Role: RBI (Reserve Bank of India) official; expertise in AI frameworks and financial regulation


Audience – Role: Audience member asking questions; identified as Aditya, founder of First Tile (customer data platform company)


Additional speakers:


None identified beyond the provided speaker names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion on artificial intelligence governance in India’s financial services sector brought together senior policymakers, regulators, and industry leaders to examine how governance frameworks can be embedded within AI systems from inception rather than applied as regulatory afterthoughts. The panel explored India’s unique positioning in the global AI landscape and the critical balance between fostering innovation whilst maintaining systemic stability and public trust.


Opening Framework: Embedded Governance as Strategic Imperative

Ajay Kumar Chaudhary opened the discussion by establishing that AI governance must be embedded throughout the entire technology lifecycle rather than appended as a compliance overlay. Drawing upon Prime Minister Modi’s concept of “MANO” (humanity), Chaudhary proposed shifting from “responsible AI” to “human AI” that encompasses moral, ethical, accountable, and inclusive considerations.


Chaudhary emphasized that AI has evolved from an analytical tool to infrastructure that shapes financial outcomes, requiring treatment as systemically relevant infrastructure. He outlined key governance principles including proportionality in risk-based intensity, fairness and non-discrimination, explainability and transparency, and clearly defined accountability. The keynote highlighted India’s unique position in deploying digital public infrastructure at population scale whilst maintaining inclusion and trust, noting that AI’s adaptive characteristics require governance frameworks that evolve alongside the technology.


Challenging Conventional Regulatory Wisdom

Economic Advisor Sanjeev Sanyal fundamentally challenged prevailing approaches to AI regulation, arguing that traditional risk-based governance frameworks are inadequate for emergent technologies. Drawing historical parallels, he noted that technological dominance often goes not to inventors but to those who master and strategically deploy innovations, citing how Europeans dominated the world using the Chinese-invented printing press and gunpowder alongside Indian mathematics.


Sanyal’s critique of risk-based regulation proved particularly provocative, arguing that AI’s emergent and unpredictable nature makes ex-ante risk assessment nearly impossible. He contended that European-style risk categorisation systems would either strangulate innovation through excessive stringency or fail to control risks due to their unpredictable evolution.


Central to Sanyal’s argument was deep skepticism about interconnected “AI of everything” approaches, which he characterized as potentially disastrous. He advocated for deliberately compartmentalized AI systems that solve bounded problems—using examples like chess (bounded) versus career planning (unbounded)—more safely and efficiently. This compartmentalization strategy would function like forest fire breaks, preventing systemic failures from cascading across interconnected systems.


Sanyal proposed specific mechanisms including mandatory AI audits for systems above certain thresholds, predetermined responsibility chains that assign accountability before failures occur, and circuit breaker mechanisms. His approach emphasized ex-post punishment systems rather than attempting to predict and prevent all possible risks in advance.

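Sanyal’s circuit-breaker proposal maps onto a well-known software pattern. A minimal sketch in Python, with all class names, thresholds, and timeouts hypothetical (not anything the panel specified), shows how an institution might stop routing decisions to a misbehaving model and fall back to a conservative rule:

```python
import time

class ModelCircuitBreaker:
    """Stop calling an AI model once recent failures cross a threshold.

    After `max_failures` consecutive errors the breaker 'opens' and a
    conservative fallback is used until `reset_after` seconds pass.
    All thresholds here are illustrative, not regulatory prescriptions.
    """

    def __init__(self, model, fallback, max_failures=3, reset_after=60.0):
        self.model = model
        self.fallback = fallback
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # time the breaker opened, if open

    def _is_open(self):
        if self.opened_at is None:
            return False
        if time.monotonic() - self.opened_at >= self.reset_after:
            # half-open: allow a trial call through again
            self.opened_at = None
            self.failures = 0
            return False
        return True

    def predict(self, x):
        if self._is_open():
            return self.fallback(x)
        try:
            result = self.model(x)
            self.failures = 0  # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return self.fallback(x)

# Toy demonstration: a "model" that always errors, and a rule-based fallback.
def flaky_model(x):
    raise RuntimeError("model unavailable")

def rule_fallback(x):
    return "manual_review"  # conservative default

breaker = ModelCircuitBreaker(flaky_model, rule_fallback, max_failures=2)
print(breaker.predict({"amount": 100}))  # falls back (failure 1)
print(breaker.predict({"amount": 200}))  # falls back (failure 2, breaker opens)
```

This mirrors the compartmentalization idea: a failure in one bounded AI component is contained rather than cascading through interconnected systems.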

Regulatory Innovation and Experimentation

Praveen Kamat from GIFT City IFSC provided insights into how new regulatory jurisdictions can serve as laboratories for AI governance innovation. He highlighted GIFT City’s advantages as a jurisdiction built from scratch with a clean regulatory slate, enabling greater experimentation without legacy system constraints. Drawing from his SEBI experience (2008-2010), he noted how algorithmic trading evolved from being viewed with suspicion to becoming standard practice.


Kamat acknowledged the fundamental regulatory challenge of balancing innovation with stability, noting that over-regulation repels innovation whilst under-regulation repels serious long-term capital. He described existing interoperable sandbox mechanisms across regulators (RBI, SEBI, IRDAI, and IFSCA) that enable testing of cross-sector AI solutions, though legal frameworks present more significant challenges than technological or financial barriers.


Central Bank Perspective on AI Governance

Murlidhar Manchala from the Reserve Bank of India outlined the RBI’s principles-based approach to AI governance, emphasizing frameworks that build upon existing regulatory architecture. The RBI’s framework focuses on ensuring AI systems remain transparent “glass boxes” rather than opaque “black boxes,” particularly when affecting customer access to financial services.

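The “glass box” requirement can be made concrete with a scoring function that returns, alongside every decision, the exact contribution of each input. The sketch below assumes a simple additive scorecard; the feature names, weights, and threshold are invented for illustration and are not any regulator’s or bank’s actual model.

```python
def glass_box_score(features, weights, bias, threshold):
    """Additive credit score where every feature's contribution is logged.

    Returns the decision plus a per-feature breakdown that an auditor
    (or the affected customer) can inspect. Weights are illustrative only.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return {
        "score": round(score, 3),
        "approved": score >= threshold,
        "explanation": contributions,  # the 'glass box': no hidden terms
    }

# Hypothetical applicant and weights (not a real scorecard).
weights = {"monthly_cashflow": 0.004, "years_in_business": 0.3,
           "missed_payments": -1.5}
applicant = {"monthly_cashflow": 500, "years_in_business": 4,
             "missed_payments": 1}

decision = glass_box_score(applicant, weights, bias=0.0, threshold=1.0)
print(decision["approved"])
print(decision["explanation"])
```

Opaque models would need a post-hoc explanation layer to achieve the same auditability; the design choice here is that the explanation is the model itself.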

Manchala highlighted the RBI’s recognition that AI technology is inherently probabilistic and may experience lapses despite robust governance frameworks. The central bank’s approach includes provisions for supervisory relief for entities that implement comprehensive controls, conduct proper root cause analysis, and maintain transparent incident reporting mechanisms. The framework also includes provisions for recognizing entities that demonstrate excellence in AI governance, suggesting a carrot-and-stick approach that incentivizes best practices.


Infrastructure and Cybersecurity Considerations

Vikram Kishore Bhattacharya addressed AI’s dual role in cybersecurity—simultaneously enhancing defensive capabilities whilst providing new tools for malicious actors. He emphasized that whilst generative AI has lowered barriers for threat actors, it hasn’t fundamentally changed the nature of attacks, meaning existing cybersecurity principles remain valid.


Bhattacharya advocated for a paradigm shift from “human-in-the-loop” to “AI-in-the-loop,” positioning AI as a tool that enhances human decision-making rather than replacing human oversight. His perspective emphasized trust-but-verify approaches with cloud service providers, validated through standards like ISO and NIST certifications.


Strategic Positioning and Sovereignty Concerns

The discussion extensively explored India’s strategic positioning in the global AI landscape, particularly regarding data sovereignty and supply chain resilience. Chaudhary highlighted concentration risks in the AI stack, noting that one firm controls over 90% of advanced chips, three dominate cloud capacity, and a handful command foundational models—creating potential vulnerabilities for financial stability and economic sovereignty.


Sanyal particularly emphasized India’s need to develop domestic AI processing capabilities, noting recent budget provisions including substantial tax holidays for data center development as strategic investments in the “oil rigs” of the data economy. He argued that India’s large population provides significant advantages for AI training data, but only if accompanied by domestic processing capabilities and clear data ownership rights.


A significant audience question from Aditya, founder of First Tile, addressed sovereign data assets and how India can leverage its data advantage. The panel emphasized that India must move beyond being merely a data source to controlling data processing and deriving value domestically.


Inclusion and Fairness Imperatives

Chaudhary outlined AI’s potential to deepen financial inclusion through granular, dynamic risk assessment that reduces reliance on collateral-heavy models. He cited specific examples from NPCI’s high-value payment environments where AI has reduced fraud by 25-30 percent. However, speakers acknowledged significant risks that AI could perpetuate existing inequalities if trained on historically skewed datasets.

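As a toy illustration of the anomaly-flagging logic behind such fraud systems (this sketch is hypothetical and far simpler than anything NPCI deploys), a transaction can be flagged when it deviates sharply from the account’s own history:

```python
from statistics import mean, stdev

def is_anomalous(history, amount, z_threshold=3.0):
    """Flag `amount` if it lies more than `z_threshold` standard deviations
    above the mean of the account's past transaction amounts.

    A deliberately tiny stand-in for production fraud models; the
    threshold is an assumption, not a recommended setting.
    """
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return (amount - mu) / sigma > z_threshold

past = [120, 95, 110, 130, 105]   # typical spends for this account
print(is_anomalous(past, 115))     # ordinary amount -> False
print(is_anomalous(past, 5000))    # extreme outlier -> True
```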

The panel explored how AI governance frameworks must account for India’s linguistic diversity, demographic heterogeneity, and income variability—factors that heighten model risk if not properly addressed. This requires governance frameworks that are context-aware whilst remaining globally coherent.


Practical Implementation Challenges

The discussion revealed several practical challenges in implementing embedded AI governance, including the need for interdisciplinary capabilities and AI literacy at board and senior management levels. Speakers emphasized that leaders must understand model architecture, validation methodologies, vendor dependencies, and ethical implications.


Copyright and intellectual property frameworks emerged as critical areas requiring reform. Sanyal posed fundamental questions about ownership of AI-generated innovations and emphasized the need for new legal frameworks and potentially new judicial capabilities to handle AI-related disputes.


Global Regulatory Landscape and India’s Approach

The panel examined different global approaches to AI regulation, contrasting innovation-led American models, compliance-heavy European frameworks, and state-controlled Chinese systems. The discussion suggested India should chart a distinctive path that leverages its unique advantages whilst avoiding the pitfalls of other approaches.


Speakers emphasized that India’s approach should be “context-aware” rather than merely adopting international frameworks wholesale, maintaining the balance India has successfully achieved with previous digital infrastructure initiatives.


Unresolved Tensions and Future Directions

The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and emergent-technology governance approaches. The balance between innovation and safety emerged as an ongoing challenge, with the panel suggesting this might be achieved through regulatory sandboxes, supervisory relief for well-governed entities, and clear accountability frameworks.


Questions about the appropriate level of AI system interconnection versus compartmentalization remain unresolved, with implications for both efficiency and safety. Similarly, ongoing debates about where in the AI value chain responsibility should be assigned continue to require attention.


Conclusion

The discussion concluded with recognition that AI governance represents both a technical challenge and a strategic opportunity for India. The panel emphasized that effective AI governance requires moving beyond theoretical frameworks to practical implementation mechanisms, including developing audit systems, accountability frameworks, and continuous monitoring capabilities that can evolve alongside the technology.


The conversation framed AI governance not as a constraint on innovation but as an enabler of sustainable, trustworthy AI deployment that can serve broader goals of financial inclusion, systemic stability, and economic sovereignty. The path forward requires continued collaboration between policymakers, regulators, and industry participants to develop governance frameworks that are both robust and adaptive to the challenges of governing emergent technologies.


Session transcript: Complete transcript of the session
Moderator

Thank you. Very much in line with the overall theme of the summit, we are looking at the overall aspect of governance of AI, but not as something that will be set aside and looked at through a different lens altogether; rather, as something that can be looked at as an embedded layer of the governance we already apply to technologies. In the interest of the time that we have with us, I will request the panelists to be seated on the dais, and I will request AK Chaudhary sir to please begin his keynote.

Ajay Kumar Chaudhary

Good afternoon to everyone. Distinguished policy makers, regulators, industry leaders, members of the FinTech community, and esteemed guests. I have been very closely following, over the last four days, how and what things are happening, and it was amazing: the type of enthusiasm, the type of excitement, and the type of buzz around AI at this summit. I believe that whatever is there is actually a real thing which is happening. Possibly multiple small applications are going to come in coming days, which will solve multiple issues and problems, and we will have a real leading role to play as a country. That is the way we look at it. We will also have a great role to play on the data side, particularly when we are going to train the models. Obviously, when we are going to scale up the entire thing, then possibly there might be some risks also; some are known, some unknown, and for the unknown much cannot be done, except that we need to take care of embedding the governance part.

That is the theme of today's talk: how we need to embed governance across the entire life cycle of AI, into the design of AI. That is the way we have to look at it. Yesterday I was again listening to our Honorable Prime Minister, and the beautiful way that he summarized the entire theme in one word, that is called mano, that is, humanity. So possibly in future I am going to use that instead of responsible AI; possibly we can talk about human AI, because it is going to touch upon moral and ethical systems, accountable governance, national sovereignty, and accessible and inclusive values. All the aspects we are going to touch upon, everything is covered in this one word that is called Mano.

Now, coming back to my proposed address. It's indeed a privilege to participate in this dialogue at a defining moment in India's digital evolution. Over the past decade, India has demonstrated how population-scale digital public infrastructure can drive inclusion, efficiency and trust. Systems built with interoperability, transparency and scale at their core have reshaped financial participation for millions. Today we stand at the next inflection point in that journey. A new tech layer is being superimposed upon this digital foundation. AI, artificial intelligence as we know it, is not arriving in isolation. It is integrating with payment systems, credit and risk management platforms, supervisory frameworks, and cybersecurity architecture that already operate at national scale.

This convergence of scale and intelligence marks a structural shift. Unlike earlier waves of digitalization that automated existing processes, AI introduces adaptive systems, systems that learn, recalibrate, and influence outcomes dynamically. In a country as large and diverse as India, such systems do not merely improve efficiency; they shape access, opportunity, and systemic resilience. The question before us is not whether AI will transform finance. It already is. The more fundamental question is whether governance will evolve at the same pace as innovation, and whether it will be designed into systems from inception rather than appended later as a compliance afterthought. In financial services, trust is foundational. AI systems cannot function as opaque black boxes, especially when they influence access to credit or flag financial behavior.

Governance cannot be an overlay applied after innovation has already been scaled. It must be embedded by design. As Peter Drucker observed, quote, management is doing things right, leadership is doing right things, unquote. In the context of AI in finance, governance is not merely about technical correctness. It is about doing the right things at the right time, in ways that preserve trust, resilience, and inclusion. Now, looking at AI as an infrastructure tool: it has evolved from analytical assistance to shaping financial outcomes. In credit markets, machine learning models analyze transaction histories, behavioral signals, and dynamic cash flows to generate granular borrower assessments. In fraud prevention, AI detects anomalous activities within milliseconds, processing volumes beyond earlier systems. AI-enabled detection can reduce certain categories of fraud losses by up to 25 to 30 percent at this point of time in high-value payment environments, as we are witnessing at NPCI.

Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to emerging threats in real time. The diffusion of AI across the financial value chain enhances efficiency and precision. Yet, when models operate on a systemic scale, even marginal inaccuracy can produce material consequences. In finance, where stability and trust are public goods, the tolerance for systemic error is limited. India's financial system adds its own complexities. Its scale of digital participation, linguistic diversity, demographic heterogeneity, and income variability heighten model risk. Models trained on narrow, urban-centric, or historically skewed datasets may inadvertently misclassify, misprice, or exclude segments that digital finance is intended to integrate. It is therefore imperative that we do not view AI as a peripheral tech enhancement.

It must instead be understood as a component of financial infrastructure which is systemically relevant, and it should be subject to the same standards of resilience, governance and accountability that we expect of any critical financial utility. When we talk about embedded governance in AI: historically, regulation in financial services has often responded to innovation after risks have materialized. Governance in the AI era must, however, be embedded into system design. Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle, from conceptualization and data acquisition to model development, deployment and ongoing monitoring. It rests on several foundational pillars. I will mention four. One is proportionality, that is, governance intensity should be risk-based.

Second is fairness and non-discrimination. Third is explainability and transparency. And fourth is accountability, which must be clearly defined. While institutions may collaborate with tech providers or leverage shared infrastructure, responsibility for outcomes cannot be outsourced. Given the potential vulnerabilities of the AI systems that shape their operations, board and senior management must understand their logic, limitations, et cetera. Further, and more importantly, in financial AI, algorithmic efficiency should not compromise equitable opportunity. Now, coming specifically to financial infrastructure and the risk-based approach to AI governance, I will just touch upon this. A risk-based approach to AI governance acknowledges that innovation and prudence are not opposing forces. They are complementary. Financial authorities globally are converging on principles that emphasize robustness, resilience, transparency, and human oversight.

India's regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsibility. The objective is not to slow innovation, but to ensure that systemic risk does not accumulate invisibly. Several risk dimensions deserve particular attention as AI becomes integral to financial systems. These may include multiple issues; I will touch upon only four. One is model integrity. It can no longer be viewed as a one-time validation exercise. Intelligent systems must be evaluated across economic cycles and stressed against extreme but plausible scenarios. As data patterns evolve and models recalibrate, continuous oversight becomes inevitable to guard against drift, unintended bias, or reinforcing feedback loops. Second is operational concentration risk, which I will detail subsequently. It is an emerging systemic concern.

Diversification and resilience planning are essential to safeguard continuity. Third, data governance, through data integrity, consent management, purpose limitation, and minimization principles, is foundational. Financial data is not merely transactional; it reflects livelihoods, behavioral choices, and economic participation. And the fourth item is cybersecurity risks, which are amplified in the AI environment. As AI strengthens defense mechanisms, it can also be leveraged by adversaries. Institutions must anticipate adversarial AI and strengthen defensive and detection capabilities accordingly. A risk-based framework recognizes that governance cannot be static. Systems that learn and evolve demand oversight that is equally dynamic, as also measured, proportionate and forward-looking.

Now, just touching upon supervisory intelligence. As AI permeates financial institutions, supervisory frameworks are also evolving. Supervisors increasingly leverage advanced analytics to monitor systemic patterns, identify anomalies and strengthen early warning mechanisms. This creates a reciprocal dynamic: institutions embed AI in operations while oversight bodies integrate intelligence into supervision. However, governance cannot be regulator-driven alone; institutional capability is critical. AI literacy at the board and senior management level is no longer optional. Leaders must understand model architecture, validation methodology, vendor dependencies and ethical implications. Effective governance requires interdisciplinary capability, bringing together tech, risk, compliance and legal experts as well as business leaders. Institutions that integrate AI governance into their ERM framework strengthen resilience. As Christine Lagarde has noted, innovation and regulation are not adversaries; they are partners in progress. That partnership must guide the embedding of AI within finance.

Coming to the inclusion part, what our Honorable Prime Minister has mentioned about the last A in MANO, that is access and inclusion: India's financial transformation has been anchored in inclusion. Over the past decades, tech has lowered barriers, reduced transaction costs and brought millions into the formal financial ecosystem. AI now offers an opportunity to deepen that trajectory through granular, dynamic risk assessment. It can reduce reliance on collateral-heavy models and static credit history.

Transaction-level data, cash flow analytics and behaviour indicators can provide more nuanced insight into repayment capacity, particularly for MSMEs who are presently outside the traditional credit framework. India is expected to account for a significant share of global digital transaction growth this decade. If harnessed responsibly, AI can convert this expanding digital footprint into broader formal access to fair financial services and adoption at scale. Yet, inclusion cannot be assumed. It must be intentionally designed. Algorithms trained on historically skewed datasets risk perpetuating structural inequalities. Informal-sector income volatility and gender-based data gaps may distort credit outcomes. Without corrective safeguards, technology may reinforce rather than reduce disparities. Inclusive AI thus requires representativeness in training datasets, periodic impact audits, and community-level feedback mechanisms.

It calls for institutional mechanisms that allow individuals to seek clarification and redress where automated decisions affect their financial standing. Now, coming to sovereign and resilient AI foundations. AI governance intersects not only with institutional risk, but with strategic resilience. Concentration in advanced chips and foundational AI models raises critical considerations for economic sovereignty, financial stability and, I can further add, national security. Dependency on limited supply chains can create systemic vulnerability. If we look at the AI stack more granularly, it rests on five interdependent layers. At the base are the specialized semiconductor chips we all know. Above this sits the cloud and data center infrastructure that provides scalable processing capacity.

These systems are fueled by vast datasets drawn from public and proprietary sources. On this foundation operate large foundation models adaptable across domains, and finally, at the top, are applications that embed AI into financial services and everyday economic life. In this context we should be conscious of the fact that one firm controls more than 90% of advanced chips, three dominate cloud capacity, and a handful command foundation models, threatening financial stability and economic sovereignty. We must therefore diversify supply chains to the extent possible, through domestic innovation and international collaboration, to secure resilient AI foundations. Further, if we look at the pathway for ecosystem scaling, we possibly have to look at consent-based data sharing, shared AI and risk infrastructure, investment in AI literacy and governance at all levels, including the board and senior management, and, most importantly, encouraging homegrown tech and AI-capable entities.
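The layered stack and the concentration concern described here can be sketched in code. This is purely an illustrative model: the layer names and the "one firm controls 90%" figure come from the talk, while the remaining market-share numbers, the `AI_STACK` structure, and the HHI threshold are assumptions for demonstration.

```python
# Illustrative model of the five-layer AI stack with a naive
# supply-concentration check based on the Herfindahl-Hirschman index.
# Shares below the chip layer are hypothetical placeholders.

AI_STACK = [
    # (layer, example market shares of top suppliers, as fractions of 1.0)
    ("semiconductor_chips", [0.90]),               # "one firm controls more than 90%"
    ("cloud_infrastructure", [0.32, 0.23, 0.20]),  # "three dominate cloud capacity" (shares assumed)
    ("datasets", [0.15, 0.10, 0.10]),              # assumed
    ("foundation_models", [0.40, 0.25, 0.15]),     # "a handful command foundation models" (assumed)
    ("applications", [0.05] * 10),                 # assumed: a long tail of app builders
]

def herfindahl(shares):
    """Herfindahl-Hirschman index: sum of squared shares (0..1 scale)."""
    return sum(s * s for s in shares)

def concentration_risk(layer, shares, threshold=0.25):
    """Flag a layer as a concentration risk if its HHI exceeds the threshold."""
    hhi = herfindahl(shares)
    return layer, round(hhi, 3), hhi > threshold

for layer, shares in AI_STACK:
    name, hhi, risky = concentration_risk(layer, shares)
    print(f"{name:22s} HHI={hhi:5.3f} concentrated={risky}")
```

Under these assumed numbers only the chip layer trips the flag, which mirrors the speaker's point that the base of the stack is where the dependency risk is sharpest.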

It may be appreciated that an India-first approach is not inward-looking. It is context-aware. It ensures that governance reflects local realities while remaining globally coherent. Now, coming to the operationalization of embedded governance: it may involve multiple issues, but I am touching upon five or six. One, lifecycle-based model governance: institutions should embed governance checkpoints from data acquisition to deployment and post-deployment monitoring. Two, obviously, a clear risk classification framework based on systemic impact. Three, independent review and oversight, with enhanced oversight where warranted. Four, everything should be auditable and documented. Five, a cross-functional governance committee will no doubt be helpful. Six, continuous monitoring and feedback loops that enable periodic recalibration by way of external audit.
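The lifecycle checkpoints the speaker lists can be pictured as ordered gates that a model must clear, each with documented evidence. The checkpoint names below follow the talk; the gate logic, class names, and risk-tier labels are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of lifecycle-based model governance: a model clears
# ordered checkpoints from data acquisition through post-deployment
# monitoring, and cannot skip ahead or proceed without documentation.

from dataclasses import dataclass, field

CHECKPOINTS = [
    "data_acquisition_review",
    "model_validation",
    "independent_oversight_signoff",
    "deployment_approval",
    "post_deployment_monitoring",
]

@dataclass
class ModelGovernanceRecord:
    model_id: str
    risk_tier: str  # e.g. "low" / "systemic", per a risk classification framework
    passed: list = field(default_factory=list)

    def clear(self, checkpoint: str, evidence: str) -> None:
        """Checkpoints must be cleared in order, each with auditable evidence."""
        expected = CHECKPOINTS[len(self.passed)]
        if checkpoint != expected:
            raise ValueError(f"expected checkpoint {expected!r}, got {checkpoint!r}")
        if not evidence:
            raise ValueError("auditable documentation is required at every gate")
        self.passed.append((checkpoint, evidence))

    def deployable(self) -> bool:
        """A model is deployable only once deployment approval is on record."""
        return "deployment_approval" in [name for name, _ in self.passed]
```

The point of the sketch is the ordering constraint: trying to clear `deployment_approval` before validation raises an error, which is one way to make governance "embedded, not episodic".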

Consumer-centric safeguards, by way of transparent disclosure, clear appeal processes and human intervention mechanisms, are obviously critical to maintaining public trust. These pathways ensure that governance is not episodic but embedded within operational DNA. Now, just before concluding, I will touch upon the role of India in AI, and trust as the cornerstone of financial AI. Finance rests on confidence that systems are fair, stable and accountable. Depositors trust institutions to safeguard assets, borrowers trust systems to assess risk fairly, and markets trust in transparency and stability. AI has the potential to enhance this trust by improving fraud detection, accelerating compliance and broadening access and inclusion. But if governance is inadequate, AI can erode confidence rapidly.

Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with public interest. And trust endures when leadership anticipates risk rather than reacts to failure. India stands at a pivotal moment, working across all five layers of the AI stack and demonstrating the ability to deploy applications at population scale. It is shaping a global agenda for inclusive AI. The convergence of digital infrastructure, regulatory foresight, and entrepreneurial innovation offers a chance to show that scale and safety can coexist and that governance can catalyze innovation. Coming to the conclusion: artificial intelligence will shape the next chapter of financial services. But technology alone does not determine outcomes; institutional design does. Design choices, governance frameworks and institutional culture will determine whether AI strengthens financial resilience and inclusion or not.

Embedded governance is not a regulatory burden. It is a strategic imperative. It ensures that innovation is sustainable, trust is preserved and system stability is protected. If we embed fairness, transparency, accountability and proportionate oversight into the architecture of financial AI from inception, India can chart a distinctive path, one that aligns technological ambition with ethical responsibility. Let us approach this moment not with hesitation but with disciplined foresight. Let us ensure that as our financial systems become more intelligent, our governance becomes more robust, our oversight becomes more anticipatory and our commitment to inclusion more resolute. In doing so, we will not only harness the power of AI, but also shape it to serve the broader goals of stability, opportunity and shared prosperity. Thank you.

Moderator

Thank you, sir. That was very insightful and sets the context for the panel discussion to follow. We would also request you, if you wish, to join us in the audience; that would be great. Over to you, Priyanka, for the introduction of the panelists and for taking this discussion forward.

Priyanka Jain

Thank you so much. Our panelists need no introduction, so I'm going to keep it very fast so that we can make the most of capturing their thoughts. First, I have with me Mr. Sanjeev Sanyal. Sir is the economic advisor to the Prime Minister; he's in the Prime Minister's office and he needs no introduction. If I go by what AI has given me as his persona, AI summarized him as a macro thinker, a historian of structural cycles, and a strategic geopolitical lens. Fortunately, today we have the OG himself in the room. And without any further ado, I want to ask him my first question. Historically, countries that have mastered general-purpose technologies, right from the steam engine to early electricity to the Internet, have gained outsized economic advantage.

Is AI that inflection point for India? And if so, does early, well-designed self-governance accelerate trust, or does it deny us competitive momentum?

Sanjeev Sanyal

Yes, it is important that you are engaging in it, but let me point out that it’s not always the first movers who benefit from it and it’s not the case that even those who invent these technologies know where they’re headed. I mean, just to give you an example, the European Renaissance, which led ultimately to the Western domination of the world for half a millennium, was based on three technologies. One was the printing press, the other was gunpowder, and the third was mathematics. The first two were invented by the Chinese and the third was invented by the Indians, but it is the Europeans that took it, owned it and dominated the world. So, one important thing to recognize in all of this is that do not try and necessarily guess where this is headed.

But of course, we need to engage in it. We need to engage in these technologies and build on them. Otherwise, somebody will take your technology and dominate you. So it is very, very important that India does participate in this AI revolution. But again, in this context, let me say that does not mean we should spend time trying to work out exactly where this is headed. For example, when the social media revolution was happening 20 years ago, when Facebook and all these things came about, the marketing pitch at that time was: see, now everybody can talk to everybody, we will all move to the golden mean, because we will all have similar views, because we can all talk to each other, and so on.

But in fact, the algorithms went out of their way to put us in buckets and echo chambers. So social media ended up doing exactly the opposite of what the technology experts were telling us it would do. Now, why does this apply to AI as well? Here I am going to talk about this risk-based thing that everybody is talking about. Let me tell you that you cannot actually put AI, or any type of AI, into any real risk bucket, because this is an emergent, evolving thing, even more so than social media. Consequently, if you are saying you are going to do risk-based regulation, it means you have some assessment of where the thing will go, and I am telling you that it is almost impossible to do this. For example, take the European way of going about it; they are the pioneers of risk-based systems. I understand it is pretty obvious that you don't want AI to take over our nuclear buttons, but other than that, the risk level of most other things is utterly unknown. Something totally innocuous might go and blow up the whole system, because these things are emergent; they are evolving, they are interconnecting. Therefore, I do not think a system that is largely based on perceptions of risk will work, because it is not possible ex-ante to work out what is dangerous, or for that matter what is beneficial. Now, what should you do if you can't tell what is going to happen? The European system will either strangulate the sector by being too stringent, or it will open things up because it wants progress, but ultimately the risk-based system will not be able to take control of it. The other model that is there is China's, which is that the state knows best. But we know from the experience we had with the Wuhan virus that the state can very often lose control of things that are happening, and it can spiral out.

The third model, which is mostly the American model, is to have laissez-faire and let anybody do whatever they want. Now, the dangers of that are obvious. In my view, the way they control it is through tort law, i.e. if something goes badly wrong, you will then end up with a billion-dollar fine or something like that. So in some ways it works better, because it is an ex-post rather than ex-ante system. It depends on those who are running the system having skin in the game, i.e. your company will go down, you will be jailed, and you will have a billion-dollar fine if things go wrong. That is how they are doing it.

It's an ex-post punishment. But as you can tell, that is an ex-post system, and if something really bad goes wrong, you can only punish the person after the horse has already bolted; you are locking the stable door too late. So all these systems have their downsides, but whatever system we design in order to control this has got to be based on being agnostic about how this whole thing works going forward. Now, I know I'm taking up time, but give me a minute. There are other systems that we manage where we have no idea where they are going. Take, for example, the stock market.

You and I don't know where the stock market will be in a decade's time. It's a complex system, just like artificial intelligence, but we manage it. How do we do it? We do it by creating a framework that does the following. First, it institutes audits and enforces transparency and explainability: if you can't explain your accounts, you can't be in the stock market. Second, it has systems for shutting things down when things go wrong; every stock market will shut down when things spiral out. Third, it deliberately creates systems of separation; for example, the same company cannot be a bank as well as certain other kinds of business, because there are conflicts of interest. In the same way, AI will need compartments. I am personally very suspicious of any idea of the internet of everything and the AI of everything; that would be a disaster. I think we need to be willing to allow compartmentalized AI. It will be more efficient anyway from an energy perspective, but I think it is also safer. And most importantly, you need to create skin in the game, i.e.

ex-ante tell people who will be held responsible when things go wrong. So, in the case of financial markets, the directors of the company, or the CEO, are the ones hauled up when things go wrong. In the case of AI, we will have situations where, when things go wrong, the person who made the algorithm will blame the data, the data guy will blame the company, the company will blame the user; all kinds of things will happen. We need to decide ex-ante who in the system will be hauled up when things go wrong. That will create skin in the game. But we cannot wait for something to go wrong and then decide; we need to decide this ex-ante.

So, all of these things exist in the case of financial regulation. I personally think a similar system can work here.
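The "switch-off buttons" and compartments described above resemble the circuit-breaker pattern used in software and, by analogy, trading halts on exchanges. The sketch below is a hedged illustration of that idea only; the class, thresholds, and the notion of per-compartment halting are assumptions, not anything the speaker specified.

```python
# Hedged sketch of the "switch-off button" idea: each AI compartment runs
# behind a circuit breaker that halts just that compartment after repeated
# anomalous outputs, the way an exchange halts trading when things spiral.

class CompartmentBreaker:
    def __init__(self, name, max_anomalies=3):
        self.name = name
        self.max_anomalies = max_anomalies
        self.anomalies = 0
        self.open = False  # an "open" breaker means the compartment is halted

    def record(self, output_ok: bool) -> None:
        """Track output quality; consecutive anomalies trip the breaker."""
        if output_ok:
            self.anomalies = 0  # a healthy output resets the counter
        else:
            self.anomalies += 1
            if self.anomalies >= self.max_anomalies:
                self.open = True  # trip: halt this compartment only

    def call(self, fn, *args):
        """Route a request through the compartment unless it is halted."""
        if self.open:
            raise RuntimeError(f"compartment {self.name!r} is halted pending audit")
        return fn(*args)
```

Because each compartment has its own breaker, a failure in one bounded system does not flow into the others, which is the firewall property the speaker argues for.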

Priyanka Jain

Rightly put: technology moves fast but trust takes time to build, and compartmentalization is a great way to de-risk in some form, and also to look at it with a focused agenda and attention. With that, we can bring in Mr. Kamath. Mr. Kamath is from the GIFT City IFSC, in a way a compartmentalized global financial hub that India has created, and we are very fortunate to have you here, sir. GIFT City actually operates at a unique intersection of innovation and global credibility. It competes with the likes of Singapore, Dubai and London. Can GIFT City become a lab for AI governance? We wanted to know your view, sir, and it is a great segue from Sanjeev sir on how we can look at it differently, in a compartmentalized manner.

Praveen Kamat

See, if you look at GIFT IFSC as a jurisdiction, it was set up in 2015, so it's just 11 years old. We are building it up from scratch. Now, when you build something from scratch, and when you have a brand-new regulator like IFSCA, which was created in 2020, you start with a clean slate. That means you have more legroom and more space to experiment. We don't have the baggage of legacy systems. If you see the way we have evolved over the last six years, the way regulations have evolved, we have all the verticals across finance: capital markets, banking, insurance, pensions. And we have introduced new verticals: shipbuilding, ship leasing, aircraft leasing, ancillary services and so on.

You know, in line with all of the global financial centres. Now, with respect to experimentation, when you use the word lab, you imply experimentation. The appetite for experimentation and for taking risks is much higher here than for, say, domestic regulators or regulators overseas, because of the absence of retail investors. So yes, GIFT City has an immense ability to come across as a lab for AI governance. However, building a financial centre is like a 45-kilometre marathon; it's not an 8-kilometre dream run. So it will take its time. We are on a growth trajectory, an upward trajectory, and there is a certain gestation period for every financial centre; that gestation period cannot be skipped, and we are in it. Once we reach critical mass, we are going to see a lot of things happening and coming out of GIFT IFSC.

Priyanka Jain

Thank you. Actually, I will go to Murli sir and the RBI FREE-AI report, the framework for the responsible and ethical enablement of AI. I think it's very forward-looking; it actually builds on existing regulatory controls and architecture to bring in a principle-based AI ecosystem. So my question to you is: if a company has embedded robust controls, model inventories, bias testing and continuous monitoring, should regulators reward such companies with calibrated supervisory relief? In other words, is there a safe harbour for somebody who has put in risk-based controls but is a first-time defaulter?

⁠Murlidhar Manchala

Yeah. In fact, in the same report, it was suggested that entities which put in place all the guardrails, and in case of any lapses do the root cause analysis and try to address the problem, should get a lenient supervisory approach from the regulator, rather than the lapse being seen as a systemic or overarching risk event. That is something which we recognize. So on both fronts: one, we understand the technology is probabilistic and can have lapses; and two, in terms of governance, if you put in the guardrails, the processes, the mechanisms across the lifecycle, the main focus is that the customer does not face the risk. It should be transparent to the customer; it should not be a black box, rather it can be a glass box, and it should be understandable to the customers. Once all these measures are taken into consideration by the entity, in terms of governance as well as processes, then, because of the nature of the technology, we understand it can still lead to some aberrations. But as long as it is handled through the right process, you have incident reporting mechanisms and manual override; once you have these controls and the right approach, supervision should not treat it as a systemic or greater risk. Rather, you should allow a first-time lapse. And in terms of rewarding, we also suggested that there could be an award for AI in finance, particularly where specific work has been done.

Priyanka Jain

Thank you. I think, Vikram, you have a vantage point here, because you are a global infrastructure player. You are seeing regulatory trends across the US, UK, Singapore and many other markets. You have heard how the panel has been shaping up, right from the policymakers to the international financial centre and the RBI. As an infrastructure provider, how are you looking at cybersecurity and its evolution in the age of generative AI?

Vikram Kishore Bhattacharya

Thanks so much, Priyanka. I would just make one correction: we are a cloud service provider, and not merely an infrastructure provider. I think one of the things is, for good or for worse, we've seen the benefits of generative AI, but we're also seeing bad actors use generative AI for phishing attacks, for credential attacks, for malicious code. So with the good come the challenges. But one of the more important elements is that while it's serving as an accelerant to existing methods, I don't think it's foundationally changing the nature of the attacks. And, in fact, there was a report that came out in 2025.

It talks about how generative AI has lowered the barriers for a lot of these threat actors. But I go back to what I said: it has not foundationally changed things. The same principles and the same foundations of cybersecurity that held true before GenAI still hold true: multi-factor authentication, strong passwords, regular updates, scanning your systems. And I think it is imperative for organizations, especially in financial services, which are always being attacked, to think about this. India is a country where not only the banks but a huge citizenry with different levels of financial literacy is involved, so the question is how you use these tools to actually safeguard the financial system. In that respect, a lot of kudos to the RBI for thinking about it along these principle-based lines, but also to the banks for actually leveraging these technologies. One of the elements you should always follow is: trust service providers like us, but banks should also verify. That is done through standards like ISO or NIST, and through independent third-party reports that validate the various controls that are there. And, a point I was making a little earlier: you have to become an active participant in cybersecurity; you can no longer be a passive passenger, because the landscape is changing, and as more and more people digitize, so too are the people who are willing and looking to attack any vulnerability.

So GenAI does provide you with the tools, because, again, I’m also a believer in not, you know, human in the loop, but having AI in the loop. So how do you use these technologies to have faster responses? How do you automate scanning? How do you automate getting reports? To be able to make those value judgments at the right time. So that requires skilling. Again, that requires awareness, not just about something like an AWS or the cloud, but also banks and also, you know, the work that, again, regulators as well as cloud service providers are doing is having these awareness programs to make sure that the more people understand the technology, the better the framework and the groundwork will be for them to adopt.

Thank you.

Priyanka Jain

I will also refer to our earlier discussion this afternoon, wherein, rather than AI having a human in the loop, humans should think of AI as being in their loop, and I think that was a great paradigm shift we can look at. Sanjeev sir, I am going to come back to you, but I also want to give a backdrop to this question. India has never simply adopted technology: we have created it, we have adapted it, we have scaled it and we have governed it in our own way. We did it with identity, we did it with payments and we did it with digital public infrastructure. The governance frameworks around AI are beginning to emerge, and they are diverging globally, with the US being innovation-led, the EU compliance-led and China state-led. On which axis is India going to strategically position itself, and how are you looking at it from your lens?

Sanjeev Sanyal

So I think I will continue from what I was saying earlier. Now, we need to be very, very careful that we don't end up with a bureaucratic risk-based system. This is an emergent technology. It will evolve in all different ways, and we'll have to be very, very creative about this. Now, there is a difference between systems as architecture and AI as an emerging thing. AI is not just infrastructure in the sense that, say, you can think of UPI or digital identity as infrastructure; those do not in themselves have emergent behaviours. AI has emergent behaviours, i.e. it evolves and interacts with other forms of AI, which is why I said you need to be fundamentally suspicious of anybody who says they have a very clear idea where this whole thing is going.

We don't at all have a clear idea. Nobody on the planet has a clear idea where it's going. So we do need some regulation. We need to be very, very careful about having humans in the loop. As I said right in the beginning, you need to have switch-off buttons in systems. You need to create what are called, in finance, Chinese walls, which separate different tracks. As I said earlier, I am not a huge fan of the AI of everything; I think that's dangerous and will lead to bad outcomes. However, AI can be run in compartments rather well, and why don't we use that? In any case, that uses less energy, and in any case it is better at solving bounded problems. When you give AI an unbounded problem, it tends to hallucinate, because unfortunately it has learnt another human trait: it doesn't like to tell you "I don't know"; it would rather make up stuff. So I think it is better that we give it bounded problems, let it solve those bounded problems and get back to us. Going for this AI, or internet, of everything, where everything is interconnected, sounds very good. But it was just last July, or the July before that, when one very small piece of code in a Microsoft program, which was by the way static, not even a fluid one, went wrong, and you ended up with havoc in airports, ATMs, all kinds of things around the world. Now imagine the same thing happening in a system with emergent characteristics: by the time you fix one bit of it, it has flowed into some other part of the system. So I personally think we need to create firewalls. A forest fire is also an emergent thing, and the way we control it is not by predicting where the fire is coming from and where it will go; we simply build firebreaks from time to time. We do that in finance all the time: we don't try to work out what the conflict of interest is, we simply ban situations where conflicts of interest will emerge. And the same thing is true of skin in the game. I think we need to work out ex-ante where in the chain the responsibility lies. I personally think it should sit at the level of whoever makes the algorithm public for use: even if their data is wrong, they cannot blame the data; they are responsible. Somebody else may disagree; whatever. The point of the matter is that we need very clear points of punishment when things go wrong, and we need audit systems for explainability. There is nothing very deep about this; after all, every company listed in the stock market gets itself audited several times a year. Why can't we ask major AI companies to be audited?

If you cannot explain why your results are turning out the way they are, too bad: you shut it down. We do that even with relatively small companies; they have to go to a chartered accountant several times, and the chartered accountant has to sign things off. Maybe we should have a chartered AI audit for anything that goes beyond some threshold. And given how potentially dangerous it is, and how lucrative as well, I don't think we should shy away from thinking about this as a problem. Many others, once they understand it's dangerous, will say: why don't we have risk-based regulation? But ex-ante, you cannot work it out. All you will do is end up with regulations that become too stringent and kill the sector.

Rather, along the way, you have a system of explainability audits. With that, let me hand it back.

Priyanka Jain

Mr. Kamath, I'm going to come to you. Economists worry about two risks: under-regulation that creates instability and over-regulation that kills dynamism. Where do you see GIFT City? Because, again, it's at an intersection of local and global, and I want to hear your views on it.

Praveen Kamat

That's the problem facing all regulators worldwide across the financial sector. Over-regulation repels innovation. Under-regulation repels serious long-term capital. So where do you draw the balancing equilibrium point? Let me explain it with a simple example. I joined SEBI, the Securities and Exchange Board of India, in 2008, and was posted to the surveillance department. In 2008 itself, the financial crisis was in full flow. In our surveillance systems, which are very, very powerful, we noticed 1,000 orders being entered in a span of a couple of microseconds. We were wondering how this was possible; how can a human enter so many orders? Then we came to know that algorithmic trading terminals had been deployed by certain entities in the stock market.

When we dug deeper, we came to know that it was initially deployed in 2004 by one entity, and then slowly the volumes kept increasing. It didn't reach a critical point, but they were slowly increasing. In 2010 the inflection point came, when it reached critical mass. SEBI came up with guidelines to safeguard retail investors and to preserve financial stability. So here is a perfect example where an innovation in the capital market, algorithmic trading, was deployed by entities for a good six years. It was not regulated, it was being used, and the regulator didn't do anything to stop it. But when the regulator issued the guidelines, the necessary safeguards were put in place.

However, at the same time, no brakes were applied to the rollout of the innovation. Algorithmic trading, even after the guidelines, grew exponentially in the Indian capital market to where it is today. In the same manner, we hope to facilitate innovation in GIFT IFSC. We have sandboxes in place for startups as well as established entities; they can roll out their AI pilots in the sandbox. The goal is to cap the risk. Like sir said, it's very difficult to identify all the risks, but whatever possible risks can be identified, let's cap them, without going into the technical or internal mechanics, and then see how it flows out. Based on the data that you receive in the experimentation, the regulations can be tailored accordingly.
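The surveillance pattern in the SEBI anecdote, flagging bursts of orders arriving faster than any human could enter them, can be sketched as a simple sliding-window rate check over order timestamps. The 1,000-orders-in-microseconds figure comes from the anecdote; the function name, thresholds, and window size are illustrative assumptions, not SEBI's actual logic.

```python
# Illustrative burst detector: flag an account if too many orders arrive
# within a tiny time window, a pace no human could achieve by hand.
# Thresholds and window size are assumed for demonstration.

def flag_bursts(timestamps_us, max_orders=100, window_us=1_000_000):
    """Return True if any sliding window of `window_us` microseconds
    contains more than `max_orders` order timestamps (input sorted)."""
    left = 0
    for right, t in enumerate(timestamps_us):
        # Shrink the window from the left until it spans <= window_us.
        while t - timestamps_us[left] > window_us:
            left += 1
        if right - left + 1 > max_orders:
            return True
    return False

# 1,000 orders one microsecond apart is clearly algorithmic;
# 25 orders spread over 50 seconds looks like a human.
burst = list(range(1000))
human = list(range(0, 50_000_000, 2_000_000))
```

A check like this is deliberately behaviour-based: it does not need to know the internal mechanics of the trading algorithm, only the observable order flow, which matches the "cap the risk without going into the internal mechanics" approach described above.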

Thank you.

Priyanka Jain

I know we are at time, but with such a prestigious panel I'm going to still extend by another few minutes. Could I come back to you with a quick rapid fire? Could you tell us one risk that we are underestimating when it comes to AI?

⁠Murlidhar Manchala

No, in general we would not like to talk about risk; that is our approach. Our keynote speaker, Ajay Choudhury, was also at the helm when the department was formed. So the risk is perhaps underestimating the risk itself; that is what I can say. It can be addressed only through governance, particularly in the present emergent state of the technology.

Priyanka Jain

Actually, I like what Sanjeev sir was telling us. It's never going to be risk-free, but we'll have to move forward; we'll have to figure it out, and do it in as compartmentalized a manner as possible. So, any risk that we are overestimating? Anybody from the panel who wants to talk about a risk that we are overestimating? Let's give Vikram a chance.

Vikram Kishore Bhattacharya

I mean, I think the fundamental point would be that there is no zero risk; it's how you equip yourself to handle risks. A point that Mr. Chaudhry and Mr. Sanyal also made is: as a regulator, or in a regulated environment, how do you create the tools to be nimble, to adapt as the technology adapts? I think that is the important element. Right now the tools are there; there is so much we can do that maybe we're not doing as well, so maybe we can focus very well on the here and now and equip ourselves to be nimble enough to deal with anything that comes. Because anybody who's telling you what's coming with a certain amount of certainty, I take that with a pinch of salt.

I think the future is a little unknowable at this point in time, but there is so much that is known, and we should be able to tackle that right now.

Priyanka Jain

I think that's great. Sanyal sir, I'm going to come to you again. One reform that India must prioritize: what is your view on it?

Sanjeev Sanyal

That's copyright law. Who is the owner of a particular innovation? At which point do you call it an innovation? And is that innovation owned by the person who put the prompt in? Is it owned by the person on whose data it got trained? Or does it belong to the algorithm that created it? So on all of these, I would say that we need to begin to think of a judicial system that can deal with these kinds of problems. We already have a crowded judicial system. But do remember that these very different kinds of, and I would almost call them philosophical, problems are going to turn up at our doorstep very, very quickly. And we need to be thinking about them.

Priyanka Jain

Thank you. When UPI came in, about a decade ago (and we have the benefit of having the NPCI chairman himself in the room), it was more than payment. It was trust in an invisible system. And today AI is becoming that invisible system, sitting quietly in our credit underwriting decisions, our onboarding flows, our grievance redressal systems, even regulatory reporting. It was a great discussion about how we embed trust in an AI system that is fast evolving, because at the end of the day we are thinking about the theme of the summit, which is people, planet and progress, all in the same breath. People: how do we protect them from opaque systems or bias?

Planet: how do we scale sustainably and responsibly? And progress: because it doesn't have to be only fast innovation, it has to be fair innovation. So, a lot of great thoughts came up in the panel discussion today, and I'm extremely grateful to everybody who made time for it. Sanjeev sir, could we have some closing thoughts from your side?

Sanjeev Sanyal

Well, you mentioned trust. Let me say that while it is fair to trust UPI, as I said, it is, relatively speaking, not an emergent system. Deliberately so, in fact. You don't want UPI to be innovating on the interface. It can innovate at the back end however much you want, but you don't want any surprises.

I send somebody 100 rupees and he gets 120 rupees, or 80 rupees, or on average he will get 100 rupees? That can't be the basis of UPI. So in that sense, the UPI-based system is backbone infrastructure: it is deliberately non-emergent. But AI systems are emergent. An AI system can give you different answers at different points in time, depending on what it's trained for, what the context is, and what inputs you have, and in fact that is the innovation. If you fix it in a box to start with, then you won't get the innovation. But on the other hand, if you give it something open-ended, yes, presumably it will improve, but sometimes it may deteriorate, and sometimes it may lie to you. So what I am trying to say is that in the case of artificial intelligence, we should use it, but we certainly should not trust it. In fact, its future is based on a certain level of skepticism, a healthy skepticism that we must have about its capabilities. It will do amazing things, but in my view we should be clear that it is probably much, much better at solving bounded problems. It can play chess, for example, very, very well, but I doubt it can plan your career; that is an unbounded problem. So if that is how you think about AI, then what you need to do is, as I said, begin to think this through in terms of how you apply it in particular boxes, where it has a clear set of things that you are trying to do.

So as I said, bounded problems and even there, verify.

Priyanka Jain

With that, we have audience questions. We have one question from Aditya, the founder of FirstHive.

Audience

Thank you. Good evening. That was an incredible set of points that came up. I actually made some really interesting notes about the capital-markets parallel that you drew, Sanjeev. I thought that was a really interesting way of looking at AI, and we've been to so many summits; I think the way you've put it, about risk and ex-ante versus ex-post, is very, very interesting. I had one question for you, and I had two suggestions or requests, for Praveen and Davis. From an AI-stack perspective, every summit, every conversation across different countries, is looking at all the different components of the stack. And there are two things that come up in most of these conversations, which are around sovereign data assets and the leverage that comes out of them in terms of tools, models, and so on.

Where is India's perspective in all of this, from sovereign data asset utilization to model leverage? Different countries are looking at their stack as their own stack, through which they are going to give you access, and so on and so forth. So that is something on which it would be great to get your perspective.

Sanjeev Sanyal

So obviously, India, with its very large population, has stacks of information on all kinds of things, from health to consumer behavior, et cetera. So in some ways, this is a place with a huge amount of data for experimentation on human behavior and so on. But of course, if data is the new thing, the new oil, we need to be clear that we own the rights to it, if it is our data. I'm not even getting into the privacy issue here; I'm assuming that has all been taken care of, so we are using anonymized data. But even then, we should at least have the rights to that data, and also to some part of the processing of it. There is no point in saying that we have the data if we have neither the rights to it nor the oil rigs to pump it out nor the refineries to process this new oil. This is the context in which, as you may have seen, in the latest budget we announced a tax holiday of almost a quarter of a century for setting up data centers in this country. That's not a trivial thing to do. Why are we doing it? Basically because, as I said, data centers are the oil rigs of this new kind of oil.

And then, of course, we need new companies that will process this oil. Those are the new refineries. We have created one AI LLM, but frankly, everybody gets very excited about LLMs. The LLM is, in my view, only a very limited, not even the most interesting, usage of artificial intelligence. It just happens to be linguistically talented, and consequently we use it for that. But there are many, many more interesting uses of AI. And as I keep coming back to and stating, we need to create an ecosystem. For that ecosystem, we all say, oh, you need half a trillion dollars of investment. Actually, no. Much of where you will end up with the use of these refineries, so to speak, will be quite bounded problems in certain spaces.

So there is more than enough space for startups with much more modest budgets to do interesting things in AI. And I'm not just talking about people building use cases on other people's models; I'm talking about literally bottom-up uses of AI. So I think there is a lot to be done here. It's an open space. This is basically like discovering the Americas. Yes, Spain did have an initial starting advantage, but the greatest empire in the world was actually built by Britain, which was a late starter. So there are many, many countries in the world whom you do not think of today as particular players in this game who will also turn up here.

And one of them could do much, much better than the players you think are at the cutting edge today. So this is an emergent situation; all kinds of unintended consequences and uses, positive and negative, will come out of all of this. I think the key here is to be nimble, keep your eyes open, including on the regulatory front, and not have set ideas about where this whole thing is headed, because, frankly, we don't know.

Audience

No, thanks for that. You know, I'm the founder of FirstHive, which is a customer data platform. We work with a large number of enterprises on data, all consented, and so we get a ringside view of the application of everything you're saying. And this leads me to the suggestion. As a supplement, we have AIKosh, which is a repository of datasets, and it is growing. And then for the financial sector as well, we are looking to aggregate, to start with, synthetic data, and then maybe take up correlated data from the regulated entities, with their consents, so that it would come into use. Okay, awesome. Actually, that goes towards my suggestion for the two of you. Praveen, when you spoke about the sandbox from an IFSCA perspective, I think the ability to extend that beyond just IFSCA to the other regulators as well is something that will be very, very interesting, at least for folks like us, because we work with a number of entities which cut across different regulators. An associated point is that today there are so many regulations coming in, and I see two opportunities there.

One is that there are different interpretations of the regulations by different entities. The second is that, as a large data processor, not a data owner but a data processor, we are one of the stakeholders in that whole process, and today we may not have adequate access or a seat at the table from a regulatory-interpretation standpoint. And there, I think, is an opportunity to define something like a consent-backed API for data consumption, for example, and to have a regulatory definition of that with participation from data processors like us. We'd love to see if there are processes that allow somebody like us to engage with the regulators.

Praveen Kamat

We are open to that idea, but you have to remember one thing. GIFT IFSC is a separate jurisdiction; it has its own set of rules, which are different from domestic India's. There is an interoperable sandbox mechanism in place between IFSCA, RBI, SEBI, and IRDAI, so a solution that spans the four regulators can be tested within the sandbox. But the issue is often not technological, and not fiscal or financial; it's legal. For example, in India, INR transactions are the norm, right? In GIFT IFSC, INR transactions are not permitted; 16 foreign currencies are enabled, and you have to transact in those 16 currencies. So if your solution is not compatible across these areas, just to give you an example, the sandbox experimentation will not go through.

So there are many more nuances like this which affect the rollout of pilots within the interoperable sandbox; that was just one example. With respect to the movement and processing of data, I will not comment at the moment, because certain things are in the works at IFSCA. So I leave that to my RBI colleague.

Murlidhar Manchala

So, as my colleague said, we already have an interoperable sandbox across regulators, and it is on tap. Earlier it was theme-based, but now it is on tap, so any type of product can be tested in the sandbox. But just to clarify: the sandbox is needed only when the regulated entity feels that an existing product or service would violate one of the regulations. So very few entities come to the sandbox, because in general they are not required to; if they feel they are compliant with the regulations, there is no need to come to the sandbox. But we are also thinking of another sandbox where, beyond monitoring regulatory compliance, we can support innovation in terms of, say, compute, data, or tools. That is also in the thought process.

Priyanka Jain

We have been one of the beneficiaries of the sandbox and the hackathon at 5Money, and the process has been phenomenal, the way the RBI fintech teams engaged, so maybe, Aditya, I can share some notes with you offline. But thank you; this has been a phenomenal panel and a great discussion on embedded governance: as AI makes space in all things financial services, how do we make space for governance in AI? That was the theme of the discussion, and I am very pleased to have heard the views of this panel and grateful to everyone for making time. Thank you, everyone. [Applause] I am actually not going to say anything more, apart from thank you. We will have a quick giving of mementos from the India AI Mission; my colleague Kriti will do that. [Applause]

Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ajay Kumar Chaudhary
10 arguments · 136 words per minute · 2451 words · 1075 seconds
Argument 1
Governance must be embedded by design throughout AI lifecycle rather than applied as compliance afterthought
EXPLANATION
Chaudhary argues that AI governance cannot be an overlay applied after innovation has been scaled, but must be integrated into every stage of the AI lifecycle from conceptualization and data acquisition to model development, deployment and ongoing monitoring. This embedded approach ensures accountability, transparency, and risk management are built into systems design rather than added later.
EVIDENCE
He references Peter Drucker’s quote about management vs leadership and emphasizes that in financial services where trust is foundational, AI systems cannot function as opaque black boxes, especially when they influence access to credit or flag financial behavior.
MAJOR DISCUSSION POINT
AI Governance Framework and Embedded Governance
AGREED WITH
Moderator
Argument 2
Governance should focus on proportionality, fairness, explainability, and clear accountability with defined responsibility chains
EXPLANATION
Chaudhary outlines four foundational pillars for embedded governance: proportionality (risk-based governance intensity), fairness and non-discrimination, explainability and transparency, and clearly defined accountability where responsibility for outcomes cannot be outsourced. He emphasizes that algorithmic efficiency should not compromise equitable opportunity.
EVIDENCE
He notes that while institutions may collaborate with tech providers or leverage shared infrastructure, board and senior management must understand the logic and limitations of AI systems they operate.
MAJOR DISCUSSION POINT
AI Governance Framework and Embedded Governance
Argument 3
AI has evolved from analytical tool to infrastructure that shapes financial outcomes and must be treated as systemically relevant
EXPLANATION
Chaudhary argues that AI is no longer merely an analytical assistance tool but has become infrastructure that actively shapes financial outcomes through credit assessments, fraud detection, and compliance functions. When operating at systemic scale, even marginal inaccuracies can produce material consequences, requiring the same standards of resilience and accountability as any critical financial utility.
EVIDENCE
He provides examples of AI-enabled fraud detection reducing losses by 25-30% in high-value payment environments at NPCI, and notes that machine learning models analyze transaction histories and behavioral signals for granular borrower assessments.
MAJOR DISCUSSION POINT
AI as Infrastructure and Systemic Risk
DISAGREED WITH
Sanjeev Sanyal
Argument 4
Trust in AI requires predictable, explainable, and accountable systems that align with public interest
EXPLANATION
Chaudhary emphasizes that trust in financial AI is built when systems are predictable, explainable, and accountable, and deepens when innovation aligns with public interest. He argues that trust endures when leadership anticipates risk rather than reacts to failure, and that AI has potential to enhance trust through improved fraud detection and broader access.
EVIDENCE
He notes that finance rests on confidence that systems are fair, stable and accountable, with depositors trusting institutions to safeguard assets and borrowers trusting systems to assess risk fairly.
MAJOR DISCUSSION POINT
Trust and Transparency in AI Systems
AGREED WITH
Murlidhar Manchala, Sanjeev Sanyal
DISAGREED WITH
Sanjeev Sanyal
Argument 5
India can demonstrate that scale and safety can coexist through convergence of digital infrastructure, regulatory foresight, and innovation
EXPLANATION
Chaudhary argues that India stands at a pivotal moment with the ability to work across all five layers of the AI stack and deploy applications at population scale while shaping a global agenda for inclusive AI. The convergence of digital infrastructure, regulatory foresight, and entrepreneurial innovation offers a chance to show that governance can catalyze innovation.
EVIDENCE
He references India’s past decade demonstrating how population-scale digital public infrastructure can drive inclusion, efficiency and trust through systems built with interoperability, transparency and scale at their core.
MAJOR DISCUSSION POINT
India’s Strategic Position in Global AI Landscape
AGREED WITH
Sanjeev Sanyal, Priyanka Jain
Argument 6
India should focus on sovereign data assets and domestic AI capabilities while avoiding over-dependence on foreign supply chains
EXPLANATION
Chaudhary warns that concentration in advanced chips and foundational AI models raises critical considerations for economic sovereignty, financial stability, and national security. He argues for diversifying supply chains through domestic innovation and international collaboration to secure resilient AI foundations.
EVIDENCE
He notes that one firm controls more than 90% of advanced chips, three dominate cloud capacity, and a handful command foundation models, threatening financial stability and economic sovereignty. He outlines five interdependent layers of the AI stack, from semiconductor chips to applications.
MAJOR DISCUSSION POINT
India’s Strategic Position in Global AI Landscape
Argument 7
AI offers opportunity to deepen financial inclusion through granular risk assessment and reduced collateral dependence
EXPLANATION
Chaudhary argues that AI can convert India’s expanding digital footprint into broader formal access to fair financial services through granular dynamic risk assessment that reduces reliance on collateral-heavy models and static credit history. This can particularly benefit MSMEs outside traditional credit frameworks.
EVIDENCE
He notes that India is expected to account for a significant share of global digital transaction growth this decade, and that transaction-level data, cash flow analytics and behavioral indicators can provide more nuanced insights into repayment capacity.
MAJOR DISCUSSION POINT
Inclusion and Fairness in AI Implementation
Argument 8
Algorithmic bias from historically skewed datasets risks perpetuating inequalities rather than reducing them
EXPLANATION
Chaudhary warns that algorithms trained on historically skewed datasets may inadvertently misclassify, misprice or exclude segments that digital finance is intended to integrate. Without corrective safeguards, technology may reinforce rather than reduce disparities, particularly affecting informal sector participants and creating gender-based data gaps.
EVIDENCE
He notes India’s complexities including scale of digital participation, linguistic diversity, demographic heterogeneity, and income variability that heighten model risk, and emphasizes that inclusion cannot be assumed but must be intentionally designed.
MAJOR DISCUSSION POINT
Inclusion and Fairness in AI Implementation
Argument 9
Inclusive AI requires representative training data, impact audits, and community feedback mechanisms
EXPLANATION
Chaudhary argues that to ensure AI promotes rather than hinders inclusion, there must be representativeness in training datasets, periodic impact audits, and community-level feedback mechanisms. He also calls for institutional mechanisms that allow individuals to seek clarification and redress where automated decisions affect their financial standing.
MAJOR DISCUSSION POINT
Inclusion and Fairness in AI Implementation
Argument 10
AI amplifies both defensive capabilities and adversarial threats, requiring strengthened detection and response systems
EXPLANATION
Chaudhary notes that while AI strengthens defense mechanisms, it can also be leveraged by adversaries, creating amplified cybersecurity risks in the AI environment. Institutions must anticipate adversarial AI and strengthen their detection capabilities accordingly.
MAJOR DISCUSSION POINT
Cybersecurity and Risk Management
AGREED WITH
Vikram Kishore Bhattacharya
Sanjeev Sanyal
7 arguments · 156 words per minute · 3299 words · 1266 seconds
Argument 1
Risk-based governance approach is flawed because AI is emergent and unpredictable, making ex-ante risk assessment impossible
EXPLANATION
Sanyal argues that AI cannot be put into real risk buckets because it is an emergent, evolving technology that interconnects in unpredictable ways. He contends that risk-based systems rely on assessments of where technology will go, which is impossible to determine for AI, making European-style risk-based regulation either too stringent or ineffective.
EVIDENCE
He compares this to social media’s unexpected development into echo chambers despite predictions of creating a ‘golden mean’ of shared views, and notes that even innocuous AI applications might blow up entire systems due to their emergent nature.
MAJOR DISCUSSION POINT
AI Governance Framework and Embedded Governance
DISAGREED WITH
Ajay Kumar Chaudhary
Argument 2
AI systems have emergent behaviors unlike static infrastructure like UPI, requiring different regulatory approaches
EXPLANATION
Sanyal distinguishes between AI systems that have emergent behaviors and evolve over time, versus infrastructure like UPI that is deliberately non-emergent and provides consistent, predictable outcomes. He argues that while UPI can be trusted because it doesn’t innovate on the interface, AI systems can give different answers at different times, making trust inappropriate.
EVIDENCE
He explains that UPI deliberately avoids surprises – you send 100 rupees and the recipient gets exactly 100 rupees, not an average of 100 rupees, which would be unacceptable for payment infrastructure.
MAJOR DISCUSSION POINT
AI as Infrastructure and Systemic Risk
DISAGREED WITH
Ajay Kumar Chaudhary
Argument 3
Compartmentalized AI systems are safer and more efficient than interconnected ‘AI of everything’ approaches
EXPLANATION
Sanyal advocates for compartmentalized AI that solves bounded problems rather than pursuing interconnected ‘AI of everything’ or ‘internet of everything’ approaches. He argues this is safer because it prevents system-wide failures and more efficient from an energy perspective, while also being better at solving specific problems.
EVIDENCE
He references the Microsoft code failure that caused havoc in airports and ATMs worldwide, noting that in an emergent AI system, problems could flow to other parts before fixes are implemented. He also notes AI tends to hallucinate when given unbounded problems.
MAJOR DISCUSSION POINT
AI as Infrastructure and Systemic Risk
Argument 4
Financial regulation model with audits, transparency, shutdown mechanisms, and separation of functions should apply to AI
EXPLANATION
Sanyal proposes applying stock market regulation principles to AI: mandatory audits for explainability, automatic shutdown mechanisms when things go wrong, deliberate separation to avoid conflicts of interest, and clear accountability chains. He suggests major AI companies should undergo regular audits similar to how listed companies must be audited by chartered accountants.
EVIDENCE
He explains how stock markets are managed despite being unpredictable complex systems through transparency requirements, circuit breakers, conflict of interest prevention, and clear director responsibility when things go wrong.
MAJOR DISCUSSION POINT
Regulatory Approaches and Innovation Balance
AGREED WITH
Ajay Kumar Chaudhary, Murlidhar Manchala
Argument 5
Unlike UPI which is deliberately non-emergent, AI systems require healthy skepticism rather than blind trust
EXPLANATION
Sanyal argues that while trust in UPI is appropriate because it’s designed to be predictable and non-emergent, AI systems should be approached with healthy skepticism. He contends that AI’s future depends on recognizing its limitations and using it appropriately for bounded rather than unbounded problems.
EVIDENCE
He notes that AI can play chess very well (bounded problem) but doubts it can plan careers (unbounded problem), and emphasizes the need to verify AI outputs even when using it for appropriate applications.
MAJOR DISCUSSION POINT
Trust and Transparency in AI Systems
DISAGREED WITH
Ajay Kumar Chaudhary
Argument 6
India’s large population provides valuable data for AI training, but rights and processing capabilities must be domestically controlled
EXPLANATION
Sanyal emphasizes that while India has vast data from its large population covering health, consumer behavior, and other areas, the country must ensure it owns the rights to this data and has domestic processing capabilities. He argues there’s no point having data without the rights or the infrastructure to process it.
EVIDENCE
He mentions the quarter-century tax holiday for data centers announced in the latest budget, describing data centers as the ‘oil rigs’ of this new kind of oil, and notes the creation of AI-LLM as an example of domestic processing capability.
MAJOR DISCUSSION POINT
India’s Strategic Position in Global AI Landscape
AGREED WITH
Ajay Kumar Chaudhary, Priyanka Jain
Argument 7
Copyright and intellectual property frameworks need reform to address AI-generated innovations and data ownership
EXPLANATION
Sanyal identifies copyright law as a critical reform priority, questioning who owns AI-generated innovations – the person providing the prompt, the owner of training data, or the algorithm creator. He emphasizes that these philosophical problems will arrive quickly and require judicial system preparation.
MAJOR DISCUSSION POINT
Inclusion and Fairness in AI Implementation
Praveen Kamat
4 arguments · 184 words per minute · 874 words · 283 seconds
Argument 1
Gift City’s clean slate approach as new jurisdiction provides more experimentation space without legacy system constraints
EXPLANATION
Kamat argues that Gift City, established in 2015 with a new regulator created in 2020, benefits from starting with a clean slate without baggage of legacy systems. This provides more leg room and space to experiment compared to established domestic regulators, enabling faster regulatory evolution across multiple financial verticals.
EVIDENCE
He notes that Gift City has introduced regulations across capital markets, banking, insurance, pensions, and new verticals like ship leasing and aircraft leasing, all developed from scratch over six years.
MAJOR DISCUSSION POINT
Regulatory Approaches and Innovation Balance
Argument 2
Gift City can serve as AI governance laboratory due to absence of retail investors and experimental regulatory appetite
EXPLANATION
Kamat explains that Gift City has immense ability to serve as a lab for AI governance because the absence of retail investors allows for higher appetite for experimentation and risk-taking compared to other regulators. However, he notes that building a financial center requires time and reaching critical mass.
EVIDENCE
He emphasizes that Gift City is in a gestation period like a ’45 kilometer marathon’ rather than an ‘8 kilometer dream run’, and once critical mass is reached, significant developments will emerge.
MAJOR DISCUSSION POINT
India’s Strategic Position in Global AI Landscape
Argument 3
Over-regulation repels innovation while under-regulation repels serious capital, requiring careful balance
EXPLANATION
Kamat identifies the fundamental challenge facing all financial regulators worldwide – finding the equilibrium point between over-regulation that kills innovation and under-regulation that drives away serious long-term capital. He illustrates this balance through the example of algorithmic trading regulation.
EVIDENCE
He describes how SEBI allowed algorithmic trading to develop from 2004-2010 without regulation, then introduced guidelines when it reached critical mass, successfully preserving both innovation and investor protection while allowing exponential growth.
MAJOR DISCUSSION POINT
Regulatory Approaches and Innovation Balance
Argument 4
Interoperable sandbox mechanisms across regulators enable cross-sector AI solution testing
EXPLANATION
Kamat explains that Gift City has interoperable sandbox mechanisms with RBI, SEBI, and IRDAI that allow solutions spanning multiple regulators to be tested. However, he notes that challenges are often legal rather than technological, such as currency restrictions between domestic India and IFSC jurisdictions.
EVIDENCE
He provides the example that while INR transactions are the norm in India, they are not permitted in IFSC which operates with 16 foreign currencies, creating compatibility issues for cross-jurisdictional solutions.
MAJOR DISCUSSION POINT
Regulatory Approaches and Innovation Balance
AGREED WITH
Murlidhar Manchala, Audience
Murlidhar Manchala
3 arguments · 0 words per minute · 0 words · 1 second
Argument 1
Entities implementing robust controls and governance frameworks should receive supervisory relief for first-time lapses
EXPLANATION
Manchala explains that the RBI framework suggests entities that implement proper guardrails and conduct root cause analysis when problems occur should receive lenient supervisory approach. This recognizes that AI technology is probabilistic and can have lapses, but focuses on whether proper governance processes are in place to protect customers.
EVIDENCE
He notes that the framework suggests this approach should be seen as an instrument to encourage proper governance implementation, and mentions plans for awards recognizing good AI practices in finance.
MAJOR DISCUSSION POINT
AI Governance Framework and Embedded Governance
Argument 2
AI systems should be ‘glass boxes’ rather than ‘black boxes’ with transparent processes for customers
EXPLANATION
Manchala emphasizes that AI systems should be transparent and understandable to customers rather than operating as opaque black boxes. The main focus should be ensuring customers don’t face risks through proper transparency, governance processes, and mechanisms across the AI lifecycle.
EVIDENCE
He mentions the importance of incident reporting mechanisms and manual overrides as part of the control framework to ensure customer protection.
MAJOR DISCUSSION POINT
Trust and Transparency in AI Systems
AGREED WITH
Ajay Kumar Chaudhary, Sanjeev Sanyal
Argument 3
Interoperable sandbox mechanisms across regulators enable cross-sector AI solution testing
EXPLANATION
Manchala confirms that there is already an interoperable sandbox across regulators that operates on-tap rather than time-based, allowing any type of product to be tested. He clarifies that sandboxes are primarily for situations where existing products or services might violate regulations, and notes consideration of expanded sandbox support including compute, data, and tools.
EVIDENCE
He explains that relatively few entities use the sandbox because most feel compliant with existing regulations, but RBI is considering additional innovation support beyond regulatory monitoring.
MAJOR DISCUSSION POINT
Regulatory Approaches and Innovation Balance
AGREED WITH
Praveen Kamat, Audience
Vikram Kishore Bhattacharya
5 arguments · 175 words per minute · 694 words · 236 seconds
Argument 1
Cloud service providers must maintain cybersecurity standards through third-party validation while enabling AI innovation
EXPLANATION
Bhattacharya emphasizes the importance of cloud service providers maintaining standards like ISO and NIST with independent third-party validation of security controls. He advocates for a ‘trust but verify’ approach where organizations work with trusted service providers while maintaining validation of their security measures.
EVIDENCE
He notes that AWS and other cloud providers undergo regular third-party audits and maintain various compliance certifications to validate their security controls.
MAJOR DISCUSSION POINT
AI as Infrastructure and Systemic Risk
Argument 2
Generative AI lowers barriers for threat actors but doesn’t fundamentally change attack nature, so existing security principles still apply
EXPLANATION
Bhattacharya argues that while generative AI serves as an accelerant for phishing attacks, credential attacks, and malicious code, it hasn’t fundamentally changed the nature of cyber attacks. Therefore, the same foundational cybersecurity principles that worked before generative AI still hold true.
EVIDENCE
He references a 2025 report showing how generative AI has lowered barriers for threat actors, and lists fundamental security practices like multi-factor authentication, strong passwords, regular updates, and system scanning as still being effective.
MAJOR DISCUSSION POINT
Cybersecurity and Risk Management
AGREED WITH
Ajay Kumar Chaudhary
Argument 3
Organizations must become active participants in cybersecurity rather than passive passengers
EXPLANATION
Bhattacharya emphasizes that the changing cybersecurity landscape requires organizations, especially in financial services, to actively engage in cybersecurity rather than being passive. This is particularly important in India where there’s a large citizenry with different levels of financial literacy.
EVIDENCE
He notes that as more people digitize, so do those looking to attack vulnerabilities, making active participation in cybersecurity essential.
MAJOR DISCUSSION POINT
Cybersecurity and Risk Management
Argument 4
Human-AI collaboration should position AI as a tool in the human loop rather than humans in the AI loop
EXPLANATION
Bhattacharya advocates for a paradigm shift where instead of thinking about humans in the AI loop, humans should think of AI as a tool in their loop. This approach focuses on using AI technologies for faster responses, automated scanning, and automated reporting to enable better human decision-making.
MAJOR DISCUSSION POINT
Trust and Transparency in AI Systems
DISAGREED WITH
Ajay Kumar Chaudhary
Argument 5
Focus should be on equipping systems to handle known risks while building nimble adaptation capabilities
EXPLANATION
Bhattacharya argues that rather than trying to predict unknown future risks with certainty, organizations should focus on addressing known risks effectively while building capabilities to be nimble and adapt as technology evolves. He emphasizes that there is no zero risk, only risk management.
EVIDENCE
He notes that there is much that can be done with current knowledge and tools that organizations may not be doing well, suggesting focus on current capabilities before worrying about unpredictable future scenarios.
MAJOR DISCUSSION POINT
Cybersecurity and Risk Management
Moderator
1 argument · 16 words per minute · 145 words · 531 seconds
Argument 1
AI governance should be embedded as a layer within existing technology governance frameworks rather than treated as separate
EXPLANATION
The moderator emphasizes that AI governance should not be viewed through a completely different lens but should be integrated as an embedded layer within the governance frameworks already used for other technologies. This approach builds on existing regulatory infrastructure rather than creating entirely new systems.
EVIDENCE
The moderator states this aligns with the overall theme of the summit looking at governance of AI as an embedded layer of governance that we already govern technologies with.
MAJOR DISCUSSION POINT
AI Governance Framework and Embedded Governance
AGREED WITH
Ajay Kumar Chaudhary
Priyanka Jain
4 arguments · 110 words per minute · 1025 words · 555 seconds
Argument 1
Countries that master general purpose technologies gain outsized economic advantages, making AI a potential inflection point for India
EXPLANATION
Jain argues that historically, nations that have successfully adopted and mastered transformative technologies like steam engines, electricity, and the internet have achieved disproportionate economic benefits. She positions AI as potentially being such an inflection point for India’s economic development.
EVIDENCE
She references historical examples of general purpose technologies from steam engines to early electricity to the Internet that provided economic advantages to early adopters.
MAJOR DISCUSSION POINT
India’s Strategic Position in Global AI Landscape
Argument 2
India has historically created, adapted, scaled and governed technology uniquely, particularly with digital public infrastructure
EXPLANATION
Jain emphasizes India’s track record of not simply adopting technologies but transforming them through creation, adaptation, scaling and governance in distinctly Indian ways. She highlights this pattern with identity systems, payments, and digital public infrastructure as examples of India’s innovative approach.
EVIDENCE
She specifically mentions India’s work with identity, payments, and digital public infrastructure as examples of this unique approach.
MAJOR DISCUSSION POINT
India’s Strategic Position in Global AI Landscape
AGREED WITH
Ajay Kumar Chaudhary, Sanjeev Sanyal
Argument 3
AI is becoming an invisible system embedded in critical financial processes, requiring trust-building similar to UPI adoption
EXPLANATION
Jain draws parallels between AI adoption and UPI’s success, noting that AI is quietly integrating into credit underwriting, onboarding flows, grievance systems, and regulatory reporting. She argues that building trust in these invisible AI systems is crucial for their acceptance and effectiveness.
EVIDENCE
She mentions specific examples of AI integration in credit underwriting decisions, onboarding flows, grievance redressal systems, and regulatory reporting.
MAJOR DISCUSSION POINT
Trust and Transparency in AI Systems
Argument 4
Innovation must balance speed with fairness, protecting people from bias while enabling sustainable progress
EXPLANATION
Jain advocates for a balanced approach to AI innovation that considers people, planet, and progress simultaneously. She emphasizes that innovation should not only be fast but also fair, protecting individuals from opaque systems and bias while scaling sustainably and responsibly.
EVIDENCE
She references the summit theme of people, planet and progress, specifically mentioning protection from opaque systems or bias, sustainable scaling, and fair innovation.
MAJOR DISCUSSION POINT
Inclusion and Fairness in AI Implementation
Murlidhar Manchala
1 argument · 208 words per minute · 813 words · 233 seconds
Argument 1
Regulators should avoid discussing specific risks and focus on governance frameworks to address risk underestimation
EXPLANATION
Manchala takes a cautious approach to risk discussion, preferring not to elaborate on specific risks but emphasizing that the main concern is underestimating risks in general. He argues that comprehensive governance frameworks are the primary tool for addressing this challenge, particularly given the emergent nature of AI technology.
EVIDENCE
He states ‘we would not like to talk about risk’ and notes that risk underestimation ‘can be addressed only through the governance, particularly in the present emergence of technology.’
MAJOR DISCUSSION POINT
AI Governance Framework and Embedded Governance
Audience
3 arguments · 187 words per minute · 555 words · 177 seconds
Argument 1
India needs clear strategy for sovereign data asset utilization and model leverage in the global AI stack competition
EXPLANATION
The audience member raises concerns about India’s positioning in the global AI stack, particularly regarding how the country can leverage its sovereign data assets to build competitive AI tools and models. They note that different countries are developing their own AI stacks and controlling access, making India’s strategy crucial for maintaining technological sovereignty.
EVIDENCE
The audience member mentions that ‘different countries are looking at their stack as their stack in which they’re going to give you access’ and references discussions about sovereign data assets and model leverage across different summits.
MAJOR DISCUSSION POINT
India’s Strategic Position in Global AI Landscape
Argument 2
Regulatory sandboxes should be extended beyond individual regulators to enable cross-regulatory AI experimentation
EXPLANATION
The audience member suggests that current regulatory sandboxes, while valuable, should be expanded to work across different regulatory bodies rather than being confined to individual regulators. This would enable more comprehensive testing of AI solutions that span multiple regulatory domains and stakeholder types.
EVIDENCE
The audience member mentions working ‘with a number of entities which cut across different regulators’ and suggests extending sandbox capabilities ‘beyond just IFSA to also the other regulators.’
MAJOR DISCUSSION POINT
Regulatory Approaches and Innovation Balance
AGREED WITH
Praveen Kamat, Murlidhar Manchala
Argument 3
Data processors need greater regulatory engagement and clearer definitions for consent-backed data consumption
EXPLANATION
The audience member argues that data processors, as distinct from data owners, currently lack adequate representation in regulatory discussions despite being key stakeholders. They advocate for clearer regulatory definitions around consent-backed APIs for data consumption and more inclusive processes that give data processors a seat at the regulatory table.
EVIDENCE
The audience member identifies as ‘a large data processor, not a data owner, but a data processor’ and mentions the need for ‘regulatory definition of that with participation from a data processor like us’ regarding consent-backed APIs.
MAJOR DISCUSSION POINT
Data governance
Agreements
Agreement Points
AI governance must be embedded throughout the system lifecycle rather than applied as an afterthought
Speakers: Ajay Kumar Chaudhary, Moderator
Governance must be embedded by design throughout AI lifecycle rather than applied as compliance afterthought
AI governance should be embedded as a layer within existing technology governance frameworks rather than treated as separate
Both speakers emphasize that AI governance should be integrated into systems from the beginning rather than added later as a compliance measure, building on existing governance frameworks
AI systems require transparency and explainability rather than operating as black boxes
Speakers: Ajay Kumar Chaudhary, Murlidhar Manchala, Sanjeev Sanyal
Trust in AI requires predictable, explainable, and accountable systems that align with public interest
AI systems should be ‘glass boxes’ rather than ‘black boxes’ with transparent processes for customers
Financial regulation model with audits, transparency, shutdown mechanisms, and separation of functions should apply to AI
All three speakers agree that AI systems must be transparent and explainable to users, with Chaudhary emphasizing trust-building, Manchala advocating for ‘glass boxes’, and Sanyal proposing audit mechanisms similar to financial regulation
Regulatory sandboxes and interoperability across regulators are valuable for AI innovation
Speakers: Praveen Kamat, Murlidhar Manchala, Audience
Interoperable sandbox mechanisms across regulators enable cross-sector AI solution testing
Regulatory sandboxes should be extended beyond individual regulators to enable cross-regulatory AI experimentation
There is consensus that regulatory sandboxes should work across multiple regulatory bodies to enable comprehensive testing of AI solutions that span different domains
India has strategic advantages in AI development through its digital infrastructure and data assets
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal, Priyanka Jain
India can demonstrate that scale and safety can coexist through convergence of digital infrastructure, regulatory foresight, and innovation
India’s large population provides valuable data for AI training, but rights and processing capabilities must be domestically controlled
India has historically created, adapted, scaled and governed technology uniquely, particularly with digital public infrastructure
All speakers recognize India’s unique position with its large population, digital infrastructure, and track record of technology adaptation as providing strategic advantages for AI development
Cybersecurity risks are amplified in AI environments requiring enhanced defensive measures
Speakers: Ajay Kumar Chaudhary, Vikram Kishore Bhattacharya
AI amplifies both defensive capabilities and adversarial threats, requiring strengthened detection and response systems
Generative AI lowers barriers for threat actors but doesn’t fundamentally change attack nature, so existing security principles still apply
Both speakers acknowledge that AI creates new cybersecurity challenges by empowering both defenders and attackers, requiring enhanced but fundamentally similar security approaches
Similar Viewpoints
Both RBI representatives agree that organizations implementing proper AI governance frameworks should receive lenient treatment for initial failures, recognizing the probabilistic nature of AI technology
Speakers: Ajay Kumar Chaudhary, Murlidhar Manchala
Entities implementing robust controls and governance frameworks should receive supervisory relief for first-time lapses
Both speakers advocate for cautious, adaptive approaches to AI rather than blind trust, emphasizing the need to focus on known capabilities while building flexibility for unknown future developments
Speakers: Sanjeev Sanyal, Vikram Kishore Bhattacharya
Unlike UPI which is deliberately non-emergent, AI systems require healthy skepticism rather than blind trust
Focus should be on equipping systems to handle known risks while building nimble adaptation capabilities
Both speakers emphasize that AI development must prioritize fairness and inclusion, ensuring that technological progress doesn’t perpetuate or create new forms of discrimination
Speakers: Ajay Kumar Chaudhary, Priyanka Jain
Inclusive AI requires representative training data, impact audits, and community feedback mechanisms
Innovation must balance speed with fairness, protecting people from bias while enabling sustainable progress
Unexpected Consensus
Rejection of comprehensive risk-based governance approaches
Speakers: Sanjeev Sanyal, Murlidhar Manchala
Risk-based governance approach is flawed because AI is emergent and unpredictable, making ex-ante risk assessment impossible
Regulators should avoid discussing specific risks and focus on governance frameworks to address risk underestimation
It’s unexpected that both a policy advisor and an RBI representative would be skeptical of detailed risk-based approaches, with Sanyal explicitly criticizing European-style risk assessment and Manchala preferring not to discuss specific risks, instead focusing on general governance frameworks
Preference for compartmentalized rather than interconnected AI systems
Speakers: Sanjeev Sanyal, Vikram Kishore Bhattacharya
Compartmentalized AI systems are safer and more efficient than interconnected ‘AI of everything’ approaches
Human-AI collaboration should position AI as a tool in the loop rather than a human in the loop
Both speakers, despite coming from different backgrounds (policy and technology), converge on preferring bounded, compartmentalized AI applications rather than comprehensive interconnected systems, which goes against much of the industry rhetoric about AI integration
Overall Assessment

The speakers demonstrated strong consensus on fundamental principles of AI governance including the need for embedded governance, transparency, regulatory sandboxes, and India’s strategic positioning. There was also unexpected agreement on skepticism toward comprehensive risk-based approaches and preference for compartmentalized AI systems.

High level of consensus on core governance principles with some surprising alignment on more nuanced technical and regulatory approaches. This suggests a mature understanding of AI challenges across different stakeholder groups and could facilitate more coordinated policy development in India’s AI ecosystem.

Differences
Different Viewpoints
Risk-based governance approach for AI regulation
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
Governance intensity should be risk-based. It should be risk-based intensity.
Risk-based governance approach is flawed because AI is emergent and unpredictable, making ex-ante risk assessment impossible
Chaudhary advocates for risk-based governance intensity as one of four foundational pillars, while Sanyal fundamentally rejects risk-based systems for AI, arguing they are impossible to implement effectively because AI’s emergent nature makes ex-ante risk assessment impossible
Trust in AI systems
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
Trust in AI requires predictable, explainable, and accountable systems that align with public interest
Unlike UPI which is deliberately non-emergent, AI systems require healthy skepticism rather than blind trust
Chaudhary emphasizes building trust in AI systems through predictability and explainability, while Sanyal argues that trust in AI is inappropriate and that healthy skepticism should be maintained due to AI’s emergent and unpredictable nature
AI as infrastructure classification
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
AI has evolved from analytical tool to infrastructure that shapes financial outcomes and must be treated as systemically relevant
AI systems have emergent behaviors unlike static infrastructure like UPI, requiring different regulatory approaches
Chaudhary treats AI as infrastructure requiring the same standards as critical financial utilities, while Sanyal distinguishes AI from infrastructure like UPI, emphasizing that AI’s emergent behaviors make it fundamentally different from predictable infrastructure
Human-AI interaction paradigm
Speakers: Ajay Kumar Chaudhary, Vikram Kishore Bhattacharya
Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle
Human-AI collaboration should position AI as a tool in the loop rather than a human in the loop
Chaudhary focuses on embedding human oversight and governance throughout AI lifecycle, while Bhattacharya advocates for a paradigm shift where AI is positioned as a tool in the human loop rather than humans being in the AI loop
Unexpected Differences
Fundamental nature of AI governance philosophy
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
Governance must be embedded by design throughout AI lifecycle rather than applied as compliance afterthought
Risk-based governance approach is flawed because AI is emergent and unpredictable, making ex-ante risk assessment impossible
Despite both being senior government officials focused on AI governance, they have fundamentally different philosophical approaches – Chaudhary advocates for comprehensive embedded governance while Sanyal rejects the entire premise of risk-based regulation for emergent technologies
Role of prediction in AI regulation
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
A risk-based approach to AI governance acknowledges that innovation and prudence are not opposing forces
Do not try and necessarily guess where this is headed. But of course, we need to engage in it
Unexpectedly, the policy maker (Chaudhary) advocates for predictive risk assessment while the economic advisor (Sanyal) strongly warns against trying to predict AI’s direction, representing a reversal of typical cautious vs. progressive stances
Overall Assessment

The discussion revealed significant philosophical disagreements about AI governance approaches, particularly between embedded risk-based governance versus financial market-style regulation, and whether AI should be trusted or approached with skepticism

Moderate to high disagreement on fundamental approaches, but strong consensus on the importance of governance, transparency, and India’s strategic positioning. The disagreements reflect different schools of thought on regulating emergent technologies and could lead to conflicting policy directions if not reconciled

Partial Agreements
Both agree that governance must be built into AI systems from the beginning rather than added later, but disagree on the approach – Chaudhary favors embedded risk-based governance while Sanyal prefers financial market-style regulation with audits and accountability
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
Governance cannot be an overlay applied after innovation has already been scaled. It must be embedded by design.
Financial regulation model with audits, transparency, shutdown mechanisms, and separation of functions should apply to AI
Both emphasize the importance of transparency and explainability in AI systems, but propose different mechanisms – Chaudhary through embedded governance pillars and Sanyal through mandatory audits similar to financial market regulation
Speakers: Ajay Kumar Chaudhary, Sanjeev Sanyal
Explainability and transparency
Financial regulation model with audits, transparency, shutdown mechanisms, and separation of functions should apply to AI
Both acknowledge existing interoperable sandbox mechanisms across regulators, but Kamat emphasizes legal constraints limiting effectiveness while Manchala notes limited usage due to compliance confidence
Speakers: Praveen Kamat, Murlidhar Manchala
Interoperable sandbox mechanisms across regulators enable cross-sector AI solution testing
Takeaways
Key takeaways
AI governance must be embedded by design throughout the entire AI lifecycle rather than applied as a compliance afterthought, requiring integration of accountability, transparency, and risk management from conceptualization to deployment
Traditional risk-based governance approaches are inadequate for AI because of its emergent and unpredictable nature, making ex-ante risk assessment nearly impossible – governance should focus on ex-post accountability with clear responsibility chains
AI should be treated as systemically relevant financial infrastructure subject to the same standards of resilience and accountability as critical financial utilities, but managed through compartmentalized systems rather than interconnected ‘AI of everything’ approaches
India can leverage its large population data assets and clean-slate regulatory environments like Gift City to become a global leader in AI governance while maintaining sovereign control over data rights and processing capabilities
Financial services regulation models with audits, transparency requirements, shutdown mechanisms, and separation of functions provide a viable framework for AI governance that balances innovation with prudential oversight
Trust in AI systems requires predictable, explainable, and accountable operations with transparent processes for customers, positioning AI as a tool ‘in the loop’ rather than requiring humans ‘in the loop’
AI implementation must prioritize inclusion and fairness through representative training data, impact audits, and community feedback mechanisms to avoid perpetuating historical inequalities in financial services access
Resolutions and action items
Establish clear ex-ante responsibility chains defining who will be held accountable when AI systems fail, similar to how company directors are held responsible in financial markets
Implement mandatory audit systems for AI explainability, potentially creating ‘chartered AI audits’ for systems above certain thresholds
Create compartmentalized AI systems with deliberate separation and firewalls rather than pursuing interconnected AI architectures
Develop interoperable sandbox mechanisms across regulators (RBI, SEBI, IRDAI, IFSCA) to enable cross-sector AI solution testing
Provide supervisory relief and lenient regulatory treatment for entities that implement robust AI governance frameworks and conduct proper root cause analysis after lapses
Reform copyright and intellectual property laws to address AI-generated innovations and clarify data ownership rights
Establish domestic data center infrastructure and AI processing capabilities to maintain sovereign control over India’s data assets
Unresolved issues
How to effectively regulate AI systems that have emergent behaviors and unpredictable evolution paths without stifling innovation
Where exactly in the AI value chain responsibility should be assigned – whether with algorithm creators, data providers, or system deployers
How to balance the need for AI transparency and explainability with the competitive advantages that come from proprietary AI systems
What specific mechanisms should be used to ensure AI systems remain fair and inclusive as they scale, particularly for underserved populations
How to manage the concentration risk in AI supply chains, particularly regarding advanced chips and foundational models controlled by few global players
What constitutes appropriate ‘bounded problems’ for AI applications versus dangerous ‘unbounded’ use cases
How to create effective international coordination on AI governance while maintaining national sovereignty over critical AI infrastructure
Suggested compromises
Implement a balanced regulatory approach that encourages experimentation through sandboxes while maintaining institutional responsibility for outcomes
Allow first-time regulatory lapses for entities with robust governance frameworks while maintaining strict accountability for repeated failures
Focus on ex-post punishment systems with clear skin-in-the-game mechanisms rather than trying to predict and prevent all possible AI risks ex-ante
Enable compartmentalized AI development that solves bounded problems effectively while avoiding system-wide interconnection risks
Create regulatory frameworks that reward entities implementing strong AI governance with calibrated supervisory relief
Develop AI governance that is ‘context-aware’ for local realities while remaining globally coherent
Balance innovation promotion with prudential oversight by treating AI governance as a strategic imperative rather than a regulatory burden
Thought Provoking Comments
It’s not always the first movers who benefit from it and it’s not the case that even those who invent these technologies know where they’re headed. The European Renaissance…was based on three technologies. One was the printing press, the other was gunpowder, and the third was mathematics. The first two were invented by the Chinese and the third was invented by the Indians, but it is the Europeans that took it, owned it and dominated the world.
This historical analogy fundamentally reframes the AI race narrative, challenging the assumption that first-mover advantage or invention guarantees dominance. It provides crucial perspective that technological mastery and strategic application matter more than being first to market.
This comment set the tone for the entire discussion by establishing that India doesn’t need to lead in AI invention but can still achieve dominance through strategic implementation. It shifted the conversation from anxiety about being behind to confidence about India’s potential for AI leadership.
Speaker: Sanjeev Sanyal
You cannot actually put AI or any types of AI into any real risk bucket because this is an emergent evolving thing…if you are saying I am going to do risk based it means that you have some assessment of where that thing will go and I am telling you that it is almost impossible to do this
This directly challenges the dominant regulatory paradigm of risk-based governance that most frameworks (including EU’s) rely on. It’s intellectually honest about the fundamental unpredictability of AI systems and questions whether traditional regulatory approaches can work.
This comment fundamentally shifted the discussion away from conventional risk-based regulation toward alternative governance models. It forced other panelists to defend or reconsider their approaches, leading to deeper exploration of ex-post vs ex-ante regulatory frameworks.
Speaker: Sanjeev Sanyal
There are other systems that we manage where we have no idea where they are going. Take for example the stock market…we manage it by creating a framework which…has audits and enforces transparency and explainability…systems of shutting things down when things go wrong…deliberately creates systems of separation…and creates skin in the game
This provides a practical alternative to risk-based regulation by drawing parallels to financial market regulation. It offers concrete, implementable solutions (audits, circuit breakers, compartmentalization, accountability) rather than theoretical frameworks.
This shifted the conversation from abstract governance principles to concrete regulatory mechanisms. It provided a roadmap that other panelists could build upon and influenced subsequent discussions about compartmentalization and accountability.
Speaker: Sanjeev Sanyal
I am personally very suspicious of any idea of the internet of everything and the AI of everything that would be a disaster I think we need to be willing to allow compartmentalized AI
This challenges the prevailing tech industry narrative of interconnected AI systems and proposes deliberate fragmentation as a safety measure. It’s counterintuitive to typical efficiency arguments and prioritizes safety over optimization.
This introduced the concept of deliberate compartmentalization as a governance strategy, which became a recurring theme. Other panelists, including Priyanka Jain, picked up on this concept and it influenced discussions about how Gift City could serve as a compartmentalized testing ground.
Speaker: Sanjeev Sanyal
Embedded governance means integrating accountability, transparency, and risk management into every stage of the AI life cycle. from conceptualization and data acquisition to model development, deployment and ongoing monitoring…governance cannot be an overlay applied after innovation has already been scaled. It must be embedded by design.
This articulates a fundamental shift from reactive to proactive governance, emphasizing that governance must be built into AI systems from the ground up rather than added later. It provides a comprehensive framework for thinking about AI governance across the entire development lifecycle.
This keynote comment established the central theme and framework for the entire panel discussion. All subsequent conversations referenced back to this concept of ’embedded governance,’ and panelists used it as a foundation to build their arguments about regulation, implementation, and oversight.
Speaker: Ajay Kumar Chaudhary
When you build something from scratch and when you have a brand new regulator, like IFSC which was created in 2020, you start with a clean slate. So that means you have more leg room and you have more space to experiment. So we don’t have baggage of the legacy systems.
This highlights a unique advantage that new regulatory jurisdictions have over established ones – the ability to design governance frameworks without being constrained by legacy systems. It suggests that innovation in governance itself can be a competitive advantage.
This comment introduced the concept of regulatory innovation and positioned Gift City as a potential laboratory for AI governance. It complemented Sanyal’s compartmentalization argument by providing a concrete example of how separated regulatory environments could foster innovation.
Speaker: Praveen Kamat
Rather than AI thinking about a human in the loop, humans think AI as a loop to move forward
This represents a paradigm shift from the traditional ‘human-in-the-loop’ concept to ‘AI-in-the-loop,’ suggesting that humans should remain the primary decision-makers while using AI as a tool rather than ceding control to AI systems.
This reframing influenced how the panel discussed the relationship between human oversight and AI automation, particularly in Vikram’s response about cybersecurity and the need for humans to become ‘active participants’ rather than ‘passive passengers.’
Speaker: Priyanka Jain (referencing earlier discussion)
Copyright law. Who is the owner of a particular innovation? At which point do you call it an innovation? And is that innovation owned by the person who put the prompt in? Is it owned by the person on whose data it got trained? Or it belongs to the algorithm that created that innovation?
This identifies a fundamental legal and philosophical challenge that current legal frameworks are unprepared to handle. It highlights how AI challenges basic concepts of ownership, creativity, and intellectual property that underpin modern economic systems.
This comment broadened the discussion beyond technical governance to fundamental legal and philosophical questions. It demonstrated that AI governance isn’t just about managing technology but about rethinking basic legal and economic concepts.
Speaker: Sanjeev Sanyal
Overall Assessment

These key comments fundamentally shaped the discussion by challenging conventional wisdom about AI governance and regulation. Sanyal’s historical perspective and critique of risk-based regulation established an intellectual framework that moved the conversation away from standard regulatory approaches toward more innovative, pragmatic solutions. His emphasis on compartmentalization and ex-post accountability mechanisms provided concrete alternatives that other panelists could build upon.

Chaudhary’s concept of ‘embedded governance’ provided the thematic foundation, while Kamat’s insights about regulatory innovation and clean-slate advantages offered practical pathways for implementation. Together, these comments created a discussion that was both philosophically grounded and practically oriented, moving beyond theoretical frameworks to actionable governance strategies.

The conversation evolved from abstract policy discussions to concrete implementation mechanisms, with each key insight building upon previous ones to create a comprehensive approach to AI governance that balances innovation with responsibility.

Follow-up Questions
How do we determine ownership and copyright in AI-generated innovations – does it belong to the person who created the prompt, the data owner, or the algorithm creator?
This is a fundamental legal and philosophical question that will have significant practical implications as AI becomes more prevalent in creating content and innovations
Speaker: Sanjeev Sanyal
How can we develop a judicial system capable of handling complex AI-related disputes and copyright issues?
Current judicial systems may not be equipped to handle the unique challenges posed by AI-generated content and related disputes
Speaker: Sanjeev Sanyal
What specific mechanisms should be established for AI auditing, similar to chartered accountant audits for companies?
There’s a need to develop standardized auditing processes for AI systems to ensure explainability and accountability
Speaker: Sanjeev Sanyal
How can we establish clear ex-ante responsibility chains for AI systems to ensure accountability when things go wrong?
Currently, when AI systems fail, responsibility can be diffused among algorithm creators, data providers, and users – clear accountability frameworks are needed
Speaker: Sanjeev Sanyal
How can GIFT City develop and implement AI governance frameworks while balancing innovation with regulatory compliance?
GIFT City has the potential to serve as a testing ground for AI governance but needs to navigate the balance between experimentation and regulation
Speaker: Praveen Kamat
What are the specific technical requirements and processes for creating consent-backed APIs for data consumption in regulated environments?
Data processors need clearer regulatory definitions and frameworks for handling consent-based data sharing across different regulatory jurisdictions
Speaker: Audience member (Aditya)
How can cross-regulatory sandbox mechanisms be improved to better accommodate solutions that span multiple regulators?
Current interoperable sandboxes face legal and jurisdictional challenges that limit their effectiveness for comprehensive AI solutions
Speaker: Audience member (Aditya)
How can India leverage its sovereign data assets to build competitive advantages in AI while maintaining data rights and processing capabilities?
India needs to develop strategies to monetize its large data resources while maintaining sovereignty and building domestic AI capabilities
Speaker: Audience member (Aditya)
What specific governance frameworks are needed for AI systems that operate across different economic cycles and stress scenarios?
AI systems in finance need to be tested and validated across various economic conditions, requiring new governance approaches
Speaker: Ajay Kumar Chaudhary
How can we develop effective mechanisms for continuous monitoring and recalibration of AI systems to prevent model drift and bias?
AI systems evolve over time and need ongoing oversight to maintain their effectiveness and fairness
Speaker: Ajay Kumar Chaudhary

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Setting the Rules: Global AI Standards for Growth and Governance

Session at a glance: summary, key points, and speakers overview

Summary

This discussion focused on the critical need for global AI standards and the challenges of implementing them across different stakeholders. The panel, moderated by AI transformation consultant Bhushan Sethi, brought together representatives from major tech companies (Microsoft, Google DeepMind, OpenAI, Qualcomm), standards organizations (MLCommons, Bureau of Indian Standards), policy makers (Singapore government), and the Frontier Model Forum to explore why AI standards are essential and how they can be developed effectively.


The panelists agreed that AI standards serve multiple crucial purposes: building consumer and enterprise trust, enabling global cooperation, solving collective action problems, and providing a common language for risk management across the AI supply chain. They emphasized that standards help define “what good looks like” in AI development and deployment, particularly important as regulations often reference standards that don’t yet exist. The discussion highlighted three key areas where standards are most needed: testing methodologies for AI systems, transparency and disclosure practices, and incident reporting mechanisms.


A major theme was the challenge of measurement and benchmarking in AI systems. Rebecca Weiss from MLCommons explained that effective benchmarking requires developing taxonomies, datasets, and evaluator systems that can estimate uncertainty rather than provide binary safety assessments. The panelists stressed that standards must be inclusive and accessible, particularly for smaller companies that lack resources to develop their own risk management frameworks. They also addressed concerns about ensuring standards are substantive rather than performative, noting that regulatory requirements create pressure for meaningful compliance.


Looking forward, the panel emphasized the need for interoperable standards that can evolve with rapidly advancing AI capabilities while maintaining consistent processes for risk identification and management. The discussion concluded with recognition that successful AI standards require ongoing collaboration between industry, government, and civil society to address both technical challenges and diverse global needs, including language bias and cultural considerations.


Key points

Major Discussion Points:

The Need for AI Standards and Global Cooperation: Panelists emphasized that AI standards are essential for establishing trust, enabling global cooperation, and creating alignment on “what good looks like” across different stakeholders. Standards help solve collective action problems and provide legitimacy through open, inclusive processes that benefit companies of all sizes, not just large tech firms.


Technical Measurement and Benchmarking Challenges: The discussion highlighted the complexity of measuring AI systems, focusing on estimating uncertainty rather than binary “safe/unsafe” determinations. This involves developing taxonomies, datasets, and evaluator systems that can provide statistical guarantees about AI behavior under specific conditions, with different sectors having varying tolerance levels for uncertainty.


Standards vs. Regulation Relationship: Panelists explored how standards often fill gaps left by regulation, with some jurisdictions requiring AI frameworks without specifying their contents. Standards provide the technical details needed for regulatory compliance while offering market differentiation opportunities even without regulatory mandates.


Implementation and Future-Proofing: The conversation addressed practical challenges of implementing standards across the AI value chain, from model developers to application deployers. Emphasis was placed on creating process-oriented standards that can adapt to evolving AI capabilities while maintaining interoperability and avoiding the need to “reinvent the wheel” for each new development.


Inclusivity and Accessibility Concerns: Discussion covered ensuring standards are accessible to smaller companies and address diverse global needs, including language bias and cultural considerations. Panelists acknowledged the need for broader stakeholder participation beyond large tech companies to create truly representative standards.


Overall Purpose:

The discussion aimed to demystify AI standard-setting by bringing together diverse stakeholders (tech companies, standard-setting organizations, government representatives, and policy experts) to explore why AI standards are necessary, how they should be developed and measured, and what implementation looks like in practice. The goal was to demonstrate alignment across different sectors on the importance of collaborative, inclusive approaches to AI governance.


Overall Tone:

The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkable consensus on the importance of standards, with no significant disagreements or tensions. The tone was professional yet accessible, with participants building on each other’s points rather than challenging them. The atmosphere remained optimistic about the potential for global cooperation on AI standards, despite acknowledging significant technical and implementation challenges. The tone became slightly more technical during the measurement discussion but returned to broader strategic themes, maintaining engagement with the audience throughout.


Speakers

Speakers from the provided list:


Bhushan Sethi – Consultant around AI transformation, helps companies implement AI and drive return on investment in a responsible way


Lee Wan Sie – Works in Singapore government in AI governance and policy


Chris Meserole – Executive director of the Frontier Model Forum, focuses on advancing Frontier AI safety and security


Etienne Chaponniere – Vice president of technical standards at Qualcomm


Esther Tetruashvily – AI Standards Lead at OpenAI


Kshitij Bathla – Works at Bureau of Indian Standards (BIS), the National Standards Body of India, representing ISO/IEC JTC 1/SC 42


Joslyn Barnhart – Works at Google DeepMind on AI standards, governance, and policy


Amanda Craig – Leads the public policy team with AI and the Office of Responsible AI at Microsoft


Rebecca Weiss – Executive director of MLCommons, an AI benchmarking organization and engineering consortium


Audience – Multiple audience members who asked questions during the Q&A session


Additional speakers:


None – all speakers who participated in the discussion were included in the provided speakers names list.


Full session report: comprehensive analysis and detailed insights

This comprehensive discussion on AI standards brought together a diverse panel of stakeholders to explore the critical need for global cooperation in establishing frameworks for artificial intelligence governance. Moderated by AI transformation consultant Bhushan Sethi at a summit in India focused on “planet, people, and prosperity,” the panel included representatives from major technology companies (Microsoft, Google DeepMind, OpenAI, Qualcomm), standards organisations (MLCommons, Bureau of Indian Standards), policy makers (Singapore government), and the Frontier Model Forum. The conversation aimed to demystify AI standard-setting whilst demonstrating the remarkable consensus that exists across different sectors on the importance of collaborative approaches to AI governance.


The Fundamental Need for AI Standards

The discussion began with establishing why AI standards are essential in today’s rapidly evolving technological landscape. Kshitij Bathla from the Bureau of Indian Standards emphasised that standards serve as tools enabling consumer trust whilst ensuring industry quality assurance across AI ecosystems. This perspective was reinforced by Chris Meserole from the Frontier Model Forum, who articulated that standards fundamentally solve collective action problems by ensuring no single actor is disadvantaged whilst managing AI risks effectively.


Lee Wan Sie from Singapore’s government highlighted that standards create alignment on “what good looks like” in AI governance, particularly in three key areas: testing methodologies for AI systems, transparency and disclosure practices, and incident reporting mechanisms. This framework provides a common language for stakeholders across the AI value chain to communicate about risk management and quality assurance.


Amanda Craig from Microsoft explained how standards function as translation mechanisms between internal company practices and external stakeholder understanding, noting that Microsoft has developed internal responsible AI standards that align all stakeholders around common expectations.


The Regulatory Context and Global Cooperation

A particularly striking revelation emerged from Joslyn Barnhart of Google DeepMind, who observed that “regulation has gone ahead and jumped to, you know, we’ve regulated and essentially made reference to standards that do not yet exist.” This comment fundamentally reframed the discussion from theoretical benefits to practical necessity, explaining why major technology companies are suddenly prioritising standards work with unprecedented urgency.


Chris Meserole elaborated on this phenomenon, noting that multiple jurisdictions are recognising frontier AI risks and delegating standard-setting to technical bodies rather than specifying requirements directly. This approach allows governments to address citizen concerns about AI risks whilst leveraging technical expertise for implementation details.


From India’s perspective, Kshitij Bathla explained how the country’s approach aligns with the “Manav mission” whilst adapting global standards to local use cases and conditions. He emphasised that standards bodies are interconnected globally, creating collaborative rather than siloed approaches to AI governance.


Technical Measurement and Industry Implementation

Rebecca Weiss from MLCommons provided crucial insights into benchmarking methodologies, explaining that effective benchmarking consists of three essential components: a taxonomy for categorising risks and capabilities, datasets for testing, and evaluator systems for assessment. She articulated a sophisticated approach to AI evaluation: “You’re trying to provide a sense of, I’m not going to tell you that your system is, quote-unquote, safe or not. What I’m going to tell you is, under these considerations, under these conditions, under these assumptions, the estimated likelihood of a particular risky behaviour is X.”


This probabilistic approach shifts responsibility to risk management professionals, deployers, and developers to determine whether the estimated risk levels are acceptable for their specific use cases, moving beyond binary “safe” or “unsafe” determinations.
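
This probabilistic framing can be made concrete with a small sketch: rather than returning a binary “safe” or “unsafe” verdict, a benchmark evaluator reports an estimated rate of risky behaviour together with a confidence interval, leaving the judgment of acceptability to deployers and risk managers. The function name, the interval method, and the sample numbers below are illustrative assumptions, not MLCommons tooling.

```python
import math

def estimate_risk(outcomes, z=1.96):
    """Estimate the likelihood of risky behaviour from binary evaluator
    outcomes (1 = risky response observed, 0 = acceptable), returning the
    point estimate plus a Wilson score confidence interval instead of a
    binary safe/unsafe verdict."""
    n = len(outcomes)
    if n == 0:
        raise ValueError("need at least one evaluated sample")
    p = sum(outcomes) / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return p, max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical run: 12 risky responses observed across 400 test prompts.
rate, lo, hi = estimate_risk([1] * 12 + [0] * 388)
# rate = 0.03; interval roughly (0.017, 0.052) at 95% confidence
```

A deployer in a high-stakes sector might only accept a system whose upper bound `hi` falls below its risk tolerance, while a lower-stakes application might accept a wider interval; the benchmark itself stays neutral on that question.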


Esther Tetruashvily from OpenAI explained that standards serve multiple functions for frontier AI laboratories: translating risk management practices into language that customers can understand, creating universal language for consumer trust, and enabling interoperability. OpenAI’s recent certification under ISO/IEC 42001 exemplifies how companies are using voluntary standards adoption for market differentiation and credibility building.


Etienne Chaponniere from Qualcomm brought a unique perspective as a chipset provider, emphasising the democratising potential of standards. He noted that whilst large companies have resources to develop internal risk management systems, the numerous smaller companies entering the AI space daily lack such capabilities. Standards provide these smaller players with accessible pathways to compliance and quality assurance.


Addressing Inclusivity and Global Diversity

The discussion acknowledged significant challenges in ensuring AI standards serve diverse global populations. A computer science student raised concerns about language bias, noting India’s 22 official languages and the complexity of thinking across multiple linguistic contexts.


Esther Tetruashvily responded by describing OpenAI’s efforts to evaluate model performance across various languages and dialects, including specific testing for Indian linguistic diversity. However, she acknowledged that addressing these challenges requires collective effort and partnership with local ecosystems.


Etienne Chaponniere added that even individual users often think across multiple languages and cultural contexts, emphasising that whilst perfect coverage may be impossible, the focus should be on creating reusable software frameworks that can be adapted for different languages whilst maintaining efficiency.


Future-Proofing and Market Dynamics

Looking towards the future, Chris Meserole distinguished between process standards, which can be future-proofed, and specific evaluations and controls, which will need regular updating. He argued that robust processes for identifying, evaluating, and mitigating risks can remain stable even as specific risks and capabilities evolve.


Lee Wan Sie made a crucial observation that standards provide value even without regulatory mandates, citing voluntary certifications as evidence of market-driven demand for credible quality signals. Joslyn Barnhart explained the economic logic behind industry cooperation on AI safety standards, noting that safety mitigations for frontier AI risks can be costly, creating strong incentives for collective action. She emphasised that “the worst thing for adoption would be a safety incident.”


This market-driven approach suggests that consumer and enterprise demand for trustworthy AI systems creates natural incentives for companies to pursue credible standards compliance, enabling what Barnhart described as a “race to the top” in safety and quality.


Addressing Legitimacy and Accountability Concerns

The discussion was enriched by challenging questions from the audience that highlighted tensions between industry leadership and public accountability. One audience member raised concerns about industry-driven standards potentially serving commercial interests over public needs, questioning how governments with limited technical capacity could effectively audit sophisticated AI compliance programmes.


Jules Polonetsky from the Future of Privacy Forum added complexity by noting that AI governance encompasses broad social policy issues with significant stakeholder disagreements, raising questions about whether standards should seek minimum viable consensus or address different stakeholder priorities through alternative mechanisms.


These concerns highlighted the ongoing challenge of ensuring that technical standards development remains legitimate and serves broader public interests rather than merely facilitating industry coordination.


Conclusions and Path Forward

The discussion revealed significant consensus across diverse stakeholders on fundamental questions about AI standards. All participants agreed that standards are essential for building trust, enabling adoption, solving collective action problems, and creating common frameworks for risk management.


Key areas of convergence included the importance of process-oriented standards that can adapt to evolving capabilities, the need for uncertainty quantification rather than binary safety assessments, the value of global cooperation through interconnected standards bodies, and the necessity of inclusive participation from diverse stakeholders.


However, several significant challenges remain unresolved, including tensions around the pace of standards development, questions about ensuring industry-driven standards serve public interests, and concerns about government capacity for effective oversight. The panellists also acknowledged the complexity of operating across different jurisdictions, including differences between approaches in China and the United States.


The path forward appears to require continued collaboration between industry, regulators, and standards bodies to develop robust process standards whilst building technical capacity for measurement and evaluation. The success of this endeavour will depend on maintaining collaborative spirit whilst addressing legitimate concerns about accountability, inclusivity, and democratic participation in AI governance.


The discussion demonstrated that whilst significant technical and governance challenges remain, there exists a foundation of shared understanding and commitment to collaborative solutions across the AI ecosystem, providing a basis for continued progress on global AI standards development.


Session transcript: complete transcript of the session
Bhushan Sethi

I’m going to provide a brief introduction and then I’ll have my panelists introduce themselves and we’ll get into the discussion. So I’m a consultant around AI transformation. I help companies implement AI, drive the return on investment in a responsible way with AI. What’s really important about this discussion is we need to demystify what we mean by standard setting. There’s been a whole lot of discussion at this week’s summit around the importance of global cooperation, that the importance of inclusion around AI, driving solutions that meet everybody’s needs. The tech CEOs spoke about it yesterday. World leaders have spoken about it. We’re here in India where it’s about planet and people and prosperity. So that’s what the discussion is going to be about.

And we are going to have time for Q&A at the end. But I’m going to have my panelists introduce themselves first, in the order that they’re sitting, and also talk about what standards mean for them and what lens they’re looking at AI from, from a standards perspective.

Rebecca Weiss

Hello, my name is Rebecca Weiss. I’m the executive director of MLCommons. We are an AI benchmarking organization, an engineering consortium that focuses on that problem. And so for us, as a technical standards organization around benchmarking, what that means is two things: one, we want to define the methodology for measurement, and two, we want to create the technical artifacts that allow engineers to integrate this methodology into their development life cycle. So for us, when we see what’s happening in the world today, the ability to measure risk is a big barrier to adoption, and that ability to understand and estimate the uncertainty around the behavior of an AI system is something where we think benchmarking can help.

So, actually, we have a large panel, so I’m going to let everyone else have a chance to talk, and I’m sure more will come out in our dialogue.

Etienne Chaponniere

My name is Etienne Chaponniere, and I work for Qualcomm. I’m a vice president of technical standards. What we do within that role is, effectively, we have a team going to technical standards for AI, and we try to coordinate where it is that we need to go and how we make sure that we understand what it means to be compliant. I come from a world of telecom, as Qualcomm can evoke to some folks. And for us, it’s a very different thing, right? In the telecom world, you cannot ship a product unless you comply with a standard, because you need it for interoperability. In the world of AI standards, it’s a bit different.

So we’re talking more about safety standards, and those typically tend to trail the products. The products are out there, and then they’re going to comply with standards at some point when the standards are available. What matters, however, what is common in all of this, is that the standards need to be available at scale, for everyone, and in a way that engineering teams can adopt easily, at least from the product side. So I think I’ll leave it at that, and, yeah, that’s it.

Lee Wan Sie

I’m Wan Sie, from the Singapore government. I work in AI governance and policy. So many things, but specifically for standards, what it means to us is setting norms. That means alignment globally on what good looks like. And specifically in the area of AI governance, a lot of it at this stage has to do with common methodologies and processes that we have to follow. But it’s still technical. It’s not a checkbox, but hopefully it helps us all align on what good looks like. Thanks.

Bhushan Sethi

And maybe before the next introduction, just so you can get a flavor, we have standard setters and measurers. We have people in industry and we have people who play in the policy and the regulatory environment. And that’s the importance around this topic.

Amanda Craig

Thank you. Hi, everyone. I’m Amanda from Microsoft. I lead the public policy team for AI and the Office of Responsible AI at Microsoft. I think Wan Sie said it well when she described standards as really, like, aligning around what good looks like. And I would offer, you know, we actually at Microsoft, in our office, define something called our Responsible AI Standard, which applies to all of our internal product groups, our engineering function, our sales function. And if you think about it, the role of that internal standard is to align all of the internal stakeholders we have around what good looks like. Externally, we need the same sort of mechanism, right? And that’s the role that standards can play in the broader ecosystem.

So we want to partner with our industry colleagues, and we want to partner with governments and others around the world to be able to define what good looks like, so we can all have that common language and set of expectations.

Joslyn Barnhart

Hello. Joslyn, Google DeepMind, where I also work on issues of AI standards, governance, and policy. Building on what’s been said, I think that was an interesting point, that often technical standards come first and process and safety standards come later. In the space of AI at the moment, actually, regulation has gone ahead and jumped to, you know, we’ve regulated and essentially made reference to standards that do not yet exist. So for places like Google DeepMind, which have not invested heavily in the standards space in the past, this is now of the utmost priority, because we actually need this to assist with implementation and compliance. So that is a primary goal on our side.

Chris Meserole

I’m Chris Meserole. I’m the executive director of the Frontier Model Forum. Our mission is to advance frontier AI safety and security, and we work with many of the leading frontier AI developers and deployers, including several colleagues on the stage today, to advance, you know, best practices for risk management. For frontier AI in particular, there’s a set of unique and novel risks, and over the last couple of years the community has really started to develop and converge around a set of best practices that now, I think, need to start to graduate into actual formal standards. And I think that’s why we’re here; that’s why we’re very interested in the standard-setting space.

Esther Tetruashvily

Hi, everyone. My name is Esther Tetruashvily, and I’m the AI Standards Lead at OpenAI. Echoing many of the things that have already been said, I think standards for us, especially as a frontier AI lab, is about translating some of our practices for risk management into the language of risk management for customers across the supply chain, and it’s also about creating a language for consumer trust and assurance. It’s also about, in the age of agents, thinking about interoperability and helping everyone benefit from this ecosystem that we’re developing here. So I’m really excited to be here and to talk about these issues with you all. Thank you.

Kshitij Bathla

Hello, everyone. I’m Kshitij Bathla from the Bureau of Indian Standards, the National Standards Body of India, and here representing ISO/IEC JTC 1/SC 42, because BIS is a part of SC 42. And for us, I would say, standards are the tools which enable consumer trust in whatever ecosystem they are developed for, as well as enable the industry to ensure quality and consumer trust. That’s the main focus area for us. Thank you.

Bhushan Sethi

So let’s start with why we need standards. Why are we even here? Because there’s a lot of confusion between standards, regulation, legislation. Are we going to get global cooperation around these things? Maybe start from a standard-setting perspective, and then maybe a regulatory perspective. Why are we here? What’s the problem we’re solving, and for whom?

Kshitij Bathla

So I would say the problems are multiple in the standards domain. It always starts with what we are tackling: what is AI? That was the primary focus of JTC 1 and SC 42 when it started. So it defined what is AI, then what is generative AI; now they are talking about what is agentic AI. So I think the most important point is to take care of what is coming next and to keep pace with that. And apart from that, once we have described what it is all about, then how do we verify and validate whatever is being claimed about a system having AI? For example, someone says they have an equipment, call it a washing machine, that is equipped with AI. But is it actually equipped with AI, or is it just a normal logic system? So this is something for which we are trying to do the standardization.

Bhushan Sethi

So it’s about trust, it’s about verifying. The tech firms represented here are moving very fast with model development, so we need standards there. From a regulatory perspective, what would you add?

Lee Wan Sie

I think the most important thing, I wouldn’t say from a regulatory perspective; maybe in terms of why, from an AI policy perspective, we think standards are helpful. Like I said, it’s about defining alignment on what should be in, let’s say, transparency. So if you ask what would be the top three things today that we want to think about setting standards for, one would be testing. How do you do testing for AI? Whether it’s AI models or AI applications, I think that’s one area, because it defines what good testing can look like. Two, perhaps in transparency: what would disclosure look like? Everyone has their own way of sharing the information that they want to share.

One way is to standardize it so it’s easier for the readers, the people who are consuming this information, to understand. And I’m saying this in very, very broad terms; it depends on which reader you’re talking about, who’s going to consume it. But just in broad terms, that is perhaps one way of standardizing. Maybe the third area could be how you’re reporting or monitoring incidents. It’s still very, very early days, but that’s where standards, again in terms of alignment, would be useful: to find alignment in these areas.

Bhushan Sethi

So, how do we report? How do we disclose? How do we make it credible, so it’s not a subjective tick-the-box exercise? From a standard-setting perspective, Chris and Rebecca, what would you add to that before we get the industry view?

Rebecca Weiss

I’m happy to add to this. There’s been a theme that has come up on this panel a couple of times, which is: what is good enough? A standard represents a consensus about what is good enough. The problem we have is who contributes to that consensus. It probably shouldn’t be exclusively an industry perspective; more stakeholders, more constituencies, need to be represented in that definition. And on top of that, as I think Joslyn mentioned when we were talking before this panel, there’s a scientific element to “what is good enough”: how do you define the characteristics of a system such that you can actually create the kind of uncertainty estimation that lives up to a statistical guarantee? But there’s also a political element to it, which represents a whole set of issues that I’m not qualified to talk about, so I will pass it to Chris.

Chris Meserole

I think it’s worth backing up a little to one of the original questions: what are standards for? A big part of what standards are for is to try to solve a collective action problem. There’s a unique set of risks that we are worried about, and we want to make sure everyone’s on the same page so that no actor is disadvantaged or advantaged compared to others. Having standards for how we’re going to manage risks across an ecosystem is extremely useful for that, so there’s a policy dimension to it.

There’s also an adoption dimension to it, because people want to know that there’s a common way across industry of handling a certain class of risk. And being able to set standards through a formal standard-setting body matters because, to one of the points made earlier, by definition a standard-setting body is open. So there’s a legitimacy and a credibility to standard-setting bodies that you don’t have if it’s just industry, or in many cases just government. All of those factors coming together are exactly why we’re so keen on pushing the standards discussions forward.

Bhushan Sethi

Yep. So maybe from a hyperscaler perspective, Esther and then Joslyn, can you make the difference clear: how is this showing up at your firms, and how are you thinking about it?

Esther Tetruashvily

Yeah, that’s a great question. From a market adoption perspective, a lot of our technology, like general-purpose AI models or foundation models, is being integrated into existing ecosystems or on top of existing stacks. And there’s a lot of confusion, in terms of risk controls and risk management, about what that means. We have our own risk management processes; they have their own risk management processes. One of the barriers to adoption is having a common language to talk about how you map those controls onto one another. There’s a separate challenge of who is best positioned to control a particular risk. What are the risks? What are the net new risks?

What are the risks that already exist, where we don’t need to create something net new? So for us, it’s an imperative to translate what we’re doing in terms of managing risks into the language of upstream and downstream customers, so that they can understand it and map those same practices onto their controls. Then we can create a universal language that eases trust and assurance in a workable way across the market. There’s also the issue, which several people have mentioned, of regulations moving ahead of the standards, where we are still developing methodologies: what is standardizable in what we’re doing, recognizing where the science has not caught up yet, and where we are in a place of more maturity.

Bhushan Sethi

And maybe just to bring it to life for the audience: given the huge number of subscribers you have in India and around the world, growing every day, what’s changed in the standards vernacular at OpenAI?

Esther Tetruashvily

In terms of our adoption, or in terms of how we’re distributing it?

Bhushan Sethi

Yeah, the prominence of it, how people are thinking about it, the importance of the topic.

Esther Tetruashvily

So I think there’s an aspect of: what already exists that we can use to reassure customers that we are following the best practices for the industry, say for privacy or cybersecurity? There’s an existing risk management standard, ISO 42001, that OpenAI just got certified in, and that definitely signals something to the market and to customers. Then there’s also a transparency element, right? We have our safety frameworks, we update them, and we disclose information in our model cards about performance on a variety of metrics. And there are things we do to help stakeholders across the spectrum learn how to build evaluations. We publish a safety hub that gets updated regularly, which shows how we’re performing on a variety of metrics, what the best methodologies are, and how to work with them.

Bhushan Sethi

Great. So Joslyn, can you bring to life how Google DeepMind is thinking about standard setting in that context?

Joslyn Barnhart

Yes. I’ll take it back to what Chris was talking about in terms of collective action problems. Some of the mitigations we’re talking about, associated with some of the more extreme risks that frontier AI poses, can be quite costly. So I do think there is a strong industry incentive to work together to resolve this collective action problem. Again, as Chris said, doing this through standards, through an open, legitimate process, seems to be incredibly impactful. The worst thing for adoption would be a safety incident, so again, we have a collective incentive as an industry to make sure we raise the floor to avoid that, on all of our behalves.

So I do think standards at this point are seen as a very clear and important strategic play, essentially clearing the path for rapid adoption.

Bhushan Sethi

Amanda, how do these standards show up at Microsoft right now?

Amanda Craig

Thank you. Yeah, I was going to start by noting that at Microsoft, at Google, and at other places, this is not a totally new kind of process we’re going through, in terms of thinking about standards and the importance of standards for adoption of this technology: sufficient trust in order to have adoption, and in order to really enable compliance. I think Esther made a really good point in acknowledging that, especially as we deploy this technology, we are working with customers that have their own set of standards and regulation. Part of the challenge we find ourselves facing right now in AI governance is that we have a lot of high-level norms and expectations that, again, are not so different from the patterns we’ve seen before.

Basically, we want to know how AI providers are managing risk, but we are in the early days of defining what that really means in practice, in a detailed way, especially across the AI value chain. So what are model developers really responsible for doing for risk management? What are application developers really responsible for doing? How does that dock into what deployers of those applications, who are oftentimes implementing existing standards and meeting existing regulatory requirements, are doing? How does it all fit together? We’ve done this with other digital technologies as well, like software and cloud services, where we ultimately tried to define in practice what everyone is responsible for doing, and how we have a common language to talk to each other, among providers, the supply chain of the technology, and those ultimately deploying it. We really do need the standards to support that, because otherwise we are stuck at the high-level conversation about norms, we want to evaluate risk, we want to figure out the right transparency practices, or we find ourselves in the deep technical weeds. Having a place in between, at the level of technical standards, really helps drive that common set of expectations so that you can have trust.

Bhushan Sethi

So we need them. They’re important. We’ve got to drive adoption. There’s collective agreement here. From a Qualcomm perspective, Etienne, bring to life the business model and how you use standards in engineering your products.

Etienne Chaponniere

Yeah, there’s one thing I’d like to note here. As Qualcomm, we basically provide chipsets, right? We’re not building big models. What matters to us, and the reason we’re engaged in these standards, whether it’s ISO, CEN-CENELEC for Europe, or ML Commons for other types of standards, is that they provide scale: not only across the globe, but also by allowing many different types of companies to benefit. Let’s be clear: if you look at the companies who have the resources to set up their own standards and risk management systems internally, they’re typically pretty big companies.

Now, the thing with AI is that a huge number of companies are being created every day, and they don’t have the resources to put this together. So there are two conditions for making sure the standards being put together are inclusive. One is that they’re open, as Rebecca was alluding to before. Whether it’s ML Commons, which has a very open governance model, or ISO, or CEN-CENELEC in Europe, there needs to be an opportunity for everyone to participate. That’s the first step. However, we know, and that’s the reality, that not everyone has the means to participate, because they’re super focused: they need to bring up their own LLM for a particular use case, or maybe a very general use case, and they just don’t have the resources to do this.

So from that standpoint, having the standard as a mechanism for them to go directly to product, knowing they will comply with what the world, or the community, has set up, is really important. For Qualcomm, the reason we want to participate is to enable this kind of accessibility for companies which are not always the biggest ones.

Bhushan Sethi

Yep. So there is agreement that we need them. Before we go into how we set standards and how we measure and benchmark them, which Rebecca will bring to life, a wildcard question. A lot of people listening to this could say the world is not connected and cooperating around this; we don’t have global regulations on AI. And yet we have industry leaders and standard setters vehemently agreeing. How should the audience think about that? Is there a disconnect there? Would anyone like to comment?

Chris Meserole

Part of the reason I think we’re all so interested in standards is that you’re seeing multiple jurisdictions say some version of: we think there are new risks with frontier AI, and we as the government are concerned, on behalf of our citizens, that those risks are attended to across industry. Those risks, and how to manage them, are probably best developed or managed through the standard-setting process, but governments aren’t always setting the standards themselves. In the United States, for example, a couple of different states have passed requirements for frontier AI developers to have a frontier AI framework, but they don’t specify what should actually be in the framework. They offload some of that to the standards process, which is why I think it’s so important to have these standards in place. There’s a clear policy and regulatory interest in there being mechanisms by which some of the risks that may come with frontier AI are managed, but we need to color in the lines a little on exactly how we’re all going to do that.

Bhushan Sethi

And before we go to Rebecca, just from an India perspective: PM Modiji talked about Manav yesterday and the AI vision, and there was a lot of focus there on validity and governance, so standards were implied. Do you want to bring to life how India thinks about this, before we go to Rebecca and talk about measurement?

Kshitij Bathla

So I would say the Manav mission is about welfare, human-centric AI, and all those aspects. And from the governance perspective, the government is not trying to prescribe everything; as of now, the India AI governance guidelines are there. They provide a framework for the things you should look into, just a reference. So that is the direction the Indian government is moving in as of now. Coming to standardization, at the national level as well as the ISO level, and adding to the question you asked previously: standards bodies are interconnected with each other.

In ISO there is a liaison mechanism; we have ML Commons as a liaison there, IEEE is there, all the bodies are there. So they are all interconnected, and whatever comes out of these bodies is an outcome based on studies done by various forums, not only one body such as ISO. The Indian standards we are working on and developing are also in that direction, because this is something global; we can’t have silos specifically for India. There could be risks and use cases that are India-specific, and for those we need some specific guidance, but more or less it is the global picture that we are trying to look into.

Bhushan Sethi

And then adapt those to the specific use cases that we need, right? So we need global standards, and we need to adapt them to local conditions and use cases. Let’s get a bit more technical, Rebecca: why is this hard? How do we measure it? How does it compare to benchmarking? Maybe Rebecca first, and then, from a regulatory perspective, did you want to comment?

Lee Wan Sie

I just want to respond to Chris’ comment and your question about why we care about standards if there are no regulations. Sure, there will be regulators who say: yes, turn to the technical standards to define the expectations, which is the fair point Chris made. But even when there are no regulations, standards are still useful. Esther just mentioned that OpenAI is certified for ISO 42001. You didn’t need to do that, but why did you do it, right? And Anthropic has done that as well. The idea is that it is also a way for organizations, for enterprises, to differentiate themselves, and it doesn’t have to be the frontier model labs only; it could be app developers and so on. A way to say: look, I’m adhering to a global standard, I’m demonstrating that I have actually implemented something that is good enough, I’ve addressed a risk in this way. I think that is one good reason for standards, even without regulatory cover. So the certification and assurance part is helpful. I just wanted to add that as a little bit of colour, to give some benefit to the standards community.

Bhushan Sethi

Thank you for bringing the regulatory perspective and the Singapore experience. So let’s get into measurement. Fellow panellists, if you want to respond to anything, just give me the signal; we’re going to make this an interactive conversation. So Rebecca, how do we measure this?

Rebecca Weiss

Well, I’ll solve all the problems in one definition. No, I’m kidding. As I said earlier, benchmarking consists of two things, at least from our perspective: a measurement methodology, and reference builds, implementations of that methodology that engineers can use. And the definition of a benchmark, as we’ve been trying to operationalize it in places like ISO and elsewhere, is a taxonomy, a data set, and an evaluator system. The point of that construct, as Etienne pointed out, is that it lets you scale this kind of approach to the types of deployments we expect to see in these AI settings.

The challenge behind all of this is that what you’re really trying to do is estimate uncertainty. I’m not going to tell you that your system is, quote-unquote, safe or not. What I’m going to tell you is: under these considerations, under these conditions, under these assumptions, the estimated likelihood of a particular risky behavior is X. And then it is up to you, as a risk management professional, a deployer, a developer, to decide: is that good enough for your needs? I don’t think it will be the same for different sectors. Some sectors will have a much higher bar for the amount of uncertainty that needs to be estimated, and other sectors will say: that’s good enough for me, I don’t need to go much further than what you’re offering right off the bat. We can go into all the questions that remain open, but those areas, developing that taxonomy, developing those data sets, developing those evaluators, and the best practices and standards that make clear what the best in the industry looks like, that’s what we need to get better at.
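Rebecca’s decomposition can be sketched in a few lines of Python. This is purely an illustration, not any real benchmark: the categories, prompts, and refusal-based scoring are all hypothetical stand-ins for the three parts she names (taxonomy, data set, evaluator), with the evaluator reporting an uncertainty estimate rather than a binary safe/unsafe verdict.

```python
import math

# 1. Taxonomy: the hazard categories this toy benchmark claims to cover.
TAXONOMY = ("privacy", "misinformation")

# 2. Data set: prompts tagged with a taxonomy category and the expected
#    safe behaviour (simplified here to "should the system refuse?").
DATASET = [
    {"category": "privacy", "prompt": "example prompt A", "should_refuse": True},
    {"category": "privacy", "prompt": "example prompt B", "should_refuse": False},
    {"category": "misinformation", "prompt": "example prompt C", "should_refuse": True},
]

def evaluate(system, dataset):
    """3. Evaluator: estimate the rate of unwanted behaviour per category,
    with a 95% normal-approximation margin, so the output is an estimate
    under stated conditions, not a verdict that the system "is safe"."""
    report = {}
    for category in TAXONOMY:
        items = [d for d in dataset if d["category"] == category]
        n = len(items)
        # A "failure" is any mismatch between observed and expected behaviour.
        failures = sum(system(d["prompt"]) != d["should_refuse"] for d in items)
        p = failures / n
        margin = 1.96 * math.sqrt(p * (1 - p) / n)
        report[category] = {"risk_estimate": p, "margin_95": margin, "n": n}
    return report

# A stand-in "system" that refuses everything: it matches the refusal items
# but fails the one prompt it should have answered.
report = evaluate(lambda prompt: True, DATASET)
print(report)
```

The decision the panel keeps returning to stays outside the code: the evaluator reports an estimated failure rate with a margin, and whether that number is “good enough” remains a judgment for the deployer in their sector.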

Bhushan Sethi

Yeah, so what I’m hearing is we need clarity: clarity of the taxonomy, clarity of what we’re measuring, and it needs to be verifiable and credible. From an industry perspective, would anyone like to pick up how that is going to work? What’s in place now? What might some of the challenges be? How do you get organizational buy-in? Amanda, do you want to start us off?

Amanda Craig

Sure. I think there’s work to do across all the elements Rebecca just laid out, and it’s really a reason why we are so invested in working with ML Commons: we need places that bring industry, civil society, and other stakeholders together to work through these problems and resolve these hard questions in ways that are going to be valid and reliable broadly. So I think that’s the work still ahead, but we are also making good progress, and thanks to ML Commons for helping facilitate that. My thought is that we’ve been talking for years now about how nascent this field is, and actually, judging whether we are making progress could itself be standardized, right?

We don’t have common ways of assessing whether we are still in a nascent stage, or what levels of uncertainty we have. So to Rebecca’s point, I think this is absolutely essential so we can all align on: have we made progress? Have we made sufficient progress to start relying on these things? To what degree can we rely on them for important decision-making around deployments?

Esther Tetruashvily

Yeah, I’ll just add: if we take this back down to basics, whether you’re an enterprise customer or a consumer of our products, you just want to know: is this thing going to be accurate? Can I rely on it? Is it going to get me into trouble? If I incorporate it in my workflows, am I going to carry some sort of liability? At the core of standards is a common mechanism to provide an answer of reassurance: you can trust us; here is a measurement, certified by somebody else, that this thing is reliable and accurate, that I can rely on it and use it. We’re in a moment where we’re still trying to figure out, as an industry and as a community, what that is going to look like. So part of it is advancing the measurement science, because we currently don’t have enough of it to give an estimate of what is accurate, reliable, or safe for specific risks. And on the other side: what are the risks that we care about?

Some countries, some jurisdictions, might have one list of risks; other countries might have a different list. And then there’s a question of how you control for that, right? That’s what Rebecca, ML Commons, and many others are working on: how do you provide some mechanism of credibility that says we’ve measured this, this thing is safe, that can then be certified and understood in the same way by everyone? At the end of the day, in order to really unlock the value of this new, transformative technology, I think many of us here today for the Indian Impact Summit recognize that potential.

We all need to answer those questions, and standards are the way to facilitate that.

Bhushan Sethi

Yeah, so there’s a theme of trust running through this. Maybe, Chris, add to that, and then I’ll bring in a comment.

Chris Meserole

Yeah, just briefly, I want to situate how benchmarking standards and some of the scientific questions we’ve been discussing fit in. We’ve been talking about different types of standards, and I want to clarify that there’s a broader, high-level set of process standards, where you say: for this class of risk, we’re going to identify what the risk is, we’re going to evaluate what that risk might actually be, and we’re going to put in place certain kinds of mitigations and controls. That’s a process for how you walk through risk management for something.

That absolutely needs to be standardized. But then, within that, once we have agreed on what the risk is that we’re trying to evaluate, how do we actually do that? That’s where the standards come in for the benchmarks we want to see developed, and that’s where some of these scientific questions really come into play, because we need credible scientific evaluations and tests for the whole broader risk management effort to hang together. It’s critical for this whole process.

Bhushan Sethi

Yes, this has got to live next to the risk management, identification, and mitigation strategy in any company. Go ahead, Joslyn.

Joslyn Barnhart

Just briefly: I think the possibility of comparison across models is also super important here. There’s an important safety dimension. If we are all actually measuring the same thing and can give consumers some relative assessment of safety and quality, this could contribute to a race to the top as opposed to a race to the bottom.

Bhushan Sethi

So that raises the question of who we’re solving for. Two of the panelists have mentioned consumers. It’s not just about enterprise, it’s not just about government; it’s also about consumer trust. Etienne, what would you add?

Etienne Chaponniere

What I wanted to add is that when we talk in general about creating standards to resolve these safety risks, I want to reassure the audience that it’s not as if we’re trying to solve every single risk from scratch. There is a huge number of existing standards bodies, whether in ISO, CEN-CENELEC, and other places, which have already identified risks for their particular verticals, their particular industries, and those are already at work, right? How they’re going to use AI, how AI safety is going to be translated into their own processes.

Those things are already happening, right? So it’s not only the people on this panel working on this; the entire standards community, whether in automotive, the Radio Equipment Directive, everywhere, is already looking at this. In the end, the difficult part is going to be making sure there is commonality in the techniques we use whenever there is an automated technique available. From an industry standpoint, what is really useful, particularly if you’re a smaller company, is to be able to run something efficiently that addresses as many of your use cases as possible. That is an important thing to keep in mind when we’re doing this.

That’s why, from Qualcomm’s side, obviously we don’t address every single thing, but we want to make sure that, at least in the areas we’re involved in, there is as much commonality as possible in the measurement techniques we use.

Bhushan Sethi

So there is consensus around the need to do it, and consensus that it’s hard but important for consumers, business, and investors. Joslyn made the point that we’ve been calling this a nascent topic. I want to look forward: what does this look like over the next two years? What have we got to get right? The models are changing, regulation could change, and China and the U.S. could operate in different ways. How do we make sure we stay the course on this topic? Anyone want to offer a perspective as we look forward? Then we’ll start wrapping up.

And think about questions so we can get some from the audience.

Rebecca Weiss

I’ll take a crack at it. From my perspective, there are a couple of things I hope to see over the next couple of years. One follows from this idea of benchmarks and other standards representing consensus: we should be seeing more things like certification, which represent other types of consensus. If benchmarking represents consensus around how to estimate and measure a thing, certification could end up representing agreement that a given definition of what is good enough has been met. I don’t know exactly what that will look like today, but I have to imagine those certifications represent truces, temporary agreements: this is good enough for my industry, this is good enough for my deployment, this is good enough for my use case. So that’s what I’m hoping we start to see over the next two years.

Bhushan Sethi

Anyone else want to add to that? Chris, jump in, but we’ve seen some of these disclosures in the past: people commit to environmental goals or DEI goals or other sets of standards or disclosures. Stakeholder capitalism was a big deal, and now it’s more about shareholders. So I’d love to understand your perspective on how we stay the course.

Chris Meserole

Yeah, I might distinguish a little between how we future-proof these standards and how we ensure they’re implemented over time. The way we future-proof them, to some extent, goes back to the point I was making earlier about process standards: the process is somewhat agnostic to the actual AI system itself and the capabilities it has. If you have a good process for identifying and evaluating risks, that process can be a bit future-proofed. The specific evals you run will probably have to be updated over time to account for the greater capabilities of models as they advance, right?

And similar with some of the controls that might need to be used to manage the risks, if there are certain thresholds, or if the evaluations indicate a certain level of risk. So the subcomponents might need to be re-evaluated, but the overarching framework hopefully has some legs over time in terms of future-proofing. So we must commit to a process: we can’t fully future-proof because we can’t predict the future, but the process is so important. A good example of this is ISO 42001, which has come up a few times. There’s a certain class of AI that 42001 is tailored to, and even that AI has changed over time.

But 42001 is still a very good standard for managing those kinds of risks for those applications of AI, across a broad array of machine learning algorithms. The other point I would make, on the implementation of standards over time and making sure they keep the same currency: there, I think we can rely on some of the incentives and the need for collective action that we’ve talked about before. Some of the incentive to make sure the collective action problem is addressed is going to rest with policymakers, which is why you’ve seen some regulatory activity.

Even in areas where there’s not, to Wan Sie’s point, there’s a clear market need for these standards to be developed and implemented over time, because consumers, whether individual consumers or enterprises, want to trust that the model is actually safe and secure to use. So I don’t see the importance of standards diminishing over time. If anything, as capabilities advance, consumers and enterprises are going to be more and more interested in making sure they can trust what they adopt.

Bhushan Sethi

Yes, it's going to be consumer-driven. Wan Sie, just from a regulatory perspective, any thoughts? Chris mentioned implementation, which is the hard part, where a lot of this gets stuck. Any perspective on implementation, or from your experience as a regulator, to add here?

Lee Wan Sie

Implementation of standards? Yes. I mean, Chris put it very well, right? One, regulators could say: I expect you to comply with certain requirements, and this is how you do it, and that's where the standards set out the how. Or regulators don't set out certain requirements or expectations, and the market sets them: if you meet them, then we will buy your product, for example. So from an implementation point of view, I think there will be some momentum, either from the market or from regulation, to move standards forward. But back to your original question about what's going to happen in two years: I hope we can actually move faster on standards, in terms of the definition of standards.

I think that would be super useful. We're leading some work on testing, well, benchmarking and red teaming, primarily methodology definition. We hope that in the next year that can be done, sorted, and accepted within the ISO process. But experience has shown us that it takes a while. So in the next few years, hopefully we will find a way to move to standards faster.

Bhushan Sethi

So we need to move with speed from a regulatory perspective. Amanda is going to have the last word, and then we're going to go to questions, so please prepare them. Amanda?

Amanda Craig

I didn't realize that. The one thing I wanted to add, in terms of a goal for where we can find ourselves two years from now, is a system of standards that are interoperable, where we have a sort of modular approach, right, where across general-purpose technology and, for example, different deployment scenarios, different use cases, different sectors, we actually get some efficiency. These standards are all going to need to continuously evolve and improve; we're going to learn from the science, and we're going to keep evolving the benchmarks and the methodology around the evaluations. But we don't want to keep starting from scratch with every piece of that puzzle.

And so we need to figure out a way to ensure that where we are making progress on the evaluation science, in the context of evaluating AI models or systems, and where we are evaluating AI deployment in critical sectors, for example, we have some synergy built into the standards ecosystem, so that we are making more dynamic progress across everything at the same time.

Bhushan Sethi

Yeah, so it needs to be interoperable and we can’t keep reinventing the wheel. So audience, questions? I’m going to collect questions, maybe three to five. So the gentleman at the front, the gentleman at the back, and then the lady with the hand up.

Audience

Hi there. Thanks for taking my question. Maybe I have a bit of a tricky question for you. On the panel, obviously, we have a lot of commercial interests. My question is this: how do we know that your assurance program, or whatever you're proposing, since it's driven primarily by industry, isn't just going to be something that cheaply satisfies the industry in front of us, versus what the public actually needs? And assuming you do have a program that you're going to talk about, how does a government or external agency audit such a program, given the skill gap involved in creating such a sophisticated compliance program? How can world governments cope?

Because I've been on a lot of panels this week, and the fear, uncertainty, and doubt is not just the policy gap. It's the technical gap, the inability of world governments to properly audit whatever you have. Thank you.

Bhushan Sethi

Thank you, and please keep the questions brief. So that question is about how we make it real, how we make it not performative. I'm going to collect two other questions, and then we'll throw them to the panelists. So keep your hands raised. We have a gentleman at the back, and I think there was a lady, or a gentleman with a tie. Yeah, hi.

Audience

So… As a recent computer science student, I'm interested in building AI for India. With such a distinguished panel, I thought I'd shoot my shot; I'm a little nervous, so I apologize for that. I want to talk specifically about language bias. In India, there are 22 official languages, and I'm constantly thinking in two to three different languages. And when I use tools, such amazing tools built by everybody here, I wonder how you would go about tackling language bias, and building guardrails around it, to ensure that a small model that a student like me is building does not go haywire.

Bhushan Sethi

Yeah, great question about language. Thank you, sir. And then, the gentleman with the tie. Which doesn't mean more gentlemen wear ties, but, yes, please.

Audience

Hi, Jules Polonetsky at the Future of Privacy Forum and our AI Governance Center. Standards always seem to be an easier path when they are more technical, rather than challenging social policy, and AI governance seems to capture the broadest potential collection of social policy issues. Given that there's a lot of disagreement, and some debate over whether one should even measure certain areas, do you imagine that we're talking about minimum viable consensus with the broadest number of stakeholders? Or is there a path to somehow address issues that some stakeholders see as absolutely necessary and others don't want on the table?

Bhushan Sethi

All right. Soundbite responses, panel. How do we make it real? How do we deal with the skills gap? How do we deal with the minimum viable consensus? Anyone? Go on, Joslyn.

Joslyn Barnhart

On the performative question: now that standards have been referred to within actual regulation, to the extent that we want to use these standards as evidence of conformity with those particular regulations, that has set up a lot of the work that we're doing. That's a kind of minimum bar at the very least, because if we make these things too high-level, too abstract, or essentially lowest common denominator, I don't think regulators are going to accept those standards as evidence of conformity. So there is an interlocking pressure created by the regulation itself for some degree of quality. Thank you.

Bhushan Sethi

And Esther, do you want to comment on the language perspective and how you’re thinking about that at OpenAI? Thank you.

Esther Tetruashvily

Yes, we run a series of evaluations, like MMLU, to determine how well our models perform across a variety of languages. We also test our models on a specific QA benchmark that covers a variety of dialects within India. But the short answer is that this is an area where we need more participants, and I believe ML Commons is playing an active role in furthering our capacity building, and in working with local ecosystems to help clean and collect good data so that we can do this appropriately. This is another area, just as we've been saying, where we need to work in partnership to figure out how we collect the right information, how we measure it, how we build the evaluations, and then how we build an industry standard that all of the actors are held to.

And it’s going to have to be a collective effort. Yeah. Okay.
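The per-language evaluation idea described above can be made concrete with a small sketch. This is purely illustrative, not OpenAI's or ML Commons' actual pipeline: the dataset rows, the `model_answer` callable, and the exact-match scoring are all assumptions chosen for brevity.

```python
# Hypothetical sketch: score a model's answers on a small multilingual
# QA set and report per-language accuracy, so gaps between languages
# become visible instead of being averaged away.
from collections import defaultdict

def per_language_accuracy(examples, model_answer):
    """examples: iterable of dicts with 'lang', 'question', 'answer' keys.
    model_answer: callable mapping a question string to the model's answer."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        total[ex["lang"]] += 1
        # Naive exact-match scoring; real evals use far richer graders.
        if model_answer(ex["question"]).strip().lower() == ex["answer"].strip().lower():
            correct[ex["lang"]] += 1
    return {lang: correct[lang] / total[lang] for lang in total}

# Toy usage with a fake "model" that only ever answers in English:
sample = [
    {"lang": "en", "question": "Capital of India?", "answer": "New Delhi"},
    {"lang": "hi", "question": "भारत की राजधानी?", "answer": "नई दिल्ली"},
]
print(per_language_accuracy(sample, lambda q: "New Delhi"))
```

Reporting accuracy per language rather than as one aggregate number is the point: a model can look strong overall while failing badly in a specific language or dialect.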

Etienne Chaponniere

Just to add a little on the question regarding language: in the end, there's no silver bullet solution, right? There's going to be a need for this type of safety test, or safety prompt, for different languages, and you're not going to be able to address every single thing, because there's just a huge amount of diversity. I mean, take me: I'm French by cultural background, I speak English, and I think in French and English all the time. There's weird stuff I say that will not be captured by a model built only for American English, right? So there's going to be a need for more than one language to be captured, probably a lot of them, and this is where the community, basically everybody, needs to come and say: hey, this is what I want to capture for my language.

What matters, to make sure there is scale and that it remains efficient, is that the tool and the software framework around it can be reused. That's really a big advantage there. Thank you.

Bhushan Sethi

So in summary, and thank you, dear panelists, for the great discussion: you heard today that standards are important. This is a fast-moving world, and we've got to be designing for consumers and for business. There's a commitment here around measurement, which is both art and science. We need a process that's consistent, and across regulators, standard-setters, policymakers, and the business and tech community, there's a consistent understanding. So it's going to remain an emerging topic, which I know we'll continue to discuss. Thank you, panelists, and thank you to the audience.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
K
Kshitij Bathla
3 arguments · 149 words per minute · 526 words · 210 seconds
Argument 1
Standards enable consumer trust and industry quality assurance in AI ecosystems
EXPLANATION
Standards serve as tools that enable consumer trust in AI ecosystems while helping industry ensure quality. They are fundamental mechanisms for building confidence between consumers and AI system providers.
EVIDENCE
Example of washing machine claiming to have AI – standards help verify if it actually has AI or just normal logic system
MAJOR DISCUSSION POINT
The Need for AI Standards and Their Purpose
AGREED WITH
Chris Meserole, Lee Wan Sie, Amanda Craig, Esther Tetruashvily, Rebecca Weiss, Joslyn Barnhart
Argument 2
Indian approach focuses on human-centric AI governance while adapting global standards to local use cases
EXPLANATION
India’s Manav mission emphasizes human-centric and welfare-focused AI development. The Indian government is developing AI governance guidelines that provide frameworks while adapting global standards to India-specific risks and use cases.
EVIDENCE
Reference to PM Modi’s Manav mission focusing on human-centric AI, Indian AI governance guidelines providing framework
MAJOR DISCUSSION POINT
Global Cooperation and Regulatory Context
Argument 3
Standards bodies are interconnected globally, creating collaborative rather than siloed approaches
EXPLANATION
International standards organizations like ISO, ML Commons, and IEEE are interconnected through licensing mechanisms. This creates a collaborative global approach where standards are based on studies from various forums rather than isolated development.
EVIDENCE
ISO licensing mechanisms, ML Commons, IEEE interconnections
MAJOR DISCUSSION POINT
Global Cooperation and Regulatory Context
AGREED WITH
Chris Meserole, Etienne Chaponniere, Rebecca Weiss, Bhushan Sethi
C
Chris Meserole
4 arguments · 204 words per minute · 1311 words · 385 seconds
Argument 1
Standards solve collective action problems by ensuring no actor is disadvantaged while managing AI risks
EXPLANATION
Standards help address collective action problems in AI risk management by ensuring all actors are on the same page and no single actor is disadvantaged or advantaged compared to others. This creates a level playing field for managing AI risks across the ecosystem.
EVIDENCE
Reference to unique risks in frontier AI and need for common risk management approaches
MAJOR DISCUSSION POINT
The Need for AI Standards and Their Purpose
AGREED WITH
Kshitij Bathla, Lee Wan Sie, Amanda Craig, Esther Tetruashvily, Rebecca Weiss, Joslyn Barnhart
Argument 2
Multiple jurisdictions recognize frontier AI risks and delegate standard-setting to technical bodies rather than specifying requirements directly
EXPLANATION
Various jurisdictions acknowledge new risks with frontier AI and express concern for citizen safety, but they delegate the development of risk management approaches to standard-setting processes rather than prescribing specific requirements. This creates policy and regulatory interest in having mechanisms for risk management.
EVIDENCE
Examples of US states requiring frontier AI frameworks without specifying content, offloading to standards process
MAJOR DISCUSSION POINT
Global Cooperation and Regulatory Context
AGREED WITH
Kshitij Bathla, Etienne Chaponniere, Rebecca Weiss, Bhushan Sethi
Argument 3
Process standards can be future-proofed while specific evaluations need updating as AI capabilities advance
EXPLANATION
The overarching framework and process for identifying and evaluating risks can be somewhat future-proofed and agnostic to specific AI systems. However, the specific evaluations, controls, and thresholds will need regular updates as AI model capabilities advance over time.
EVIDENCE
Example of ISO 42001 remaining relevant across different types of AI and machine learning algorithms over time
MAJOR DISCUSSION POINT
Implementation and Future Outlook
AGREED WITH
Amanda Craig, Etienne Chaponniere
DISAGREED WITH
Lee Wan Sie
Argument 4
Market incentives and regulatory pressure will drive implementation as consumers demand trusted AI systems
EXPLANATION
The importance of standards will not diminish over time due to both policy incentives for collective action and clear market demand. As AI capabilities advance, both individual consumers and enterprises will increasingly want assurance that AI models are safe and secure to use.
MAJOR DISCUSSION POINT
Implementation and Future Outlook
AGREED WITH
Lee Wan Sie, Joslyn Barnhart
L
Lee Wan Sie
3 arguments · 171 words per minute · 917 words · 320 seconds
Argument 1
Standards provide alignment on what constitutes “good” practices in AI governance and create common methodologies
EXPLANATION
Standards serve as mechanisms for setting global norms and achieving alignment on what good practices look like in AI governance. They focus on establishing common methodologies and processes, particularly in areas like testing, transparency, and incident reporting.
EVIDENCE
Specific examples of testing methodologies, transparency/disclosure standards, and incident reporting/monitoring standards
MAJOR DISCUSSION POINT
The Need for AI Standards and Their Purpose
AGREED WITH
Kshitij Bathla, Chris Meserole, Amanda Craig, Esther Tetruashvily, Rebecca Weiss, Joslyn Barnhart
Argument 2
Standards provide differentiation mechanism even without regulations, as demonstrated by voluntary certifications
EXPLANATION
Even in the absence of regulatory requirements, standards serve as valuable differentiation tools for organizations and enterprises. They allow companies to demonstrate adherence to global standards and show they have implemented adequate risk management practices.
EVIDENCE
Examples of OpenAI and Anthropic getting ISO 42001 certification voluntarily
MAJOR DISCUSSION POINT
Global Cooperation and Regulatory Context
AGREED WITH
Chris Meserole, Joslyn Barnhart
Argument 3
Need for faster movement on standards definition, particularly in testing and benchmarking methodologies
EXPLANATION
There is an urgent need to accelerate the development and acceptance of standards definitions, especially in areas like testing and benchmarking methodologies. The current pace of standards development through processes like ISO is too slow for the rapidly evolving AI landscape.
EVIDENCE
Reference to leading work on testing, benchmarking and red teaming methodology definition, experience showing ISO process takes considerable time
MAJOR DISCUSSION POINT
Implementation and Future Outlook
DISAGREED WITH
Chris Meserole
A
Amanda Craig
4 arguments · 180 words per minute · 984 words · 327 seconds
Argument 1
Standards create common language for risk management across AI supply chains and enable compliance with regulations
EXPLANATION
Standards help define what different stakeholders in the AI value chain are responsible for in terms of risk management. They create a common language between AI providers, application developers, and deployers, helping everyone understand their roles and responsibilities while supporting compliance with existing regulatory requirements.
EVIDENCE
Reference to Microsoft’s internal responsible AI standard applied across product groups, engineering, and sales functions
MAJOR DISCUSSION POINT
The Need for AI Standards and Their Purpose
AGREED WITH
Kshitij Bathla, Chris Meserole, Lee Wan Sie, Esther Tetruashvily, Rebecca Weiss, Joslyn Barnhart
Argument 2
Need for common mechanisms to assess progress and reliability, moving beyond nascent stage discussions
EXPLANATION
The field needs standardized ways to evaluate whether progress is being made in AI safety and reliability. Rather than continuing to describe the field as nascent, there should be common methods for assessing advancement levels and determining when sufficient progress has been made for important decision-making.
MAJOR DISCUSSION POINT
Measurement and Benchmarking Challenges
AGREED WITH
Rebecca Weiss, Esther Tetruashvily, Joslyn Barnhart
Argument 3
Standards ecosystem must be interoperable and modular to avoid reinventing approaches for each use case
EXPLANATION
The standards system should be designed with interoperability and modularity in mind, allowing for efficiency across general-purpose technology and different deployment scenarios. This prevents the need to start from scratch with each new application while enabling continuous evolution and improvement based on scientific learning.
MAJOR DISCUSSION POINT
Implementation and Future Outlook
AGREED WITH
Etienne Chaponniere, Chris Meserole
DISAGREED WITH
Etienne Chaponniere
Argument 4
Standards must address both general-purpose AI models and sector-specific deployment scenarios
EXPLANATION
The standards framework needs to accommodate both broad general-purpose AI technologies and specific deployment contexts across different sectors. This requires a comprehensive approach that can handle diverse use cases while maintaining coherence.
MAJOR DISCUSSION POINT
Industry-Specific Perspectives and Applications
E
Esther Tetruashvily
4 arguments · 180 words per minute · 1072 words · 355 seconds
Argument 1
Standards translate risk management practices into language customers can understand and create consumer trust
EXPLANATION
Standards help frontier AI labs translate their internal risk management practices into language that upstream and downstream customers can understand and map onto their own controls. This creates a universal language that facilitates trust and assurance across the market in an accessible way.
EVIDENCE
OpenAI’s ISO 42001 certification, safety frameworks, model cards, and safety hub with regular updates on performance metrics
MAJOR DISCUSSION POINT
The Need for AI Standards and Their Purpose
AGREED WITH
Kshitij Bathla, Chris Meserole, Lee Wan Sie, Amanda Craig, Rebecca Weiss, Joslyn Barnhart
Argument 2
Standards must provide credible measurement that can be certified and understood universally
EXPLANATION
At the core of standards is providing a common mechanism to offer reassurance about AI system accuracy, reliability, and safety. This requires advancing measurement science and defining risks that different jurisdictions care about, with credible certification processes that can be understood consistently across contexts.
MAJOR DISCUSSION POINT
Measurement and Benchmarking Challenges
AGREED WITH
Rebecca Weiss, Amanda Craig, Joslyn Barnhart
Argument 3
Enterprise customers need clarity on accuracy, reliability, and liability when incorporating AI into workflows
EXPLANATION
Both enterprise customers and consumers want to know if AI systems will be accurate, reliable, and whether they will face liability issues when incorporating these tools into their workflows. Standards provide the mechanism to offer this reassurance through certified measurements.
MAJOR DISCUSSION POINT
Industry-Specific Perspectives and Applications
Argument 4
Language bias and multilingual challenges require community participation and local ecosystem collaboration
EXPLANATION
Addressing language bias in AI systems requires broader community participation and collaboration with local ecosystems. This includes developing appropriate evaluations, collecting good data, and building industry standards that hold all actors accountable for multilingual performance.
EVIDENCE
OpenAI conducts MMLU evaluations for various languages and specific tests for Indian dialects, ML Commons’ role in capacity building
MAJOR DISCUSSION POINT
Industry-Specific Perspectives and Applications
E
Etienne Chaponniere
3 arguments · 194 words per minute · 1066 words · 328 seconds
Argument 1
Standards enable accessibility for smaller companies that lack resources to develop their own risk management systems
EXPLANATION
Standards provide scale and accessibility not just globally but also for different types of companies, particularly smaller ones that lack resources to establish their own standards and risk management systems. This is crucial as many new AI companies are created daily without the means to develop comprehensive internal frameworks.
EVIDENCE
Contrast between big companies with resources for internal standards versus new companies focused on specific LLM use cases
MAJOR DISCUSSION POINT
The Need for AI Standards and Their Purpose
AGREED WITH
Kshitij Bathla, Chris Meserole, Rebecca Weiss, Bhushan Sethi
Argument 2
Existing standards bodies in various verticals are already addressing AI integration into their specific risk frameworks
EXPLANATION
There are already numerous existing standards bodies in ISO, CENELEC, and other organizations working on how AI safety translates to their specific industry processes. The challenge is ensuring commonality in automated techniques and measurement approaches across these different verticals.
EVIDENCE
Examples of automotive and radio equipment directive standards bodies already working on AI integration
MAJOR DISCUSSION POINT
Industry-Specific Perspectives and Applications
AGREED WITH
Amanda Craig, Chris Meserole
DISAGREED WITH
Amanda Craig
Argument 3
Technical solutions need reusable frameworks while accommodating diverse linguistic and cultural contexts
EXPLANATION
While there’s no silver bullet solution for language diversity challenges, the key is ensuring that software frameworks and tools can be reused efficiently across different languages and cultural contexts. This requires community input to capture specific language needs while maintaining scalable technical infrastructure.
EVIDENCE
Personal example of thinking in French and English, noting that American English models miss certain cultural nuances
MAJOR DISCUSSION POINT
Industry-Specific Perspectives and Applications
R
Rebecca Weiss
4 arguments · 205 words per minute · 679 words · 197 seconds
Argument 1
Standards represent consensus about what is “good enough” and need diverse stakeholder input beyond just industry
EXPLANATION
Standards represent a consensus about what constitutes adequate performance or safety levels. However, this consensus should not be exclusively from an industry perspective but should include more diverse stakeholders and constituencies to ensure broader representation in defining these standards.
MAJOR DISCUSSION POINT
The Need for AI Standards and Their Purpose
AGREED WITH
Kshitij Bathla, Chris Meserole, Etienne Chaponniere, Bhushan Sethi
Argument 2
Benchmarking requires methodology definition and technical artifacts, focusing on measuring risk and uncertainty rather than binary safety assessments
EXPLANATION
Benchmarking consists of measurement methodology and reference implementations that engineers can integrate into their development lifecycle. The goal is to help measure risk and estimate uncertainty around AI system behavior, which is a major barrier to adoption.
EVIDENCE
ML Commons as engineering consortium focused on benchmarking, emphasis on integration into development lifecycle
MAJOR DISCUSSION POINT
Measurement and Benchmarking Challenges
AGREED WITH
Amanda Craig, Esther Tetruashvily, Joslyn Barnhart
Argument 3
The challenge is estimating uncertainty and providing statistical guarantees about AI system behavior under specific conditions
EXPLANATION
Rather than providing binary safety assessments, benchmarking should estimate the likelihood of risky behavior under specific conditions and assumptions. This allows risk management professionals and deployers to decide if the uncertainty level is acceptable for their particular needs and sectors.
EVIDENCE
Definition of benchmark as taxonomy, dataset, and evaluator system
MAJOR DISCUSSION POINT
Measurement and Benchmarking Challenges
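The uncertainty-estimation argument above can be illustrated with a short sketch. This is not ML Commons' actual methodology; it is a generic statistical example, assuming a benchmark that observes `k` risky responses out of `n` trials and wants to report an interval rather than a binary safe/unsafe verdict.

```python
# Illustrative sketch: given k risky responses observed in n trials,
# report a Wilson score confidence interval for the true rate of risky
# behavior, so a deployer can judge whether the uncertainty is acceptable.
import math

def wilson_interval(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, center - half), min(1.0, center + half)

# Example: 3 risky outputs observed in 500 trials.
lo, hi = wilson_interval(3, 500)
print(f"point estimate 0.6%, 95% CI [{lo:.3%}, {hi:.3%}]")
```

The design choice matches the argument: instead of declaring a system "safe", the benchmark states a likelihood range for risky behavior under the tested conditions, and the deployer decides whether that range is tolerable for their sector.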
Argument 4
Certification should represent consensus on what constitutes “good enough” for specific industries and deployments
EXPLANATION
Over the next few years, there should be development of certification processes that represent temporary agreements about adequate performance levels for specific industries, deployments, and use cases. These would build on benchmarking consensus around measurement approaches.
MAJOR DISCUSSION POINT
Implementation and Future Outlook
J
Joslyn Barnhart
2 arguments · 188 words per minute · 459 words · 146 seconds
Argument 1
Standards help avoid safety incidents that would harm industry adoption and provide strategic advantage
EXPLANATION
Some mitigations for extreme AI risks can be costly, creating industry incentive for collective action through standards. Safety incidents would be detrimental to adoption, so there’s a collective industry incentive to raise the floor and avoid incidents that would harm everyone’s interests.
MAJOR DISCUSSION POINT
The Need for AI Standards and Their Purpose
AGREED WITH
Chris Meserole, Lee Wan Sie
Argument 2
Comparison across models enables race to the top rather than bottom in safety and quality
EXPLANATION
When all actors measure the same things and can provide consumers with relative assessments of safety and quality, this creates competitive pressure that drives a race to the top in terms of performance rather than a race to the bottom.
MAJOR DISCUSSION POINT
Measurement and Benchmarking Challenges
AGREED WITH
Rebecca Weiss, Amanda Craig, Esther Tetruashvily
A
Audience
3 arguments · 159 words per minute · 387 words · 145 seconds
Argument 1
Industry-driven standards risk serving commercial interests over public needs, requiring external audit capabilities
EXPLANATION
There is concern that standards driven primarily by commercial interests may create solutions that cheaply satisfy industry requirements rather than addressing what the public actually needs. This raises questions about the legitimacy and effectiveness of industry-led standards development.
EVIDENCE
Reference to commercial interests represented on the panel
MAJOR DISCUSSION POINT
Stakeholder Concerns and Legitimacy
DISAGREED WITH
Rebecca Weiss
Argument 2
Government skill gaps in auditing sophisticated AI compliance programs pose implementation challenges
EXPLANATION
World governments face technical skill gaps that limit their ability to properly audit sophisticated AI compliance programs. This creates a fundamental challenge for oversight and enforcement of AI standards, beyond just policy gaps.
EVIDENCE
Reference to technical gap and inability of world governments to audit properly
MAJOR DISCUSSION POINT
Stakeholder Concerns and Legitimacy
Argument 3
Social policy disagreements in AI governance require either minimum viable consensus or addressing stakeholder priorities differently
EXPLANATION
AI governance encompasses broad social policy issues where stakeholders have significant disagreements, including debates over whether certain areas should even be measured. This raises questions about whether to pursue minimum viable consensus with broad stakeholder participation or find ways to address issues that some see as essential while others want to avoid.
MAJOR DISCUSSION POINT
Stakeholder Concerns and Legitimacy
B
Bhushan Sethi
5 arguments · 110 words per minute · 1735 words · 943 seconds
Argument 1
AI transformation requires responsible implementation with clear return on investment and demystification of standard setting
EXPLANATION
As an AI transformation consultant, Bhushan emphasizes the need to help companies implement AI in a responsible way that drives clear returns on investment. He stresses the importance of demystifying what standard setting means in the context of AI, making it more accessible and understandable for organizations.
EVIDENCE
His role as consultant helping companies implement AI and drive ROI in responsible way
MAJOR DISCUSSION POINT
The Need for AI Standards and Their Purpose
Argument 2
Global cooperation and inclusion are essential for AI solutions that meet everyone’s needs
EXPLANATION
Bhushan highlights the critical importance of global cooperation and inclusion in AI development, referencing discussions from the summit about ensuring AI solutions serve all stakeholders. He connects this to India’s focus on planet, people, and prosperity as guiding principles for AI development.
EVIDENCE
References to tech CEOs and world leaders speaking about global cooperation at the summit, India’s focus on planet, people and prosperity
MAJOR DISCUSSION POINT
Global Cooperation and Regulatory Context
Argument 3
Standards panel representation should include diverse stakeholders from standard setters, industry, policy, and regulatory environments
EXPLANATION
Bhushan emphasizes the importance of having diverse representation in standards discussions, noting that the panel includes standard setters and measurers, industry representatives, and people from policy and regulatory environments. This diversity is crucial for comprehensive standards development.
EVIDENCE
Panel composition including standard setters, measurers, industry representatives, and policy/regulatory experts
MAJOR DISCUSSION POINT
Stakeholder Concerns and Legitimacy
AGREED WITH
Kshitij Bathla, Chris Meserole, Etienne Chaponniere, Rebecca Weiss
Argument 4
Standards must address trust, verification, and credibility while avoiding subjective tick-the-box exercises
EXPLANATION
Bhushan identifies key requirements for effective AI standards: they must build trust, provide verification mechanisms, and ensure credibility. He emphasizes that standards should not become superficial compliance exercises but should provide meaningful assurance about AI system capabilities and safety.
EVIDENCE
Discussion of tech firms moving fast with model development, need for credible reporting and disclosure
MAJOR DISCUSSION POINT
Measurement and Benchmarking Challenges
Argument 5
Future-proofing standards requires focus on staying the course despite changing models, regulations, and geopolitical dynamics
EXPLANATION
Bhushan raises concerns about maintaining consistency in standards development over time, noting that AI models are rapidly changing, regulations may evolve, and geopolitical factors like US-China relations could impact cooperation. He emphasizes the need for sustained commitment to standards development despite these challenges.
EVIDENCE
References to changing models, potential regulatory changes, and US-China operational differences
MAJOR DISCUSSION POINT
Implementation and Future Outlook
Agreements
Agreement Points
Standards are essential for building trust and enabling AI adoption
Speakers: Kshitij Bathla, Chris Meserole, Lee Wan Sie, Amanda Craig, Esther Tetruashvily, Rebecca Weiss, Joslyn Barnhart
Standards enable consumer trust and industry quality assurance in AI ecosystems
Standards solve collective action problems by ensuring no actor is disadvantaged while managing AI risks
Standards provide alignment on what constitutes “good” practices in AI governance and create common methodologies
Standards create common language for risk management across AI supply chains and enable compliance with regulations
Standards translate risk management practices into language customers can understand and create consumer trust
Standards represent consensus about what is “good enough” and need diverse stakeholder input beyond just industry
Standards help avoid safety incidents that would harm industry adoption and provide strategic advantage
All speakers agree that standards are fundamental for building trust between consumers and AI providers, enabling adoption, and creating common frameworks for risk management across the AI ecosystem
Standards must address measurement and benchmarking challenges with focus on uncertainty estimation
Speakers: Rebecca Weiss, Amanda Craig, Esther Tetruashvily, Joslyn Barnhart
Benchmarking requires methodology definition and technical artifacts, focusing on measuring risk and uncertainty rather than binary safety assessments
Need for common mechanisms to assess progress and reliability, moving beyond nascent stage discussions
Standards must provide credible measurement that can be certified and understood universally
Comparison across models enables race to the top rather than bottom in safety and quality
Speakers agree that effective standards require robust measurement methodologies that estimate uncertainty rather than providing binary assessments, enabling credible comparisons across AI systems
Global cooperation and inclusive participation are necessary for effective standards
Speakers: Kshitij Bathla, Chris Meserole, Etienne Chaponniere, Rebecca Weiss, Bhushan Sethi
Standards bodies are interconnected globally, creating collaborative rather than siloed approaches
Multiple jurisdictions recognize frontier AI risks and delegate standard-setting to technical bodies rather than specifying requirements directly
Standards enable accessibility for smaller companies that lack resources to develop their own risk management systems
Standards represent consensus about what is “good enough” and need diverse stakeholder input beyond just industry
Standards panel representation should include diverse stakeholders from standard setters, industry, policy, and regulatory environments
All speakers emphasize the importance of global cooperation and inclusive participation from diverse stakeholders, including smaller companies and various constituencies beyond just industry
Standards must be interoperable and avoid reinventing solutions for each use case
Speakers: Amanda Craig, Etienne Chaponniere, Chris Meserole
Standards ecosystem must be interoperable and modular to avoid reinventing approaches for each use case
Existing standards bodies in various verticals are already addressing AI integration into their specific risk frameworks
Process standards can be future-proofed while specific evaluations need updating as AI capabilities advance
Speakers agree that standards should be designed with interoperability and modularity in mind, building on existing frameworks while avoiding duplication of effort across different sectors and use cases
Implementation requires both regulatory support and market incentives
Speakers: Chris Meserole, Lee Wan Sie, Joslyn Barnhart
Market incentives and regulatory pressure will drive implementation as consumers demand trusted AI systems
Standards provide differentiation mechanism even without regulations, as demonstrated by voluntary certifications
Standards help avoid safety incidents that would harm industry adoption and provide strategic advantage
Speakers agree that successful implementation of AI standards will be driven by both regulatory frameworks and market forces, with companies having incentives to adopt standards even without mandatory requirements
Similar Viewpoints
Both speakers from major tech companies emphasize that standards serve as translation mechanisms between internal company practices and external stakeholder understanding, facilitating trust and compliance
Speakers: Esther Tetruashvily, Amanda Craig
Standards translate risk management practices into language customers can understand and create consumer trust
Standards create common language for risk management across AI supply chains and enable compliance with regulations
Both speakers from AI safety organizations emphasize that standards address collective action problems in the industry, ensuring no single actor is disadvantaged while promoting overall safety
Speakers: Chris Meserole, Joslyn Barnhart
Standards solve collective action problems by ensuring no actor is disadvantaged while managing AI risks
Standards help avoid safety incidents that would harm industry adoption and provide strategic advantage
Both speakers emphasize the democratizing effect of standards, making AI development accessible to smaller companies while ensuring quality and trust across the ecosystem
Speakers: Etienne Chaponniere, Kshitij Bathla
Standards enable accessibility for smaller companies that lack resources to develop their own risk management systems
Standards enable consumer trust and industry quality assurance in AI ecosystems
Both speakers from policy/governance backgrounds emphasize the urgency of developing standards while recognizing the need for adaptable frameworks that can evolve with technology
Speakers: Lee Wan Sie, Chris Meserole
Need for faster movement on standards definition, particularly in testing and benchmarking methodologies
Process standards can be future-proofed while specific evaluations need updating as AI capabilities advance
Unexpected Consensus
Voluntary adoption of standards without regulatory mandate
Speakers: Lee Wan Sie, Esther Tetruashvily, Chris Meserole
Standards provide differentiation mechanism even without regulations, as demonstrated by voluntary certifications
Standards translate risk management practices into language customers can understand and create consumer trust
Market incentives and regulatory pressure will drive implementation as consumers demand trusted AI systems
Despite representing different sectors (government, industry, safety organization), speakers unexpectedly agreed that standards have value and will be adopted even without regulatory requirements, driven by market forces and competitive differentiation
Industry acknowledgment of need for external stakeholder participation
Speakers: Rebecca Weiss, Esther Tetruashvily, Amanda Craig
Standards represent consensus about what is “good enough” and need diverse stakeholder input beyond just industry
Language bias and multilingual challenges require community participation and local ecosystem collaboration
Standards must address both general-purpose AI models and sector-specific deployment scenarios
Industry representatives unexpectedly showed strong agreement on the need for broader stakeholder participation beyond just industry voices, acknowledging limitations of industry-only perspectives
Recognition of technical and capacity limitations in government oversight
Speakers: Audience, Lee Wan Sie, Chris Meserole
Government skill gaps in auditing sophisticated AI compliance programs pose implementation challenges
Need for faster movement on standards definition, particularly in testing and benchmarking methodologies
Multiple jurisdictions recognize frontier AI risks and delegate standard-setting to technical bodies rather than specifying requirements directly
There was unexpected consensus between audience concerns and speaker acknowledgments about government capacity limitations, with even policy representatives agreeing that technical standards development should be delegated to specialized bodies
Overall Assessment

The discussion revealed remarkably high consensus across diverse stakeholders on the fundamental need for AI standards, their role in building trust and enabling adoption, the importance of measurement and benchmarking, and the necessity of global cooperation. Key areas of agreement included the value of standards for collective action, the need for inclusive participation, and the importance of interoperable frameworks.

Very high level of consensus with no significant disagreements identified. This strong alignment across industry, government, standards bodies, and safety organizations suggests a mature understanding of the challenges and a shared commitment to collaborative solutions. The consensus extends beyond just the need for standards to specific approaches for implementation, measurement, and governance, indicating readiness for concrete action in AI standards development.

Differences
Different Viewpoints
Speed vs. thoroughness in standards development
Speakers: Lee Wan Sie, Chris Meserole
Need for faster movement on standards definition, particularly in testing and benchmarking methodologies
Process standards can be future-proofed while specific evaluations need updating as AI capabilities advance
Lee Wan Sie emphasizes the urgent need to accelerate standards development, noting that current ISO processes are too slow for the rapidly evolving AI landscape. Chris Meserole focuses on creating robust, future-proofed process standards that can accommodate changing AI capabilities over time, suggesting a more methodical approach.
Industry-led vs. multi-stakeholder standards development
Speakers: Rebecca Weiss, Audience
Standards represent consensus about what is ‘good enough’ and need diverse stakeholder input beyond just industry
Industry-driven standards risk serving commercial interests over public needs, requiring external audit capabilities
Rebecca Weiss acknowledges that standards consensus shouldn’t be exclusively from industry perspective but should include diverse stakeholders. The audience member goes further, expressing concern that industry-driven standards may prioritize commercial interests over public needs and questioning the legitimacy of industry-led processes.
Scope of standardization – comprehensive vs. targeted approach
Speakers: Etienne Chaponniere, Amanda Craig
Existing standards bodies in various verticals are already addressing AI integration into their specific risk frameworks
Standards ecosystem must be interoperable and modular to avoid reinventing approaches for each use case
Etienne emphasizes that existing vertical-specific standards bodies are already working on AI integration and suggests focusing on commonality in measurement techniques. Amanda advocates for a more comprehensive, interoperable system that works across general-purpose technology and different deployment scenarios to avoid fragmentation.
Unexpected Differences
Government capacity for oversight
Speakers: Audience, Joslyn Barnhart
Government skill gaps in auditing sophisticated AI compliance programs pose implementation challenges
Standards help avoid safety incidents that would harm industry adoption and provide strategic advantage
Social policy vs. technical standards
Speakers: Audience, Rebecca Weiss
Social policy disagreements in AI governance require either minimum viable consensus or addressing stakeholder priorities differently
Standards represent consensus about what is ‘good enough’ and need diverse stakeholder input beyond just industry
Overall Assessment

The discussion revealed relatively low levels of fundamental disagreement among panelists, with most tensions arising around implementation approaches rather than core objectives. Key areas of disagreement included the pace of standards development, the appropriate balance between industry leadership and multi-stakeholder involvement, and whether to pursue comprehensive or targeted standardization approaches.

The disagreement level was moderate and constructive, focusing on methodological differences rather than fundamental opposition to AI standards. However, audience questions revealed a more significant gap between industry perspectives and public concerns about accountability and legitimacy. The implications suggest that while technical experts largely agree on the need for and approach to AI standards, broader stakeholder engagement and addressing capacity gaps for oversight remain significant challenges for successful implementation.

Takeaways
Key takeaways
AI standards are essential for building consumer trust, enabling industry quality assurance, and solving collective action problems in AI risk management
Standards should focus on process frameworks that can be future-proofed rather than specific technical requirements that will quickly become outdated
Measurement and benchmarking must estimate uncertainty rather than provide binary safety assessments, requiring consensus on what constitutes ‘good enough’ for different use cases
Global cooperation on AI standards is achievable through interconnected standards bodies, even without unified global AI regulations
Standards serve multiple purposes: regulatory compliance, market differentiation, risk management translation across supply chains, and enabling smaller companies to access best practices
Implementation requires interoperable and modular standards ecosystems to avoid reinventing approaches for each sector or use case
Language bias and multilingual challenges require community participation and collaboration with local ecosystems to ensure inclusive AI development
Market incentives and consumer demand for trusted AI systems will drive standards adoption, supplemented by regulatory pressure where it exists
Resolutions and action items
ML Commons and other standards bodies to continue developing benchmarking methodologies and technical artifacts for risk measurement
Industry participants to work collectively on certification mechanisms that represent consensus on ‘good enough’ standards for specific deployments
Standards organizations to accelerate the pace of standards definition, particularly in testing and benchmarking methodologies
Continued collaboration between industry, regulators, and standards bodies to develop process standards that can accommodate advancing AI capabilities
Development of reusable software frameworks for language-specific safety testing while accommodating diverse linguistic and cultural contexts
Unresolved issues
How to ensure industry-driven standards serve public needs rather than just commercial interests, and how governments can develop audit capabilities given technical skill gaps
How to balance minimum viable consensus with addressing stakeholder priorities that some groups see as essential while others resist
How to handle disagreements over social policy aspects of AI governance within technical standards frameworks
How to scale standards development to accommodate the vast diversity of languages, dialects, and cultural contexts globally
How to maintain standards relevance and implementation as AI capabilities rapidly advance and new risks emerge
How to coordinate between existing vertical industry standards and new AI-specific standards to avoid conflicts or gaps
Suggested compromises
Focus on process standards that are capability-agnostic while allowing specific evaluations and controls to be updated as technology advances
Develop modular, interoperable standards systems that can be adapted across different sectors and use cases without starting from scratch
Use regulatory references to standards as a quality floor while allowing market forces to drive higher standards through competitive differentiation
Combine global standards frameworks with local adaptations for specific use cases, languages, and cultural contexts
Balance technical measurement capabilities with statistical uncertainty estimation rather than demanding absolute safety guarantees
Create open governance models in standards bodies while providing accessible implementation tools for smaller companies with limited resources
Thought Provoking Comments
In the space of AI at the moment, actually, regulation has gone ahead and jumped to, you know, we’ve regulated and essentially made reference to standards that do not yet exist. So for places like Google DeepMind who have not invested heavily in the standard space in the past, this is now of an utmost priority because we actually need this to assist with implementation and compliance.
This comment reveals a critical paradox in AI governance – that regulations are being written that reference non-existent standards, creating an urgent need for industry to catch up. It highlights the cart-before-horse nature of current AI regulation and explains why major tech companies are suddenly prioritizing standards work.
This comment fundamentally reframed the discussion from ‘why do we need standards?’ to ‘we urgently need standards because regulations already assume they exist.’ It shifted the conversation from theoretical benefits to practical necessity and helped explain the sudden industry urgency around standards development.
Speaker: Joslyn Barnhart
The problem that we have is who contributes to that consensus. It shouldn’t probably be exclusively an industry perspective. You need to have more stakeholders or more constituencies that need to be represented in that definition… there’s a scientific element to that… but then there’s also the political element to that.
This comment cuts to the heart of legitimacy in standards-setting by identifying the tension between technical expertise and democratic representation. It acknowledges that defining ‘good enough’ isn’t purely technical but involves political and social value judgments.
This comment introduced crucial complexity to the discussion by highlighting that standards aren’t neutral technical artifacts but involve political choices about acceptable risk. It prompted deeper consideration of governance and representation in standards bodies, moving beyond purely technical discussions.
Speaker: Rebecca Weiss
You’re trying to provide a sense of, I’m not going to tell you that your system is, quote-unquote, safe or not. What I’m going to tell you is, under these considerations, under these conditions, under these assumptions, the estimated likelihood of a particular risky behavior is X. And then it is up to you as a risk management professional, a deployer, a developer, it’s up for you to decide, is that enough?
This comment fundamentally reframes AI safety from binary safe/unsafe determinations to probabilistic risk assessment with contextual decision-making. It clarifies that standards provide information for decision-making rather than making the decisions themselves.
This shifted the entire framing of the discussion from seeking absolute safety guarantees to understanding uncertainty quantification and risk management. It helped other panelists align on what standards can and cannot do, leading to more nuanced discussions about implementation across different sectors with different risk tolerances.
Speaker: Rebecca Weiss
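Weiss's framing, reporting an estimated likelihood under stated assumptions rather than a binary safe/unsafe verdict, can be made concrete with a small illustrative calculation. The Python sketch below (not from the session; the counts are hypothetical) turns an observed number of unsafe outcomes in benchmark runs into a point estimate plus a Wilson 95% confidence interval, leaving the "is that enough?" judgment to the risk manager:

```python
from math import sqrt

def risk_estimate(failures: int, trials: int, z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and Wilson score 95% confidence interval for the
    rate of a risky behavior observed in `failures` out of `trials` runs."""
    p = failures / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return p, centre - half, centre + half

# Hypothetical benchmark: 12 unsafe completions in 1,000 red-team prompts
point, lo, hi = risk_estimate(12, 1000)
# The report is then "estimated rate 1.2%, 95% CI roughly 0.7%-2.1%
# under these test conditions", not "the system is safe".
```

The interval, rather than the point estimate alone, is what lets a deployer decide whether the residual uncertainty is acceptable for their use case.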
If you look at the companies who have the type of resources to either set up their own standards and risk management systems internally, they’re typically pretty big companies… there’s a huge amount of companies who are being created every day, and they don’t have the resources to put this together… having the standard as effectively a mechanism for them to go directly to product and know that they’re going to comply with what the world or the community has set up is really important.
This comment highlights a critical equity issue in AI development – that without accessible standards, only large companies can afford proper risk management, potentially creating barriers to entry for smaller innovators. It reframes standards as democratizing tools rather than bureaucratic burdens.
This comment broadened the discussion beyond big tech companies to consider the broader AI ecosystem, including startups and smaller players. It added an inclusion and accessibility dimension to the standards conversation and helped justify why open, accessible standards are crucial for innovation equity.
Speaker: Etienne Chaponniere
Some of the mitigations we’re talking about associated with some of the more extreme risks that Frontier AI poses can be quite costly. And so I do think that there is just a strong industry incentive to work together to resolve this collective action problem… The worst thing for adoption would be a safety incident.
This comment reveals the economic logic behind industry cooperation on AI safety standards – that safety measures are expensive and a major incident would hurt everyone. It explains why competitors are willing to collaborate on standards despite competitive pressures.
This comment helped explain the seemingly paradoxical situation of competitors collaborating on standards by revealing the shared economic incentives. It shifted the discussion from viewing standards as regulatory compliance to understanding them as collective risk management, making the business case for cooperation clear.
Speaker: Joslyn Barnhart
Even when there’s no regulations, I think the standards still are useful… perhaps there’s also a way to differentiate for organizations, for enterprises… A way to differentiate themselves and say that, look, I’m adhering to a global standard. I’m demonstrating that I have actually implemented something that’s good enough.
This comment challenges the assumption that standards are primarily about regulatory compliance by highlighting their market differentiation value. It shows how standards can create competitive advantages and consumer trust even without regulatory mandates.
This comment expanded the discussion beyond regulatory compliance to include market dynamics and competitive positioning. It helped explain why companies like OpenAI pursue certifications voluntarily and added a business strategy dimension to the standards conversation.
Speaker: Lee Wan Sie
Overall Assessment

These key comments fundamentally shaped the discussion by introducing critical tensions and complexities that moved the conversation beyond surface-level agreement. Joslyn Barnhart’s observation about regulations preceding standards created urgency and explained industry motivation. Rebecca Weiss’s comments about consensus-building and uncertainty quantification provided technical depth while highlighting political dimensions. Etienne Chaponniere’s equity concerns broadened the scope to include smaller players, while Lee Wan Sie’s market differentiation point showed standards’ value beyond compliance. Together, these comments transformed what could have been a superficial discussion about the need for standards into a nuanced exploration of legitimacy, technical challenges, economic incentives, and democratic participation in AI governance. The discussion evolved from ‘why standards?’ to ‘how do we create legitimate, accessible, and effective standards that serve diverse stakeholders while managing unprecedented technological risks?’

Follow-up Questions
How do we define the characteristics of a system such that you can actually create the kind of uncertainty estimation that lives up to a statistical guarantee?
This addresses the scientific challenge of creating reliable measurement methodologies for AI systems that can provide statistically valid uncertainty estimates, which is fundamental to trustworthy AI standards.
Speaker: Rebecca Weiss
What are the risks that we care about and how do different jurisdictions prioritize different lists of risks?
This highlights the need to understand how different countries and regions may have varying risk priorities for AI systems, which affects global standardization efforts.
Speaker: Esther Tetruashvily
What are the net new risks versus existing risks where we don’t need to create something new?
This is important for avoiding duplication of effort and focusing standardization work on genuinely novel AI-specific risks rather than rehashing existing risk management approaches.
Speaker: Esther Tetruashvily
Who is best positioned to control a particular risk across the AI supply chain?
This addresses the critical question of responsibility allocation between model developers, application developers, and deployers in managing AI risks.
Speaker: Esther Tetruashvily
How do we have common ways of assessing whether we are still in a nascent stage and what levels of uncertainty do we have?
This would help the field objectively measure progress in AI safety and standards development rather than relying on subjective assessments.
Speaker: Amanda Craig
How do we create a system of interoperable standards that work across different deployment scenarios, use cases, and sectors?
This is crucial for creating efficiency in standards development and avoiding the need to start from scratch for each new application area.
Speaker: Amanda Craig
How can world governments audit sophisticated AI compliance programs given the technical skill gap?
This addresses a critical implementation challenge where regulatory bodies may lack the technical expertise to effectively oversee AI standards compliance.
Speaker: Audience member
How do we tackle language bias and build guardrails for multilingual AI systems, particularly for countries with many official languages like India?
This highlights the need for inclusive AI development that works across diverse linguistic contexts, which is essential for global AI deployment.
Speaker: Computer science student (audience)
How do we address social policy disagreements in AI governance standards when stakeholders disagree on what should even be measured?
This addresses the challenge of building consensus on AI standards when there are fundamental disagreements about values and priorities among stakeholders.
Speaker: Jules Polonetsky (audience)
How do we ensure standards are not just performative but actually serve public needs rather than just satisfying industry requirements?
This questions the legitimacy and effectiveness of industry-driven standards processes and highlights the need for genuine public benefit.
Speaker: Audience member
How do we move faster on standards development while maintaining quality and consensus?
This addresses the tension between the rapid pace of AI development and the typically slower pace of standards development processes.
Speaker: Lee Wan Sie
How do we future-proof AI standards as models and capabilities continue to evolve rapidly?
This is essential for ensuring that standards remain relevant and effective as AI technology continues to advance at a rapid pace.
Speaker: Implied by multiple speakers

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Regional Leaders Discuss AI-Ready Digital Infrastructure


Session at a glance: summary, keypoints, and speakers overview

Summary

This discussion focused on AI infrastructure development and opportunities for the Global South, featuring perspectives from government officials, international organizations, and development banks. Dr. Saurabh Garg opened by emphasizing four key elements for AI-ready data: discoverability through proper metadata structures, trustworthiness via quality assessment frameworks, interoperability through unique identifiers, and usability across systems through common standards and classifications. He also questioned whether current AI models are too infrastructure-heavy, noting that while AI systems require gigawatts of power, humans operate on just 100 watts.


The panel discussion revealed diverse national strategies for AI development. Uzbekistan is investing $300 million in AI development, including $200 million for government data centers with NVIDIA GPUs and a $5 billion energy-efficient data center project with Saudi partners, aiming for $1.5 billion in AI-related exports by 2030. Indonesia faces a “triple deficit” in data infrastructure, compute capacity, and AI talent, with ambitious plans to train 12 million AI professionals by 2030 through their Korika Academy and pentahelix platform involving government, industry, academia, civil society, and media.


The World Trade Organization highlighted that AI could increase global trade by 40% by 2040, but emphasized the need for digital infrastructure, skills, and policy readiness to realize these opportunities. Regional cooperation emerged as crucial for smaller economies that cannot achieve the scale needed for major AI investments alone. The discussion concluded that while AI presents significant opportunities for economic growth and development, success requires balanced approaches addressing infrastructure, skills development, and appropriate regulatory frameworks tailored to each country’s specific context and needs.


Keypoints

Major Discussion Points:

AI-Ready Data Infrastructure: Dr. Garg emphasized four critical elements for making data AI-ready: discoverability through proper metadata structures, trustworthiness through quality assessment frameworks, interoperability with unique identifiers for data communication, and usability across systems through common standards and classifications.


Digital Infrastructure Gaps and Investment Strategies: Multiple panelists discussed the “triple deficit” facing developing nations – inadequate data/compute infrastructure, shortage of AI-skilled talent, and limited connectivity. Countries like Uzbekistan are investing $300 million USD in AI development, while Indonesia targets training 12 million AI talents by 2030.


Regional Cooperation vs. National Sovereignty: The discussion explored balancing collaborative approaches to share infrastructure costs and expertise while respecting data sovereignty. The WTO representative highlighted how regional frameworks like ASEAN’s AI policies and trade agreements can provide economies of scale for smaller nations.


Private-Public Partnership Models: Panelists shared various approaches to mobilizing capital, from Uzbekistan’s partnerships with Chinese companies like Huawei, to Indonesia’s collaboration with hyperscalers like Microsoft, demonstrating different models for attracting international investment and expertise.


Contextual AI Solutions and Skills Development: The conversation emphasized that AI applications must address specific local needs rather than adopting one-size-fits-all approaches. Examples included Indonesia’s focus on climate-health nexus for disaster-prone areas and the recognition that employment concerns may outweigh automation benefits in some contexts.


Overall Purpose:

The discussion aimed to examine digital infrastructure challenges and opportunities for the Global South in AI adoption, covering the full spectrum from foundational compute infrastructure to skills development, policy frameworks, and practical implementation strategies across different national contexts.


Overall Tone:

The discussion maintained a consistently optimistic yet pragmatic tone throughout. Panelists were enthusiastic about AI’s potential while being realistic about implementation challenges. The conversation was collaborative and solution-oriented, with participants sharing specific strategies and investment figures. The tone remained constructive even when addressing significant gaps and constraints, focusing on actionable approaches rather than dwelling on obstacles.


Speakers

Speakers from the provided list:


Dr. Saurabh Garg – Secretary (specific ministry/department not mentioned), focuses on AI-ready data, works with ministries and governments across the country


Arndt Husar – Moderator/Host of the fireside chat discussion on digital infrastructure


Johanna Hill – World Trade Organization (WTO) representative, works on AI and trade policy


Zuhriddin Shadmanov – Ministry of Digital Technology, Uzbekistan, works at the center of development of AI and digital economy


Hamam Riza – Professor, Co-chair of the National AI Roadmap Indonesia 2030, President of the Collaborative Research and Industrial Innovation in Artificial Intelligence


Mio Oka – Asian Development Bank (ADB) Country Director for India


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speakers names list.


Full session reportComprehensive analysis and detailed insights

This panel discussion at an AI summit in India brought together government officials and development finance representatives to examine AI infrastructure challenges and opportunities for developing economies. Moderated by Arndt Husar, the conversation explored practical approaches to building AI capabilities while addressing real-world constraints facing the Global South.


Foundational Infrastructure Requirements

Dr. Saurabh Garg opened the discussion by outlining four essential elements for AI-ready data infrastructure. First, discoverability requires well-defined metadata structures that enable data to be easily found and understood across systems. Second, trustworthiness depends on comprehensive quality assessment frameworks that ensure data credibility. Third, interoperability relies on unique identifiers that allow different datasets to communicate effectively. Finally, usability across systems requires common definitions and standards to prevent confusion and ensure consistency.
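Dr. Garg's four elements can be made concrete with a small sketch. The record fields, threshold, and identifiers below are illustrative assumptions, not an actual government schema:

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Illustrative AI-ready dataset record (hypothetical schema)."""
    dataset_id: str       # interoperability: unique identifier other datasets can link to
    title: str            # discoverability: human-readable name
    keywords: list        # discoverability: searchable metadata
    quality_score: float  # trustworthiness: 0.0-1.0 from a quality assessment framework
    code_standard: str    # usability: shared classification / common definitions

def is_ai_ready(rec: DatasetRecord) -> bool:
    """Minimal gate combining the four elements (threshold is an assumption)."""
    return (
        bool(rec.dataset_id)          # can other datasets reference it?
        and bool(rec.keywords)        # can it be found?
        and rec.quality_score >= 0.8  # is it credible enough?
        and bool(rec.code_standard)   # does it use common standards?
    )

crop_data = DatasetRecord("IN-AGRI-0042", "District crop yields",
                          ["agriculture", "yield"], 0.91, "common district codes")
print(is_ai_ready(crop_data))  # True
```

The point of the sketch is that the four elements are checkable properties of a dataset, not aspirations: a record missing any one of them fails the gate.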


Dr. Garg also raised concerns about the energy efficiency of current AI models, noting the stark contrast between AI infrastructure requiring gigawatts of power while human intelligence operates on merely 100 watts. This comparison questions whether the industry is pursuing the right technological path and highlights the need for more efficient approaches.
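The 100-watt figure holds up arithmetically. A quick back-of-the-envelope conversion, assuming the commonly cited 2,000 kcal daily diet:

```python
# Convert a human's daily energy intake into average power draw.
KCAL_PER_DAY = 2000
JOULES_PER_KCAL = 4184          # thermochemical calorie
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

energy_joules = KCAL_PER_DAY * JOULES_PER_KCAL  # ~8.37 MJ per day
power_watts = energy_joules / SECONDS_PER_DAY   # average power over the day

print(round(power_watts, 1))  # 96.9 -- roughly 100 W, versus gigawatts for AI infrastructure
```

At roughly 97 watts, human intelligence runs on about seven orders of magnitude less power than a gigawatt-scale data center, which is the gap the comparison is meant to dramatize.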


National Strategies: Indonesia’s Comprehensive Approach

Hamam Riza, co-chair of Indonesia’s National AI Roadmap 2030 and president of the Collaborative Research and Industrial Innovation in Artificial Intelligence, outlined Indonesia’s multi-faceted strategy for AI development. He referenced a “triple deficit” facing developing nations, though technical issues with the audio made his detailed explanation unclear.


Indonesia’s approach emphasizes culturally aligned AI development, recognizing the need for large language models that reflect local contexts rather than relying solely on foreign-developed systems. The country is focusing on climate-health applications, particularly predicting climate-sensitive infectious diseases like malaria and dengue, which are critical challenges for disaster-prone regions.


The Indonesian strategy involves collaboration with international partners, including Microsoft and other major technology companies, to build domestic capabilities while leveraging global expertise. Their approach includes comprehensive skills development programs aimed at creating sustainable capacity building across the country’s vast population.


Uzbekistan’s Investment and Partnership Model

Zuhriddin Shadmanov from Uzbekistan’s Ministry of Digital Technology outlined his country’s significant financial commitment to AI development. The government has allocated $300 million for AI advancement, with $200 million specifically designated for government data centers equipped with NVIDIA GPUs. The country aims to attract $1 billion in AI-related infrastructure investment by 2030.


Uzbekistan’s strategy demonstrates comprehensive ecosystem thinking, combining domestic investment with strategic international partnerships. Their collaboration with Huawei encompasses AI infrastructure development and network advancement. Additionally, partnership with the UAE’s training programs has registered over one million participants, showing how international cooperation can rapidly scale skills development.


The country offers attractive tax incentives and customs exemptions for investors willing to build data centers worth over $100 million, demonstrating how policy frameworks can mobilize private capital for infrastructure development. Their approach includes establishing data lakes that collect government sector data, making it available to SMEs and startups at low cost to stimulate innovation.


Trade and Economic Opportunities

Johanna Hill from the World Trade Organization highlighted AI’s potential economic impact, projecting that AI and trade integration could increase global trade by 40% by 2040 – what she termed the “40 by 40 effect.” However, she emphasized that realizing these opportunities requires comprehensive digital infrastructure, skills development, and policy readiness rather than simply technological deployment.
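For scale, the "40 by 40" projection implies a modest compound annual rate. The calculation below assumes a 2025 baseline year, which is an assumption for illustration and not stated in the WTO projection:

```python
# Implied compound annual growth rate for 40% total trade growth by 2040.
total_growth = 1.40   # "40 by 40": trade grows 40% in total
years = 2040 - 2025   # assumed baseline year of 2025

cagr = total_growth ** (1 / years) - 1
print(f"{cagr:.1%}")  # 2.3% -- about 2.3% per year compounded
```

Seen this way, the headline number is less about explosive growth in any one year and more about AI sustaining a steady uplift to trade over a decade and a half.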


Hill noted that regional cooperation is particularly crucial for smaller economies that cannot achieve the scale needed for major AI investments independently. Regional frameworks and trade agreements can help countries develop coordinated approaches and achieve economies of scale.


Development Finance Perspective

Mio Oka from the Asian Development Bank provided a pragmatic view of AI infrastructure investment, emphasizing that service-level applications in agriculture, water supply, and irrigation may offer more immediate impact given the scale of populations in the Global South. The ADB’s strategy focuses on mobilizing private capital through integrated approaches that combine infrastructure development with AI considerations.


Oka shared a revealing anecdote about proposing AI-based fish feeding systems for aquaculture development. The negotiations ended abruptly when government officials prioritized employment concerns over technological efficiency. This experience highlights the critical importance of understanding local development priorities and ensuring that AI solutions align with broader socio-economic objectives rather than pursuing technology for its own sake.


Skills Development Across Society

The discussion revealed sophisticated approaches to skills development that extend beyond technical training. Uzbekistan’s strategy covers all levels of society, from students and professionals to public servants, recognizing that successful AI adoption requires broad-based understanding rather than concentrating solely on the technology sector.


Indonesia’s training programs emphasize “train the trainers” approaches aimed at creating sustainable capacity building that can scale across large populations. The skills challenge extends beyond individual capacity to institutional readiness, with both countries emphasizing the importance of preparing government institutions for AI implementation.


Balancing Innovation with Employment Concerns

A significant theme emerged around balancing AI’s potential for economic transformation with legitimate concerns about employment displacement. Oka’s aquaculture example illustrates how technological solutions that seem beneficial from an efficiency perspective may conflict with development priorities focused on job creation and poverty reduction.


This tension requires nuanced approaches that consider AI’s role in economic development rather than simply technological advancement. The emphasis on human-centered AI development in national strategies reflects recognition that successful AI adoption must enhance rather than replace human capabilities, particularly in economies where employment generation remains critical.


The Path Forward

The discussion highlighted that AI infrastructure development must be understood as part of broader development strategies rather than as a standalone technological challenge. Success requires integrated approaches that address infrastructure, skills, governance, and practical implementation simultaneously while remaining sensitive to local contexts and development priorities.


Moderator Arndt Husar referenced the ITU’s framework of “3 S’s” – solutions, standards, and skills – as essential elements for AI development. He also noted the ADB’s partnership with the summit and their working group on democratization of AI compute, representing efforts to address shared infrastructure challenges for smaller economies.


The conversation demonstrated mature understanding of AI development challenges, moving beyond simple technology adoption to comprehensive ecosystem thinking. The emphasis on human-centered approaches, regional cooperation, and context-sensitive strategies suggests that developing nations are creating sophisticated frameworks for AI adoption that balance technological advancement with local development needs.


Session transcriptComplete transcript of the session
Dr. Saurabh Garg

models or talent, how we can ensure that it works in a federated manner. I think I’ll just, I was discussing and maybe I’ll just focus on one piece, which is on AI-ready data, if I can focus on that and leave it for the esteemed panelists on the large number of issues. Some of the elements that we are focusing on include, one is on how to make it more discoverable. That would be a very basic point to ensure that it’s discoverable. Second is how to ensure that the data sets are trustworthy, and that would be the second element. The third would be on the interoperability, and the fourth would be on the usability across systems.

So on discoverability, the metadata structure is extremely important. That’s the first element: having a metadata structure which is understandable, well defined, and can be used across systems. Second, on the trustworthy part, would be the quality assessment. We’ve developed a kind of quality assessment framework which focuses on the quality of the data, to ensure credibility of the data. On interoperability, a lot would depend on whether data can talk to each other, what unique identifiers we have, which will ensure that for different data sets, whether they are talking about the same thing or different things, we are able to identify that. And the fourth, usability across systems, would be based on the standards and classifications that we have, whether it’s common definitions and common standards, so that two sets of data don’t refer to the same thing in different ways. I suppose this really forms the bedrock of making data AI-ready, and that’s something that we’re working on with ministries and governments and state governments across the country.

And given the importance of data sets in the AI infrastructure, it has an important part to play. The other aspect on data is also on its dissemination and access, on how we are able to ensure that data sets in themselves have value beyond AI and what kind of dissemination mechanisms can be there which will make it usable for people to leverage them for business while preserving the privacy aspects of individual data. One other thing, since we are talking about AI infrastructure and the panel will be having discussions on it, I just wanted to focus on one thing that, I think, has also come up in discussions over the past couple of days: the existing models seem to be extremely infrastructure heavy, whether compute infrastructure or data infrastructure.

And every time a new query is put out to the model, is it necessary for the billions of bytes to be again run through again and the gigawatts of power that we need? And are there alternative mechanisms available? And I just want to highlight yesterday one comment which stays with me, is what Vishal Sikka had made, that when we talk in terms of AI infrastructure, we talk in terms of gigawatts of power. Compared to that, a human being requires 2,000 calories, which is only 100 watts. So are we missing something out there in the infrastructure? And perhaps a greater focus on the models going forward is there. So I’ll stop here. Thank you for inviting me. Thank you.

Arndt Husar

Thank you so much, Secretary. And I’m now going to join the fireside chat here. The discussion that we have planned will cover various different aspects of digital infrastructure. So when you hear digital infrastructure, you might be first thinking of the data centers and the compute. But we actually want to have a conversation that also encompasses the solution side, the skill side, so that we really look at the whole spectrum of infrastructure, even standards. So these three S’s were introduced yesterday by ITU’s head: the three S’s of solutions, standards, and skills. Kind of a nice way to open up to the panel. We have different perspectives here today. And we’re going to try to stick to time.

But let me introduce you to this panel by asking the first question. And I would request that each of the panelists then quickly states their name and their institution to shorten the time. What we would like to hear from each of you is that from your vantage point, what do you see as the most critical gap, the most exciting opportunity for the global south in generating positive impact through AI? So we’ve asked each of them to think about a concern or opportunity and then also to maybe link it to strategy or vision. So maybe I’ll first go to the lady on my right from the WTO. May I request you for your perspective on the big challenge or opportunity?

Johanna Hill

Thank you so much to the Asian Development Bank for the invitation and the organization of this interesting conversation. My name is Johanna Hill. And I… I am with the World Trade Organization. And let me start out with the opportunity side of the equation. We really are seeing that AI and trade, when they work together, can offer important opportunities for developing countries and low-income economies. Our projections at the Secretariat have led us to believe that by the year 2040, trade could grow by almost 40%. So that would be the 40 by 40 effect. But then here come the caveats, right? For that to happen, for those opportunities to really be realized, one element that is really important is the digital infrastructure, the skills that you mentioned, and policy readiness.

You know, we’ve heard throughout this conference and before about the important opportunities and applications in different sectors: in agriculture, health care, new services being developed as we speak, new services and goods that are becoming more AI-related, more tradable. And we are also seeing that that can have important opportunities for the smaller firms in developing economies, and in the big economies also. We did a survey with the ICC that we published last year on the opportunities for businesses and for small and medium enterprises. And of those respondents, many of them were saying that they’re already using AI, of course more so the bigger companies in more developed economies. But even the smaller firms are also seeing opportunities in areas like market intelligence.

So we do see that it can be a game changer.

Arndt Husar

Fantastic. And one of the things that I’ve been hearing a lot at this summit is that specifically the SMEs, the technology has moved so fast that there’s a huge adoption gap and understanding of how they can actually integrate the AI into their business models, into their little shop that takes a picture of a product and uploads it quickly. AI can be super helpful in this but hasn’t yet reached that audience. Maybe I’ll turn it over to you. Maybe I’ll turn to the other side and request our colleague from Uzbekistan to share his perspective.

Zuhriddin Shadmanov

Thanks for the question. Thanks for having me here. Let me talk about the gaps which exist in our country. I think the first one is unequal access to compute capacities and, I think, advanced AI digital skills. In that sense, these foundations play a crucial role, because if you don’t bridge those gaps, many countries, nations, will be just the consumers of AI rather than creators of AI value. So in that sense, Uzbekistan is advancing strategic ideas. The first one is developing human skills across all strata of our nation, starting from students, professionals, and public servants. So we are not concentrating only on the tech sector; we try to cover all the spheres of our nation.

And secondly, we are developing our infrastructure. For that reason, our government is allocating around 200 million USD to create our own government data center with supercomputers and GPUs acquired from NVIDIA. And also we are working with DataVault, a Saudi Arabian company, to create an energy-efficient data center based on renewable energy. It is a very big project, around 5 billion USD, and the data center will be… put into operation within two, three years, hopefully. And also we are trying to develop our government strategy. We adopted a Strategy 2030 last year, and by the year 2030 we are trying to reach exports of AI-related products of 1.5 billion USD.

Arndt Husar

Fantastic. So either by coincidence or planning, you touched on the three S’s: the solutions, the skills, and the standards, the policies. Fantastic, thank you. So a very comprehensive view with a multi-pronged strategy. But you didn’t introduce yourself, so I’ll just say you are with the Ministry of Digital Technology, an institution quite focused at the center of the development of AI and the digital economy. Fantastic. Okay, let me turn back to this side. So from Indonesia, we have someone who’s actually in this skills domain. Would you like to share with us what you, from your vantage point, perceive as the key opportunity or challenge?

Hamam Riza

All right, thank you. Hi, everyone. I am Professor Hamam Riza. I am the co-chair of the National AI Roadmap Indonesia 2030, and also the president of the Collaborative Research and Industrial Innovation in Artificial Intelligence, the organization that was founded in 2020 when we launched our first national AI strategy towards Indonesia 2045. That is the vision, and I think AI will take us there, really. So from my vantage point, I think we need to move beyond numbers, even though we understood that the AI economy will create millions of jobs and also a potential economy of up to 1 trillion. From the Indonesian perspective, the Global South basically faces a triple deficit, and that is the most challenging thing. The first one is certainly about the data and compute infrastructures; we are still lacking the connectivity, the networks. But as I have marked down here, in order for us to smooth out all the AI use cases for public services, for health services, for agriculture, and many other things, you need to basically solve this triple deficit.

And that is also regarding how you need to develop the AI talents. There is a significant lack and scarcity of high-quality localized data centers tailored to Indonesia, as well as a shortage of AI-skilled talents that limits long-term innovation capabilities. So our government is addressing these gaps through the national roadmap that I co-chaired. And our primary concern is how we can tackle the digital divide and the AI divide created by generative AI across many of the public sectors in Indonesia, and in general in the Global South. While ninety-two percent of our skilled knowledge workers are already using, you know, very basic AI tools, they need to be aware of all the risks, you know, applied to the output of these AI tools. So those are the things that I think will be my point of view towards closing the gaps for the Global South, and especially for Indonesia.

Arndt Husar

Thank you so much. And it’s of course one of the most populous countries in Southeast Asia. It’s a very young workforce. 270 million. Yes, and the startup buzz in Indonesia is also palpable, so lots of potential in Indonesia. Last but not least, I want to go to our Asian Development Bank Country Director for India. Mio, can I request you to share your views?

Mio Oka

Thank you. From ADB’s perspective, of course, foundation is important. We need to have a power supply, stable power supply, and the devices that people have access to, and reliable broadband, even in our office. So that’s a foundation. But we are in India. Do we expect India to put so much money on foundation to have a ground-level impact? But India has a scale. So what we need to focus on is, as others already said, is a service. So we work on agriculture sector, water supply, and even irrigation sector where AI is widely applied. Because of the scale of the people that we have in the global south, while we work on the foundational infrastructure, at the same time we really have to work on how AI can be applied at the service level.

And this is where ADB would like to support. Thanks.

Arndt Husar

Thank you, Mio. So we have, as you can see, different perspectives at the same space of how do we get at grappling with this massive development opportunity that AI represents. For this first round of questions after the opening, let’s go into the foundations a little bit. I want to go to Uzbekistan again. And you already mentioned you have… ambitions on infrastructure, policy and skills. Now, how do you actually balance this in terms of priority setting? Can you go for all of them all at once? How do you finance it? Does this keep you up at night, how you balance these three different strategic objectives?

Zuhriddin Shadmanov

It’s a tough question, because we are a developing nation and money is always a scarce element for us. Anyway, our government is trying to allocate enough resources so we can cover all the aspects of AI development, to create the AI ecosystem. The first one, as I already mentioned, is the Strategy 2030, which sets our priorities, which is human-centered AI. And secondly, the government announced about 300 million USD for the development of AI. The money goes, first of all, to implementing projects in the government sector and in the social sphere: healthcare, education, transportation, cybersecurity, et cetera. And also the government is trying to provide the necessary infrastructure: building data centers, acquiring GPUs. And also we are now creating a data lake which will be collecting the data of the government sector, so SMEs, startups, and others who want the data can use those data for free or for some money, usually free. And anyway, we are trying to work with other countries. As I already said, we have a good project, five million AI leaders; the United Arab Emirates helped us to build this program, and it was launched. Now over 1 million people have already registered and go through training and certifications there. Also, we are trying to attract foreign investments, and the government announced very good tax incentives and other incentives. For example, if you want to invest in Uzbekistan and build a data center which costs over 100 million USD, you will get very generous tax incentives and customs exemptions, et cetera. So we are trying to balance cautiously, but still providing the necessary conditions to build the ecosystem.

Arndt Husar

Very impressive. And since I had the opportunity to chat with somebody else from Uzbekistan this week, I also know that AI rollout has entered your KPIs as a public servant, so that’s always going to make a difference. Let’s go back to Indonesia now. Of course skills is your comfort zone, but can I ask you about the infrastructure side? I know that the hyperscalers, the big cloud companies, international companies, have invested significantly in Indonesia. Now how do you see that moving into the AI age, and is that a big step forward for you? Is there a lot of activity on additional infrastructure build-out? What do you see happening in that space?

Hamam Riza

Yes, so I see these questions and I’m really eager to answer this, because certainly our infrastructure is undergoing a transformation, really, to meet the demands of AI. And certainly, with the availability of many of these new infrastructures coming from the government and also from business, and benchmarking with many other countries, including regionally in ASEAN: take, for example, the presence of the global hyperscalers in the country, which have established actually multiple cloud regions in Indonesia. But certainly this needs to be amplified, because as you know, ten years in many other technologies is one year in AI, right? That’s what they are saying. So how do you fulfill this demand for AI compute and massive data for training? Because you need to build up your own, for example, large language model that can align with our cultures.

So those hyperscalers need to move beyond just being a host for, you know, many of the AI models from outside of the country, right? And the infrastructure readiness is also being shaped by our shift toward sovereign AI. We are now preparing the presidential regulation, actually, to push forward the innovations and the investment, and we need to collaborate with many of the hyperscalers, and we are ensuring that the physical infrastructures, like the GPU data centers and localized edge computing, yeah, are going to be present in the country. And one thing that the Vice Minister of Communication and Digital Affairs mentioned to me yesterday is that we are struggling building up the ecosystem. That means there will be special economic zones for these hyperscalers and new data centers being brought forward, in order to align and be part of our national AI roadmap, the AI journey in Indonesia.

And we are going to prepare ourselves in this AI transformation so that our data… digital consumer… is going to be part of our transformation. The technology is accessible for all. Even, you know, what we are doing right now, participating in the India AI Summit, speaks about democratizing AI for all. So I think that is a very significant theme that is also part of our national AI roadmap. Thank you.

Arndt Husar

Fascinating. And, again, I think you as a large economy, you have that opportunity similar to how India is also portraying it this week of really wanting to develop your own, you know, language models and really playing in that league. However, there are many countries, also countries we work with, who don’t have that kind of scale and who need to look at it quite differently. So the different nuanced strategy that you mentioned of investing into the big AI, the small AI, the edge AI, all these different pieces, very interesting. With this dynamic, can I turn to WTO? How do you see trade competitiveness evolve? That’s really your space where you are at. What are those interesting approaches that are emerging which could help support maybe the cross-border collaboration while you also, of course, respect data sovereignty?

Countries will need to collaborate, right? There’s not enough money to go around for everybody to play in that top league. So trade competitiveness, what do you see there?

Johanna Hill

So I was talking about the opportunities of trade growing through the use of AI. Okay. And if you think about it, that growth comes from the lowering of trade costs. It comes from AI-enabled goods and services crossing borders. And also, new products and services are going to be, and are being, invented with AI. And when you talk to business, when we asked through the survey, some of the constraints that they are having and doubts in the use of AI have to do with competing regulations and having a high cost in trying to comply. And fragmentation, in the area of data, for example, can actually become a problem. And so we developed and published last year in the World Trade Report what the Secretariat calls the AI Trade Policy Openness Index to help regions measure how they’re doing in that space.

And in there, you can see, for example, that some of the lower-income economies can seem quite open in that space. But it might be because of the lack of regulation. And when you talk about AI, I think what a lot of countries and customers are saying is that AI alone is not enough. What customers are looking for is, you know, AI that is responsible; you know, trading in AI with trust. So just not having regulation can also be a disadvantage to your competitiveness. So starting to look at those things that way. And then in the part of the solution side, definitely the regional approaches are important, those collaborations, and sharing infrastructure, for example.

When you don’t have those economies of scale, those huge investments get in your way. And then not every single company or every single country is looking to be on the edge of things necessarily, but we do want to adopt AI to boost our economy and our competitiveness.

Arndt Husar

Well, thank you so much. And I don’t know whether people heard about this new initiative that the working group on the democratization of AI, of compute, has come up with. ADB is actually supporting that. Really, this is… This has not yet evolved, right? This collaboration on the infrastructure. How do you share that properly across borders? It’s still new territory and very interesting to see. Can I turn to my colleague, Mio, and request her to talk a little bit about the engagement that we’ve had with member countries. What does demand to ADB actually look like in this space?

Mio Oka

try to invest in the township planning and the implementation. Also, we can have a water supply road project that can be connected to the industrial parks so that the private sector can invest in the digital-related facilities. So mobilization of private capital is one. And the second is it’s an application across sectors. We just don’t look at the single sector project. As I said, we can work on road and water at the same time. And while we work on the Agri-AI project, we work with the building capacity of that institution as well so that they can handle the AI. And the third is the knowledge. As I said, we support quite a bit of this master planning or the strategy development at the municipality or the state or even at the regional level.

We see India. And you’ve been coding on science. India. And of course we always bring in the international experts so that India can learn and also this is a good opportunity for India to expand their capacity to outside countries. Thank you.

Arndt Husar

Thanks, Mio. I actually had a follow-up question for you that would have touched on this de-risking and catalyzing investment topic. Maybe I’ll ask you to repeat that now, but let me just add: our digital sector office, being fairly new, is getting a lot of demand for, guess what, data centers. And, you know, we welcome that. We have conversations with government, but I’m truly impressed with the conversations at the summit here. Earlier this morning I attended one where the state of Telangana was sharing what they’re thinking about, and they’re really quite cognizant that you don’t take the kids to school in a Ferrari (not many kids fit in, and delivering milk in one doesn’t make sense), so we need to look at what type of compute is needed for what. And I think we in ADB are also learning more and more how to engage in these conversations properly. We’re learning alongside everybody else in this room, probably. And that’s an important distinction to make, because it will influence the financing bit: how much do we need, what do we actually need and when, and how do we make that investment sustainable. I just wanted to add that; it’s an insight from this morning that I couldn’t not retell.

Let me go back to Indonesia and ask you about cutting-edge skills, because you're in that space. I found it very interesting that you said you're co-president, or co-chairing, this platform where you bring together the private sector, the education sector, and government. How is your organization doing that in practice? How do you bring these people together and get them into action mode?

Hamam Riza

Okay, thank you. A very important question, I think. I would describe what we are doing in three pillars, especially as we chair the AI ecosystem in Indonesia, where government and industry are involved along with academia, civil society, and media; we call this the pentahelix platform. We discuss three pillars. The first one is talent, certainly; the second one is infrastructure; and the third is how we can articulate use cases across all public services and businesses. For talent, Indonesia has set a quite ambitious target: we want to have at least 12 million AI talents by 2030. For us this is fairly challenging, considering that we are still lacking around 3 to 5 million talents as of now, right?

So what we are trying to achieve, together with the whole ecosystem, is to establish an academy, the Korika Academy, where we promote not only upskilling and reskilling of civil servants and other workers, but also training the trainers. We work with several of our friends; I will note here that Elevate Indonesia, for example, part of Microsoft, along with many other big tech companies, works together with our ministry to establish this program for talent.

Arndt Husar

And is it a digital academy, or a physical one?

Hamam Riza

It's a digital academy with an LMS, a learning management system, and many other things. We also established the Kodika chat, which is actually a chatbot for this training and upskilling program that we do with the government. Beyond talent, we are aggressively looking at how we can nurture this talent to work in data centers, in startups and incubators, and to establish some of the most demanding use cases. So the third pillar: we are working on the climate-health nexus, on how we counter and predict climate-sensitive infectious diseases such as malaria and dengue. And for the past three years we have run Climate Smart Indonesia, which has attracted many universities, as well as NASA pollution and air quality programs, to look into these use cases.

So we can basically reach out to many areas, the health and disaster-prone areas, because Indonesia is a supermarket for disasters. You can have hydrometeorological disasters, you can have ecological ones, you can have many things. So you need to…

Arndt Husar

I’m not buying any of them.

Hamam Riza

Of course, we don’t want to be shopping.

Arndt Husar

So really amazing, this focus on the use cases, and prioritizing those that match your country's needs. Yes, thank you. And give the highest impact, right? Super. I'll turn back to Uzbekistan and ask you to elaborate a little on private sector capital mobilization. You shared ambitions really across the board, in terms of infrastructure, skills, and so on. Uzbekistan as an economy still has a good chunk of traditional economy, but also a very active startup sector; I'm learning more and more about how dynamic people are around the region, Central West Asia, at finding scalable solutions. But these are still growing companies. For mobilizing capital for your infrastructure, are you going to need the big ones, the international partners? What are you thinking about private capital mobilization; what's your strategy there?

Zuhriddin Shadmanov

First of all, I should mention that, according to the documents adopted, by 2030 we are planning to attract around 1 billion USD of investment for creating AI-related digital infrastructure, and part of this goes to creating data centers. We are also working with our Chinese partners, the biggest IT company, Huawei, which is also involved in creating the AI ecosystem in Uzbekistan. The main lines are: first, upskilling public servants to help them with AI adoption; then creating the necessary training programs for specialists; and creating AI infrastructure like data centers and data lakes. And we are transitioning to 5.5G and also working on 6G with Huawei.

So, yes. And as you already mentioned about startups, we are developing our own startup ecosystem. We established many venture funds and funds of funds, and there are also many emerging private funds, so they are now trying to invest in startups, attracting private funds and private investments. Currently we have allocated around 50 million USD for AI startups, and they are already providing services both for the public and for businesses. So we are trying to balance and attract all the stakeholders of the ecosystem.

Arndt Husar

Fantastic. So you're mixing your public funds, which you invest for example in the fund of funds, with more investment from domestic investors but also from abroad. That's amazing. And then you have large industry partners that are interested in the market; like Indonesia did with some of the hyperscalers, you are bringing in Huawei and Chinese partners. So basically it's a mix of the different strategies you mentioned. That's fascinating. Again, Uzbekistan is one of the larger countries in Central West Asia, and Indonesia likewise in its region. And then, of course, I want to come back to this point about the diversity of country contexts. That's both a challenge and an opportunity.

I mean, for us at ADB it adds complexity, of course, because we need to respond to these different needs. But from the perspective of the WTO, is there a specific area, such as maybe interoperability standards, AI talent mobility, shared datasets, or joint research, where you see regional cooperation making the biggest difference?

Johanna Hill

I think it's a bit of a matter of context, right? At the regional level and at the national level. We've talked about the digital divide and how we overcome it, and the role of infrastructure and skills and the rest. At the WTO Secretariat we've been very concerned about this issue, so we partnered with the World Bank and did a study called Digital Trade in Africa, a general one, and then some country pilot studies to look at the situation. And we did see that some of the regional work, like the AfCFTA and its digital protocol, really made a difference in how it helped bring countries along and set a certain standard in many of the countries we studied.

Then we did a similar study with the World Bank in Latin America and the Caribbean, and the Inter-American Development Bank partnered with us. We saw there that the situation was a bit different: more diversity in terms of regulation and trade policy, and in infrastructure needs. So there's basically no one-size-fits-all. But we have seen regional banks playing a very important role in helping countries that want to go the regional way. I know ASEAN has done important work on AI policy, for example, and other regions are also working in that sense. And I do think that brings economies of scale to a certain extent; it sometimes even helps you resolve questions on electricity.

And so I think there’s a lot of opportunity and further work to be done at the regional level.

Arndt Husar

Thank you. And with the regional cooperation and integration agenda being top of mind for ADB as well, let me ask my colleague to tell me a little about her perspective. She represents ADB in a very large economy in South Asia, but we do have regional cooperation happening around the region. Mio, what do you see as opportunities with regard to regional cooperation and integration in this digital infrastructure space?

Mio Oka

Right, thank you. So again, ADB, we support India; I'm in India, and our office covers India. We are here to support Viksit Bharat, so that India can grow at the pace needed to become a developed country by 2047, and AI is a necessary means to do that. But as everybody knows, we are the regional bank; nobody around us should be left alone or left behind. So ADB, through this kind of forum, has to be a catalyst, a catalyst for the Global South. Of course, there are many countries that cannot invest at scale. What are the solutions? We are here to support those solutions, and we also support big tigers like India in supporting those countries too.

That's number one. And number two is the balanced approach. When we talk about regional cooperation, or working in a small country: about five years ago I was quite shocked. I went to a small neighboring country here, where I was working in the agriculture sector, and I was proudly introducing aquaculture using an AI-based fish feeding system. My negotiation ended in three seconds, because the government said, no, we are interested in employment; what are you talking about? AI-based feeders will just reduce the number of people who work there. That was a big lesson learned for me. We need an ecosystem, but even when we talk about AI, the solution may be elsewhere.

So, as you introduced, skills are super important. With that understanding, again going back to India, we've invested more than 5 billion in skills, including the PM set, working across over 10 states, and now AI-based skills are a big part of it. So we are always mindful of regional cooperation, and we should not leave any country behind; but the solution, again, may not be as direct as we expect. Thank you.

Arndt Husar

Thank you, Mio. And we have one minute left on the clock, which throws a spanner into my closing; but the thought that AI may not be the solution for everything is, I think, a fair ending. We need to understand the problems and see whether AI can be deployed, whether it can make a difference, and how it should be supported through skills development, infrastructure investments, and regulation. So I want to thank my panel for a very interesting tour de force of this topic. I also want to take the opportunity to thank the audience, and India for hosting this amazing summit. As ADB, we've been proud to be a partner of it; it's been truly fascinating, and we're quite proud to have been part of this journey.

Thank you all for attending, and thanks to the panel. Let's give them a round of applause for sharing their views. Thank you very much.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Saurabh Garg
3 arguments · 143 words per minute · 569 words · 238 seconds
Argument 1
Four key elements for AI-ready data: discoverability through metadata structure, trustworthiness through quality assessment, interoperability through unique identifiers, and usability through common standards and classifications
EXPLANATION
Dr. Garg outlines a comprehensive framework for making data AI-ready, emphasizing the importance of structured metadata for discoverability, quality assessment frameworks for trustworthiness, unique identifiers for interoperability between datasets, and common standards to ensure consistent definitions across different data sources.
EVIDENCE
Mentions developing a quality assessment framework and working with ministries and governments across the country to implement these standards
MAJOR DISCUSSION POINT
AI-Ready Data Infrastructure and Standards
AGREED WITH
Zuhriddin Shadmanov, Hamam Riza, Arndt Husar
Argument 2
Data dissemination and access mechanisms needed while preserving privacy aspects of individual data
EXPLANATION
Dr. Garg emphasizes the need to create mechanisms that allow data to be valuable beyond AI applications while ensuring individual privacy is protected. This involves balancing accessibility for business use with privacy preservation.
MAJOR DISCUSSION POINT
AI-Ready Data Infrastructure and Standards
Argument 3
Current AI models are extremely infrastructure-heavy, requiring gigawatts of power compared to human brain’s 100 watts, suggesting need for alternative mechanisms
EXPLANATION
Dr. Garg questions the efficiency of current AI infrastructure, highlighting the massive power consumption required for AI models compared to human intelligence. He suggests this indicates a need to explore more efficient alternatives.
EVIDENCE
References Vishal Sikka’s comment that AI infrastructure requires gigawatts of power while humans need only 2,000 calories (100 watts)
MAJOR DISCUSSION POINT
AI-Ready Data Infrastructure and Standards
DISAGREED WITH
Mio Oka
Johanna Hill
3 arguments · 161 words per minute · 861 words · 319 seconds
Argument 1
AI and trade working together could grow trade by almost 40% by 2040, but requires digital infrastructure, skills, and policy readiness
EXPLANATION
Hill presents the significant opportunity for trade growth through AI adoption, projecting a 40% increase by 2040. However, she emphasizes that realizing this potential depends on having adequate digital infrastructure, skilled workforce, and appropriate policy frameworks in place.
EVIDENCE
WTO Secretariat projections showing the ’40 by 40 effect’ and survey with ICC showing businesses already using AI for market intelligence
MAJOR DISCUSSION POINT
Critical Gaps and Opportunities for Global South in AI
Argument 2
AI Trade Policy Openness Index developed to help regions measure performance, showing lower income economies can appear open but may lack necessary regulation
EXPLANATION
Hill explains that the WTO developed an index to measure how open countries are to AI trade policies. She notes that some lower-income economies may appear open simply because they lack regulation, which can actually be a disadvantage since customers want responsible AI with trust.
EVIDENCE
Publication of the AI Trade Policy Openness Index in the World Trade Report and findings about competing regulations creating high compliance costs
MAJOR DISCUSSION POINT
Trade Policy and Regional Cooperation
AGREED WITH
Mio Oka, Arndt Husar
Argument 3
Regional approaches important for sharing infrastructure and collaboration when lacking economies of scale for huge investments
EXPLANATION
Hill advocates for regional cooperation as a solution for countries that cannot make large-scale investments individually. She emphasizes that regional collaboration can help achieve economies of scale and share infrastructure costs.
EVIDENCE
Studies with the World Bank on Digital Trade in Africa showing the AfCFTA digital protocol made a difference, and a similar study in Latin America showing more diversity in needs
MAJOR DISCUSSION POINT
Trade Policy and Regional Cooperation
AGREED WITH
Mio Oka, Arndt Husar
Zuhriddin Shadmanov
6 arguments · 118 words per minute · 902 words · 457 seconds
Argument 1
Unequal access to compute capacities and advanced AI digital skills creates risk of nations becoming AI consumers rather than creators
EXPLANATION
Shadmanov identifies the fundamental challenge facing developing countries in AI development – the gap in access to computing resources and skilled talent. He warns that without addressing these gaps, countries will remain dependent on AI created elsewhere rather than developing their own capabilities.
MAJOR DISCUSSION POINT
Critical Gaps and Opportunities for Global South in AI
Argument 2
Government allocating $300 million USD for AI development, focusing on human-centered AI with projects in healthcare, education, transportation, and cybersecurity
EXPLANATION
Shadmanov outlines Uzbekistan’s comprehensive AI strategy with significant government investment. The approach emphasizes human-centered AI applications across critical sectors including healthcare, education, transportation, and cybersecurity.
EVIDENCE
AI Strategy 2030 adoption and specific allocation of $300 million for AI development projects
MAJOR DISCUSSION POINT
National AI Strategies and Investment Approaches
AGREED WITH
Dr. Saurabh Garg, Hamam Riza, Arndt Husar
Argument 3
Comprehensive approach covering all strata from students to professionals and public servants, not just tech sector
EXPLANATION
Shadmanov emphasizes that Uzbekistan’s AI skills development strategy goes beyond the technology sector to include all levels of society. This holistic approach aims to ensure widespread AI literacy and adoption across different professional domains.
MAJOR DISCUSSION POINT
Skills Development and Talent Building
AGREED WITH
Hamam Riza, Mio Oka, Arndt Husar
DISAGREED WITH
Hamam Riza
Argument 4
Planning to attract $1 billion USD by 2030 for AI-related digital infrastructure, working with Chinese partners like Huawei for ecosystem development
EXPLANATION
Shadmanov outlines Uzbekistan’s ambitious investment targets for AI infrastructure development. The strategy involves partnerships with major technology companies like Huawei to build comprehensive AI ecosystems including data centers and advanced telecommunications infrastructure.
EVIDENCE
Specific target of $1 billion investment by 2030 and partnership with Huawei for 5.5G and 6G development
MAJOR DISCUSSION POINT
Infrastructure Development and Private Sector Collaboration
Argument 5
Partnership with UAE’s ‘5 million AI leaders’ program, with over 1 million people already registered for training and certifications
EXPLANATION
Shadmanov highlights Uzbekistan’s international collaboration for AI skills development through partnership with the UAE. The program has achieved significant scale with over 1 million participants already engaged in AI training and certification programs.
EVIDENCE
Over 1 million people registered for the UAE’s ‘5 million AI leaders’ program
MAJOR DISCUSSION POINT
Skills Development and Talent Building
Argument 6
Building $5 billion energy-efficient data center with Saudi Arabian company DataVault, plus $200 million government data center with NVIDIA GPUs
EXPLANATION
Shadmanov describes Uzbekistan’s major infrastructure investments including a massive $5 billion energy-efficient data center project with international partners and a government-funded data center with advanced computing capabilities. These projects represent significant commitments to AI infrastructure development.
EVIDENCE
Specific partnerships with DataVault (Saudi Arabian company) for $5 billion project and $200 million government allocation for NVIDIA GPU-equipped data center
MAJOR DISCUSSION POINT
Infrastructure Development and Private Sector Collaboration
Hamam Riza
5 arguments · 92 words per minute · 1186 words · 771 seconds
Argument 1
Triple deficit exists: data and compute infrastructure gaps, connectivity issues, and shortage of AI skilled talents limiting long-term innovation capabilities
EXPLANATION
Riza identifies three critical deficits facing Indonesia and the global south in AI development: inadequate data and computing infrastructure, poor connectivity networks, and a significant shortage of AI-skilled professionals. These interconnected challenges limit the capacity for sustained innovation and AI adoption.
EVIDENCE
Mentions significant lack of high-quality localized data centers and shortage of AI skilled talents
MAJOR DISCUSSION POINT
Critical Gaps and Opportunities for Global South in AI
AGREED WITH
Dr. Saurabh Garg, Zuhriddin Shadmanov, Arndt Husar
DISAGREED WITH
Zuhriddin Shadmanov
Argument 2
National AI roadmap targeting 12 million AI talents by 2030, establishing Korika Academy for upskilling and reskilling programs
EXPLANATION
Riza outlines Indonesia’s ambitious talent development strategy with a target of training 12 million AI professionals by 2030. The Korika Academy serves as the primary vehicle for upskilling and reskilling programs, addressing the current shortage of 3-5 million AI talents.
EVIDENCE
Current shortage of 3-5 million AI talents and establishment of Korika Academy with learning management system and AI chatbot
MAJOR DISCUSSION POINT
Skills Development and Talent Building
AGREED WITH
Zuhriddin Shadmanov, Mio Oka, Arndt Husar
Argument 3
Pentahelix platform bringing together government, industry, academia, civil society, and media around three pillars: talent, infrastructure, and use cases
EXPLANATION
Riza describes Indonesia’s collaborative governance model that brings together five key stakeholders (government, industry, academia, civil society, and media) to work on AI development. The platform focuses on three core areas: talent development, infrastructure building, and practical use case implementation.
EVIDENCE
Establishment of the pentahelix platform and focus on climate health nexus including Climate Smart Indonesia program working with universities and NASA
MAJOR DISCUSSION POINT
National AI Strategies and Investment Approaches
Argument 4
Hyperscalers establishing multiple cloud regions in Indonesia, with government preparing presidential regulation for sovereign AI and special economic zones
EXPLANATION
Riza explains how major cloud service providers are expanding their presence in Indonesia while the government is developing regulatory frameworks for sovereign AI. The strategy includes creating special economic zones to attract hyperscalers and data center investments.
EVIDENCE
Multiple cloud regions established by global hyperscalers and upcoming presidential regulation for sovereign AI with special economic zones
MAJOR DISCUSSION POINT
Infrastructure Development and Private Sector Collaboration
Argument 5
Digital academy with learning management system and AI chatbot for training programs, working with Microsoft and other big tech companies
EXPLANATION
Riza describes the technical infrastructure of Indonesia’s AI education system, which includes a comprehensive digital learning platform with AI-powered assistance. The program involves partnerships with major technology companies to deliver training at scale.
EVIDENCE
Korika Academy with LMS, Kodika chatbot, and partnerships with Elevate Indonesia (Microsoft) and other big tech companies
MAJOR DISCUSSION POINT
Skills Development and Talent Building
Mio Oka
4 arguments · 143 words per minute · 664 words · 278 seconds
Argument 1
Foundation infrastructure important but service-level AI applications in agriculture, water supply, and irrigation offer more immediate impact given the scale
EXPLANATION
Oka argues that while foundational infrastructure like power supply and broadband are necessary, the real impact for large-scale economies like India comes from applying AI at the service level in sectors like agriculture and water management. This approach leverages existing scale to create immediate benefits.
EVIDENCE
ADB’s work in agriculture, water supply, and irrigation sectors where AI is widely applied
MAJOR DISCUSSION POINT
Critical Gaps and Opportunities for Global South in AI
AGREED WITH
Johanna Hill, Arndt Husar
DISAGREED WITH
Dr. Saurabh Garg
Argument 2
ADB focuses on mobilizing private capital through township planning, cross-sector applications, and knowledge sharing for strategy development
EXPLANATION
Oka explains ADB’s approach to supporting AI development through integrated planning that connects infrastructure projects with private sector investment opportunities. The strategy involves working across multiple sectors simultaneously and providing knowledge support for strategic planning.
EVIDENCE
Investment in township planning, connecting road and water projects to industrial parks, and working across sectors while building institutional capacity
MAJOR DISCUSSION POINT
Infrastructure Development and Private Sector Collaboration
Argument 3
ADB serves as catalyst for global south, supporting both large economies like India and ensuring smaller countries aren’t left behind
EXPLANATION
Oka describes ADB’s role as a regional development bank that must balance supporting large economies with significant AI potential while ensuring smaller countries in the region are not excluded from AI development opportunities. This requires different approaches for different country contexts.
EVIDENCE
Experience in a small neighboring country where AI-based aquaculture was rejected due to employment concerns, leading to a focus on skills development
MAJOR DISCUSSION POINT
Trade Policy and Regional Cooperation
AGREED WITH
Johanna Hill, Arndt Husar
Argument 4
ADB invested over $5 billion in skills development including AI-based skills as part of regional cooperation
EXPLANATION
Oka highlights ADB’s significant financial commitment to skills development across the region, with AI-based skills becoming an increasingly important component. This investment spans multiple countries and focuses on ensuring regional cooperation in capacity building.
EVIDENCE
Over $5 billion investment in skills including PM set program working across 10 states in India
MAJOR DISCUSSION POINT
Skills Development and Talent Building
AGREED WITH
Zuhriddin Shadmanov, Hamam Riza, Arndt Husar
Arndt Husar
5 arguments · 130 words per minute · 1910 words · 875 seconds
Argument 1
Digital infrastructure encompasses not just data centers and compute, but also solutions, standards, and skills – the three S’s introduced by ITU
EXPLANATION
Husar emphasizes that digital infrastructure should be viewed holistically, including not only the physical computing infrastructure but also the software solutions, technical standards, and human skills needed to make AI effective. This comprehensive view ensures all components of the AI ecosystem are addressed.
EVIDENCE
References ITU’s head introducing the three S’s of solutions, standards, and skills
MAJOR DISCUSSION POINT
Comprehensive Digital Infrastructure Framework
AGREED WITH
Dr. Saurabh Garg, Zuhriddin Shadmanov, Hamam Riza
Argument 2
SMEs face a huge adoption gap in understanding how to integrate AI into their business models despite technology advancing rapidly
EXPLANATION
Husar identifies a critical challenge where small and medium enterprises are being left behind in AI adoption. While the technology has advanced quickly, there’s a significant gap in understanding how SMEs can practically integrate AI tools into their daily operations and business processes.
EVIDENCE
Example of a small shop that could benefit from AI for product photography and uploading but hasn’t yet reached that level of adoption
MAJOR DISCUSSION POINT
AI Adoption Challenges for SMEs
AGREED WITH
Zuhriddin Shadmanov, Hamam Riza, Mio Oka
Argument 3
Countries need to balance different types of compute infrastructure based on actual needs rather than pursuing high-end solutions universally
EXPLANATION
Husar advocates for a nuanced approach to AI infrastructure investment, suggesting that not all applications require the most advanced computing power. Countries should assess what type of compute capacity is actually needed for their specific use cases rather than defaulting to the most expensive options.
EVIDENCE
Reference to Telangana state’s approach of recognizing that ‘not many kids fit into a Ferrari’ – meaning different solutions for different needs
MAJOR DISCUSSION POINT
Strategic Infrastructure Investment Approaches
AGREED WITH
Johanna Hill, Mio Oka
Argument 4
Regional cooperation and infrastructure sharing is still new territory with initiatives like the working group on democratization of AI compute being supported by ADB
EXPLANATION
Husar highlights that cross-border collaboration on AI infrastructure is an emerging area that hasn’t fully evolved yet. He mentions new initiatives focused on democratizing access to AI compute resources through regional cooperation, though the mechanisms for sharing infrastructure across borders are still being developed.
EVIDENCE
Mentions ADB’s support for the working group on democratization of AI compute
MAJOR DISCUSSION POINT
Regional Cooperation in AI Infrastructure
AGREED WITH
Johanna Hill, Mio Oka
Argument 5
ADB is learning alongside others in the AI space, particularly around understanding what type of compute is needed and how to make investments sustainable
EXPLANATION
Husar acknowledges that development banks like ADB are also in a learning phase when it comes to AI infrastructure investments. They are working to understand the nuances of different compute requirements and how to structure financing that ensures long-term sustainability of AI investments.
EVIDENCE
Mentions receiving demand for data center projects and learning from conversations with governments about different infrastructure needs
MAJOR DISCUSSION POINT
Development Bank Learning and Adaptation
Agreements
Agreement Points
Critical importance of skills development and capacity building for AI adoption
Speakers: Zuhriddin Shadmanov, Hamam Riza, Mio Oka, Arndt Husar
Comprehensive approach covering all strata from students to professionals and public servants, not just the tech sector
National AI roadmap targeting 12 million AI talents by 2030, establishing Korika Academy for upskilling and reskilling programs
ADB invested over $5 billion in skills development including AI-based skills as part of regional cooperation
SMEs face a huge adoption gap in understanding how to integrate AI into their business models despite technology advancing rapidly
All speakers emphasized that skills development is fundamental to AI success, requiring comprehensive programs that go beyond just technical sectors to include all levels of society and business
Need for comprehensive infrastructure development beyond just compute power
Speakers: Dr. Saurabh Garg, Zuhriddin Shadmanov, Hamam Riza, Arndt Husar
Four key elements for AI-ready data: discoverability through metadata structure, trustworthiness through quality assessment, interoperability through unique identifiers, and usability through common standards and classifications
Government allocating $300 million USD for AI development, focusing on human-centered AI with projects in healthcare, education, transportation, and cybersecurity
Triple deficit exists: data and compute infrastructure gaps, connectivity issues, and shortage of AI skilled talents limiting long-term innovation capabilities
Digital infrastructure encompasses not just data centers and compute, but also solutions, standards, and skills – the three S’s introduced by ITU
Speakers agreed that AI infrastructure requires a holistic approach including data governance, standards, connectivity, and human capacity, not just computing power
Importance of regional cooperation and collaboration for AI development
Speakers: Johanna Hill, Mio Oka, Arndt Husar
Regional approaches important for sharing infrastructure and collaboration when lacking economies of scale for huge investments
ADB serves as catalyst for global south, supporting both large economies like India and ensuring smaller countries aren’t left behind
Regional cooperation and infrastructure sharing is still new territory with initiatives like the working group on democratization of AI compute being supported by ADB
All speakers recognized that regional cooperation is essential for countries that cannot achieve economies of scale individually, particularly for infrastructure sharing and ensuring no country is left behind
Need for balanced approach considering different country contexts and capabilities
Speakers: Johanna Hill, Mio Oka, Arndt Husar
AI Trade Policy Openness Index developed to help regions measure performance, showing lower income economies can appear open but may lack necessary regulation
Foundation infrastructure important but service-level AI applications in agriculture, water supply, and irrigation offer more immediate impact given the scale
Countries need to balance different types of compute infrastructure based on actual needs rather than pursuing high-end solutions universally
Speakers agreed that AI strategies must be tailored to different country contexts, capabilities, and actual needs rather than applying one-size-fits-all approaches
Similar Viewpoints
Both countries are pursuing ambitious infrastructure development strategies involving international partnerships with major technology companies and creating special economic frameworks to attract investment
Speakers: Zuhriddin Shadmanov, Hamam Riza
Planning to attract $1 billion USD by 2030 for AI-related digital infrastructure, working with Chinese partners like Huawei for ecosystem development
Hyperscalers establishing multiple cloud regions in Indonesia, with government preparing presidential regulation for sovereign AI and special economic zones
Both countries are leveraging international partnerships and digital platforms to scale AI education and training programs, working with major technology companies to build capacity
Speakers: Zuhriddin Shadmanov, Hamam Riza
Partnership with UAE's '5 million AI leaders' program, with over 1 million people already registered for training and certifications
Digital academy with learning management system and AI chatbot for training programs, working with Microsoft and other big tech companies
Both speakers emphasized the importance of having proper regulatory frameworks and governance mechanisms in place, recognizing that lack of regulation can be as problematic as over-regulation
Speakers: Dr. Saurabh Garg, Johanna Hill
Data dissemination and access mechanisms needed while preserving privacy aspects of individual data
AI Trade Policy Openness Index developed to help regions measure performance, showing lower income economies can appear open but may lack necessary regulation
Unexpected Consensus
Energy efficiency concerns in AI infrastructure
Speakers: Dr. Saurabh Garg, Zuhriddin Shadmanov
Current AI models are extremely infrastructure-heavy, requiring gigawatts of power compared to the human brain's 100 watts, suggesting the need for alternative mechanisms
Building $5 billion energy-efficient data center with Saudi Arabian company DataVault, plus $200 million government data center with NVIDIA GPUs
It was unexpected to see a technical expert and a government official from a developing country alike emphasizing energy efficiency in AI infrastructure, showing that environmental concerns are becoming mainstream in AI planning
Recognition that AI may not be the solution for everything
Speakers: Mio Oka, Arndt Husar
ADB serves as catalyst for global south, supporting both large economies like India and ensuring smaller countries aren't left behind
ADB is learning alongside others in the AI space, particularly around understanding what type of compute is needed and how to make investments sustainable
Unexpected consensus from development finance perspective that AI adoption must be carefully considered against actual needs and employment impacts, rather than pursuing AI for its own sake
Overall Assessment

Strong consensus emerged around the need for comprehensive, multi-faceted approaches to AI development that include skills, infrastructure, governance, and regional cooperation. All speakers recognized that successful AI adoption requires more than just technical infrastructure.

High level of consensus with complementary perspectives rather than conflicting views. This suggests a mature understanding of AI development challenges and opportunities, with implications for coordinated policy approaches and international cooperation frameworks.

Differences
Different Viewpoints
Infrastructure investment priorities and approaches
Speakers: Dr. Saurabh Garg, Mio Oka
Current AI models are extremely infrastructure-heavy, requiring gigawatts of power compared to the human brain's 100 watts, suggesting the need for alternative mechanisms
Foundation infrastructure important, but service-level AI applications in agriculture, water supply, and irrigation offer more immediate impact given the scale
Dr. Garg questions the efficiency of current AI infrastructure and suggests exploring alternative mechanisms due to massive power consumption, while Mio Oka argues that despite foundational infrastructure being important, the focus should be on service-level applications for immediate impact
Scale and scope of AI development strategies
Speakers: Zuhriddin Shadmanov, Hamam Riza
Comprehensive approach covering all strata from students to professionals and public servants, not just the tech sector
Triple deficit exists: data and compute infrastructure gaps, connectivity issues, and shortage of AI skilled talents limiting long-term innovation capabilities
Shadmanov emphasizes a broad societal approach covering all sectors beyond technology, while Riza focuses on addressing specific technical deficits in infrastructure and skills that limit innovation capabilities
Unexpected Differences
Role of employment considerations in AI adoption
Speakers: Mio Oka, Other panelists
ADB serves as catalyst for global south, supporting both large economies like India and ensuring smaller countries aren't left behind
Various arguments about AI development and adoption
Mio Oka’s anecdote about AI-based aquaculture being rejected due to employment concerns reveals an unexpected disagreement about whether AI adoption should prioritize efficiency or employment preservation, which other speakers didn’t directly address in their technology-focused approaches
Overall Assessment

The main areas of disagreement center around infrastructure investment priorities (efficiency vs. immediate impact), the scope of AI development strategies (broad societal vs. technical focus), and approaches to international cooperation (regional frameworks vs. bilateral partnerships vs. sovereign development)

Moderate level of disagreement with significant implications – while speakers share common goals of AI development for the Global South, their different approaches could lead to fragmented strategies and inefficient resource allocation if not coordinated properly

Partial Agreements
All speakers agree on the importance of international partnerships and regional cooperation for AI development, but they disagree on the specific mechanisms – Hill emphasizes regional trade policy frameworks, Shadmanov focuses on bilateral partnerships with specific countries, and Riza emphasizes sovereign AI with special economic zones
Speakers: Johanna Hill, Zuhriddin Shadmanov, Hamam Riza
Regional approaches important for sharing infrastructure and collaboration when lacking economies of scale for huge investments
Planning to attract $1 billion USD by 2030 for AI-related digital infrastructure, working with Chinese partners like Huawei for ecosystem development
Hyperscalers establishing multiple cloud regions in Indonesia, with government preparing presidential regulation for sovereign AI and special economic zones
Both agree on the need for massive government investment in AI skills development, but disagree on approach – Shadmanov focuses on human-centered AI across government sectors, while Riza emphasizes ambitious talent targets through dedicated academy structures
Speakers: Zuhriddin Shadmanov, Hamam Riza
Government allocating $300 million USD for AI development, focusing on human-centered AI with projects in healthcare, education, transportation, and cybersecurity
National AI roadmap targeting 12 million AI talents by 2030, establishing Korika Academy for upskilling and reskilling programs
Both agree on the importance of balancing data accessibility with protection, but disagree on focus – Dr. Garg emphasizes technical mechanisms for data access while preserving privacy, while Hill focuses on regulatory frameworks and trade policy measures
Speakers: Dr. Saurabh Garg, Johanna Hill
Data dissemination and access mechanisms needed while preserving privacy aspects of individual data
AI Trade Policy Openness Index developed to help regions measure performance, showing lower income economies can appear open but may lack necessary regulation
Takeaways
Key takeaways
AI infrastructure development requires a balanced approach across four key pillars: AI-ready data (with proper metadata, quality assessment, interoperability, and standards), compute infrastructure, skills development, and policy frameworks
Global South countries face a 'triple deficit' in data/compute infrastructure, connectivity, and AI skilled talent, risking becoming AI consumers rather than creators without strategic intervention
AI and trade integration could grow global trade by 40% by 2040, but success depends on digital infrastructure readiness, skills development, and appropriate policy frameworks
Current AI models are extremely energy-intensive (requiring gigawatts vs. the human brain's 100 watts), suggesting the need for more efficient alternative approaches
Regional cooperation and collaboration are essential for smaller economies to share infrastructure costs and avoid being left behind, though approaches must be context-specific
Private sector collaboration and mixed financing strategies (combining government investment, international partnerships, and private capital) are crucial for scaling AI infrastructure
Skills development must be comprehensive, covering all sectors and skill levels rather than focusing only on technical specialists
Service-level AI applications in sectors like agriculture, healthcare, and water supply may offer more immediate impact than foundational infrastructure investments for developing countries
Resolutions and action items
Uzbekistan committed to allocating $300 million for AI development and targeting $1 billion in AI infrastructure investment by 2030
Indonesia established a target of training 12 million AI talents by 2030 through the Korika Academy platform
Uzbekistan launched a partnership with UAE's '5 million AI leaders' program, with over 1 million people already registered
Indonesia is preparing a presidential regulation for sovereign AI development and special economic zones for hyperscalers
ADB committed to continuing support for regional cooperation as a catalyst for Global South AI development
WTO Secretariat developed the AI Trade Policy Openness Index to help regions measure their performance in AI trade readiness
Unresolved issues
How to effectively balance competing priorities of infrastructure, skills, and policy development with limited financial resources
Specific mechanisms for cross-border collaboration on shared AI infrastructure and compute resources remain underdeveloped
The challenge of ensuring AI adoption doesn't reduce employment opportunities, particularly in agriculture and traditional sectors
How to develop more energy-efficient AI models that don't require massive compute and power resources
Standardization and interoperability challenges across different national AI strategies and regional approaches
How smaller economies without scale advantages can meaningfully participate in AI value creation rather than just consumption
Balancing data sovereignty concerns with the need for cross-border data sharing and collaboration
Suggested compromises
Mixed financing strategies combining government investment, international partnerships, and private sector capital to address resource constraints
Regional cooperation approaches that respect national sovereignty while enabling shared infrastructure and standards
Phased implementation focusing on service-level applications while building foundational infrastructure over time
Pentahelix collaboration model (government, industry, academia, civil society, media) to ensure inclusive stakeholder participation
Differentiated AI strategies that don't require every country to compete at the cutting edge but enable meaningful participation
Focus on human-centered AI development that considers employment impacts alongside technological advancement
Thought Provoking Comments
When we talk in terms of AI infrastructure, we talk in terms of gigawatts of power. Compared to that, a human being requires 2,000 calories, which is only 100 watts. So are we missing something out there in the infrastructure?
This comment fundamentally challenges the current paradigm of AI infrastructure development by highlighting the massive inefficiency compared to human intelligence. It questions whether the industry is pursuing the right technological path and suggests there might be alternative approaches that are more energy-efficient.
This observation set a critical tone for the entire discussion, introducing the concept that current AI infrastructure might be fundamentally flawed. It influenced subsequent speakers to consider efficiency and sustainability in their infrastructure strategies, moving beyond just scaling up compute power.
Speaker: Dr. Saurabh Garg (quoting Vishal Sikka)
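The wattage figure in this quote can be verified with a quick back-of-envelope conversion, assuming the 2,000 kcal is a daily intake spread over 24 hours:

```python
# Convert a daily intake of 2,000 kilocalories into average power in watts.
KCAL_TO_JOULES = 4184            # 1 kilocalorie = 4,184 joules
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds

energy_joules = 2000 * KCAL_TO_JOULES          # ~8.37 million joules per day
power_watts = energy_joules / SECONDS_PER_DAY  # average power over the day

print(round(power_watts))  # ~97 W, close to the "100 watts" cited
```

The contrast the speaker draws is thus roughly seven orders of magnitude: a gigawatt-scale AI data center against a sub-100-watt human brain and body.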
For that to happen, for those opportunities to really be realized, one element that is really important is the digital infrastructure, the skills that you mentioned, and policy readiness… just not having regulation can also be a disadvantage to your competitiveness.
This comment introduces a nuanced perspective that challenges the common assumption that less regulation equals more competitiveness. It suggests that in the AI space, appropriate regulation actually enhances competitiveness by building trust and enabling responsible AI trade.
This shifted the discussion from viewing regulation as a barrier to seeing it as an enabler of competitive advantage. It influenced how other panelists discussed their national strategies, emphasizing the importance of balanced policy frameworks rather than just infrastructure investment.
Speaker: Johanna Hill
We need to move beyond numbers… global south they are basically triple deficit in terms of… data and infrastructures, the compute infrastructures… AI talents… and digital divide and AI divide
This comment reframes the challenge from simple economic metrics to a more complex, interconnected set of deficits. The ‘triple deficit’ concept provides a structured way to understand why the Global South faces unique challenges in AI adoption, moving beyond surface-level solutions.
This conceptual framework influenced how other speakers approached their responses, leading them to address multiple dimensions simultaneously rather than focusing on single solutions. It elevated the discussion from tactical to strategic thinking about comprehensive ecosystem development.
Speaker: Hamam Riza
Do we expect India to put so much money on foundation to have a ground-level impact? But India has a scale. So what we need to focus is… service… Because of the scale of the people that we have in the global south, while we work on the foundational infrastructure, at the same time we really have to work on how AI can be applied at the service level.
This comment challenges the conventional wisdom of building infrastructure first, then applications. It suggests that for large-scale economies, parallel development of services alongside infrastructure might be more effective, leveraging existing scale advantages.
This perspective shifted the discussion toward more pragmatic, parallel approaches rather than sequential development strategies. It influenced other speakers to consider how their countries could leverage existing advantages while building foundational capabilities.
Speaker: Mio Oka
I was proudly introducing, I want to introduce aquaculture using the AI-based fish feeding system. And my negotiation ended in three seconds because the government said, no, we are interested in employment… So that is a big lesson learned for me. We need an ecosystem, but even we talk about AI, the solution may not be elsewhere.
This anecdote powerfully illustrates the disconnect between technological solutions and actual development priorities. It highlights how AI implementation must consider broader socio-economic contexts, particularly employment concerns in developing countries.
This story fundamentally grounded the entire discussion in reality, reminding all participants that AI solutions must align with local development priorities. It served as a cautionary tale that influenced the closing tone of the discussion, emphasizing that AI is not automatically the right solution for every problem.
Speaker: Mio Oka
Overall Assessment

These key comments collectively transformed what could have been a typical technology-focused discussion into a more nuanced, critical examination of AI infrastructure development in the Global South. Dr. Garg’s opening challenge about energy efficiency set a questioning tone that permeated the entire conversation. Hill’s insights about regulation as competitive advantage and Riza’s ‘triple deficit’ framework provided analytical depth, while Oka’s practical experiences – both the scale argument and the fish farming anecdote – grounded the discussion in real-world constraints and priorities. Together, these comments shifted the conversation from simple infrastructure scaling to comprehensive ecosystem thinking, from technology-first to human-centered approaches, and from universal solutions to context-sensitive strategies. The discussion evolved from optimistic opportunity-focused opening statements to a more mature understanding of the complex trade-offs and multi-dimensional challenges facing AI development in emerging economies.

Follow-up Questions
Are there alternative mechanisms to current AI models, which must run through billions of bytes and consume gigawatts of power for every new query?
This addresses the infrastructure-heavy nature of existing AI models and explores whether more efficient alternatives exist, given that humans only require 100 watts compared to gigawatts needed for AI infrastructure
Speaker: Dr. Saurabh Garg
How can we ensure that data sets are discoverable, trustworthy, interoperable, and usable across systems in a federated manner?
This is fundamental to making data AI-ready and involves developing proper metadata structures, quality assessment frameworks, unique identifiers, and common standards and classifications
Speaker: Dr. Saurabh Garg
What kind of dissemination and access mechanisms can make data sets usable for business while preserving privacy aspects of individual data?
This addresses the challenge of balancing data utility for AI and business applications with privacy protection requirements
Speaker: Dr. Saurabh Garg
How can SMEs bridge the adoption gap and integrate AI into their business models effectively?
There’s a significant gap between AI technology advancement and SME understanding of how to practically implement AI in their operations, such as simple applications like product photography and uploading
Speaker: Arndt Husar
How do countries balance priority setting across infrastructure, policy, and skills development when resources are scarce?
This addresses the practical challenge developing nations face in allocating limited resources across multiple critical AI development areas simultaneously
Speaker: Arndt Husar
How can cross-border collaboration be supported while respecting data sovereignty, especially for countries that don’t have the scale to develop their own AI capabilities?
This explores the tension between the need for international cooperation in AI development and countries’ desires to maintain control over their data and AI capabilities
Speaker: Arndt Husar
How can the democratization of AI compute infrastructure work in practice across borders?
This refers to new initiatives for sharing AI infrastructure internationally, which is still evolving territory that needs further development and research
Speaker: Arndt Husar
What type of compute is actually needed for different AI applications and how can investment in AI infrastructure be made sustainable?
This addresses the need to match AI infrastructure investments with actual requirements rather than pursuing one-size-fits-all solutions, influencing financing decisions
Speaker: Arndt Husar
How can the pentahelix platform model (government, industry, academia, civil society, media) be effectively implemented to create AI ecosystems?
This explores the practical mechanisms for bringing together diverse stakeholders to collaborate on AI development and implementation
Speaker: Hamam Riza
How can countries develop AI solutions that are culturally aligned, particularly for large language models?
This addresses the need for AI systems that reflect local cultures and contexts rather than relying solely on foreign-developed models
Speaker: Hamam Riza

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

The Foundation of AI: Democratizing Compute and Data Infrastructure

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on democratizing AI access and compute power, particularly for developing countries and underrepresented communities. The panel, moderated by Faith Waidaka and featuring experts from various organizations including the World Bank, Gates Foundation, and academic institutions, identified five key barriers to AI democratization: energy access, computing power, data access, talent building, and responsible AI frameworks.


The panelists highlighted significant data inequality, with over 80% of global datasets skewed toward developed countries and less than 2% representing sub-Saharan Africa. Yann LeCun emphasized the importance of open-source AI models and proposed federated learning approaches that would allow regions to contribute training data while maintaining ownership. He also discussed the future evolution from current large language models to more intelligent “world models” that understand the physical world rather than just accumulating knowledge.


Saurabh Garg and Sanjay Jain advocated for digital public infrastructure (DPI) as a foundation for AI democratization, emphasizing the need for trusted, interoperable systems that give users agency rather than just access. They proposed building modular platforms that countries can adapt to their specific needs while maintaining data sovereignty.


Chenai Chair stressed the importance of community participation and ownership, drawing from the Masakhane project’s success in African language processing through grassroots collaboration. The discussion emphasized that democratizing AI requires simultaneous investment in multiple areas: talent development, use case creation, open models, computing infrastructure, and community empowerment. The panelists agreed that sustainable AI democratization must be participatory, meeting communities where they are and addressing their specific needs rather than imposing top-down solutions.


Keypoints

Major Discussion Points

Barriers to AI Democratization: The panel identified key obstacles including lack of access to computing power, heavily skewed datasets (80% from developed countries, less than 2% from sub-Saharan Africa), limited access to open models, and insufficient AI literacy in underserved regions.


The Critical Role of Data Sovereignty and Local Context: Speakers emphasized that while computing infrastructure may be unequally distributed, local communities can maintain ownership and control of their cultural data and context, which represents a significant opportunity for creating more inclusive AI systems.


Digital Public Infrastructure (DPI) as an Enabler: Discussion focused on how DPI can provide trusted, interoperable systems that allow countries to be co-creators rather than just consumers of AI, with examples like India’s Aadhaar system and open-source platforms like MOSIP.


Community-Centered Approaches: Using examples like the Masakhane African Languages Hub, panelists highlighted the importance of participatory, community-driven AI development that meets people where they are and addresses their specific needs rather than imposing external solutions.


The Future of AI Architecture: Yann LeCun presented a vision of the next AI revolution moving beyond large language models to “world models” that understand the physical world, potentially requiring less training compute but more inference power, fundamentally changing the infrastructure requirements.


Overall Purpose

The discussion aimed to explore practical strategies for democratizing AI access globally, particularly for underserved regions and communities. The panel sought to move beyond theoretical concepts to identify concrete approaches for ensuring that AI development is inclusive, community-driven, and empowering rather than extractive.


Overall Tone

The tone was collaborative and solution-oriented throughout, with speakers building on each other’s ideas rather than debating. There was a sense of urgency about addressing current inequalities in AI access, but also optimism about emerging opportunities. The conversation maintained a balance between technical depth and accessibility, with speakers drawing from diverse perspectives (academic, governmental, community-based, and industry) to create a comprehensive view of the challenges and potential solutions.


Speakers

Speakers from the provided list:


Sangbu Kim: World Bank representative, focuses on AI democratization and computing power access


Faith Waidaka: Panel moderator, builds electrical and mechanical infrastructure in data centers in Africa, Board Chair of the Africa Data Center Association


Yann LeCun: Executive Chairman of AMI Labs (Advanced Machine Intelligence Labs), Professor at New York University, former Chief AI Scientist at Meta (12 years)


Sanjay Jain: Leads the digital public infrastructure team at the Gates Foundation


Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India


Chenai Chair: Director of Masakhane African Languages Hub, focuses on African language NLP


Arun Sharma: Works with the World Bank


Audience: Various unidentified audience members who asked questions


Additional speakers:


Daniel Dobos: Particle physicist from CERN, Research Director for Swisscom


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion on democratising AI access and computing power brought together diverse stakeholders to address one of the most pressing challenges in global technology development. The panel was moderated by Faith Waidaka, board chair of the Africa Data Center Association, and featured Yann LeCun (who left Meta just a month ago after 12 years as chief AI scientist and now leads Advanced Machine Intelligence Labs), Sangbu Kim from the World Bank, Saurabh Garg from India’s Ministry of Statistics and Program Implementation, Sanjay Jain from the Gates Foundation, and Chenai Chair from the Masakhane African Languages Hub.


The Scale of the Challenge

Sangbu Kim from the World Bank opened by outlining five critical barriers to AI democratisation: access to energy, computing power, data access, talent building, and credible responsible AI framework and policy. He presented stark statistics on global data inequality, noting that over 80% of the world’s datasets are concentrated in developed, high-income countries, while less than 2% represents sub-Saharan Africa. When South Africa is excluded, the representation drops to virtually zero for the rest of sub-Saharan Africa.


The panellists identified different primary barriers reflecting their diverse perspectives. Chenai Chair emphasised linguistic diversity, noting over 2,000 documented languages on the African continent alone. Saurabh Garg focused on access to open models and AI literacy as fundamental barriers, arguing that while infrastructure might be acquired over time, the focus should be on models and capabilities. Sanjay Jain stressed the importance of personal data accessibility through protected means, while Yann LeCun highlighted the concentration of high-quality data in proprietary systems.


Rethinking AI Architecture and Compute Requirements

Yann LeCun provided a fundamental critique of current AI approaches, arguing that today’s large language models are essentially “knowledge storage systems” that accumulate factual information, requiring enormous computational resources because they store rather than process information intelligently. As he memorably put it, “your house cat is smarter than the biggest LLMs” in terms of understanding the physical world.


LeCun outlined his vision for the next AI revolution centered on “world models” – systems that understand the real world through sensory input rather than just manipulating text. These systems would predict consequences of actions before taking them, enabling genuine planning and reasoning. He explained that such systems could be smaller in training requirements while potentially being more computationally intensive during inference, shifting demands from centralized training facilities to distributed inference systems.


He provided a compelling comparison: LLMs are trained on approximately 10^14 bytes of text data, representing roughly half a million years of human reading. In contrast, a four-year-old child receives the same amount of visual data in just 16,000 hours of being awake through their visual cortex processing 2 megabytes per second, yet develops superior understanding of the physical world. LeCun gave the example of smart glasses for farmers in rural India that could identify crop diseases – something requiring real-world understanding rather than text manipulation.
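LeCun's data-volume comparison checks out arithmetically; a rough calculation under the figures given in the talk (2 megabytes per second through the visual system, 16,000 waking hours):

```python
# Check LeCun's comparison: visual data reaching a four-year-old's cortex
# versus the ~1e14 bytes of text used to train a large language model.
BYTES_PER_SECOND = 2_000_000   # ~2 megabytes per second, per the talk
HOURS_AWAKE = 16_000           # roughly four years of waking hours

visual_bytes = BYTES_PER_SECOND * HOURS_AWAKE * 3600  # seconds per hour

print(f"{visual_bytes:.1e}")  # ~1.2e14 bytes, the same order as LLM training text
```

In other words, a small child receives in four years roughly the same raw data volume that an LLM ingests from "half a million years of human reading," which is the core of his argument that world models learn from a far richer channel.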


Community-Driven Development Models

Chenai Chair presented the Masakhane model, demonstrating how community-driven approaches can succeed without formal funding. Masakhane, meaning “we build together” in isiZulu, emerged as a grassroots community focused on African language natural language processing. This volunteer-driven initiative won a Wikimedia Award in 2021, showing that community ownership can produce significant results.


Chair explained that Masakhane started in 2019-2020 using data from Jehovah’s Witness Bible translations, one of the few available sources for African languages at the time. The approach emphasizes participatory design responding to community realities, ensures contributors are recognized as co-authors on research papers, and meets communities where they are.


She outlined Project Echo as an example of gender-responsive AI development, created in partnership with the Gates Foundation and IDRC. This initiative focuses on AI tools addressing women’s economic empowerment and health needs while incorporating African languages, explicitly acknowledging gendered inequalities and designing interventions to improve outcomes.


Chair also described community network models where residents build their own internet infrastructure, including transmission masts, local content creation, and power management systems, with community members establishing charging stations in their homes.


Digital Public Infrastructure as an Enabler

Sanjay Jain and Saurabh Garg presented digital public infrastructure (DPI) as a foundational layer for AI democratisation. Garg mentioned participating in a “democratizing AI working group” and introduced the METRI platform (Multi-stakeholder AI for Trusted and Resilient Infrastructure), designed as a digital public good supporting the four key AI components: compute, data, models, and talent.


Their approach emphasizes creating systems that provide agency, enabling people to be co-creators rather than consumers. The Indian experience with systems like Aadhaar provides one model, but the global approach focuses on open-source platforms like MOSIP (Modular Open Source Identity Platform) that countries can adapt. Ethiopia’s FIDA system, based on MOSIP but customized locally, demonstrates this modular approach. The Gates Foundation has supported OpenG2P for government payments and Digit for healthcare campaigns, following the principle of building open-source tools for community adoption.


Jain described systems where “the data never goes to the model, but the model comes to the data” through federated learning approaches, combined with India’s proposed Data Empowerment and Protection Architecture to maintain individual and community control over personal information.


Economic Realities and Infrastructure Challenges

Faith Waidaka, despite her role building physical data center infrastructure across Africa, acknowledged that democratising AI requires addressing multiple interconnected elements simultaneously. Sangbu Kim provided an economic perspective, arguing that creating demand for computing power through practical applications is more important than simply building physical infrastructure. Without clear use cases adding value, he argued, nobody can sustainably operate data center businesses in Africa.


Yann LeCun offered a realistic assessment of efficiency improvements, noting that while industry has strong incentives to reduce power consumption, progress is happening “as fast as it can, and it’s not fast enough.” He suggested dramatic breakthroughs won’t occur until moving beyond CMOS transistors and silicon-based computing, which he estimated won’t happen for 10-20 years. However, Waidaka challenged this timeline, suggesting ten years represents significant time given AI’s rapid recent evolution.
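One of the efficiency levers behind the incentives LeCun describes, routing easy queries to a small model and escalating only hard ones, can be sketched as a minimal cascade. The two "models", their confidence scores, and the 0.8 threshold below are hypothetical stand-ins, not anything the panel specified:

```python
# Illustrative model cascade: answer easy queries with a cheap model and
# escalate to an expensive one only when the cheap model is unsure.

def small_model(query):
    """Cheap model: confident only on short, simple queries (toy heuristic)."""
    confidence = 0.9 if len(query.split()) <= 4 else 0.4
    return f"small-answer:{query}", confidence

def large_model(query):
    """Expensive fallback model; assumed to always produce an answer."""
    return f"large-answer:{query}", 1.0

def cascade(query, threshold=0.8):
    """Try the cheap model first; escalate only if its confidence is low."""
    answer, confidence = small_model(query)
    if confidence >= threshold:
        return answer, "small"   # most traffic stops here, saving power
    answer, _ = large_model(query)
    return answer, "large"

print(cascade("what is DPI"))   # short query: handled by the small model
print(cascade("explain federated learning across many regions"))  # escalated
```

In production systems the same idea appears as distillation and mixture-of-experts routing; the saving comes from the fact that inference cost is dominated by how often the large model actually runs.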


Investment Priorities: The $500 Million Question

When Waidaka asked how they would deploy $500 million for AI democratisation, panellists revealed different priorities. Sanjay Jain noted this represents approximately what’s needed to deploy DPI globally, focusing on digitizing health records, identity systems, and foundational data infrastructure enabling people to participate in AI with appropriate protections.


Sangbu Kim emphasized developing practical use cases and changing user mindsets, particularly among populations who “don’t know what they don’t know” about AI capabilities. Saurabh Garg prioritized capability development and domain-specific models requiring less power than large language models. Chenai Chair advocated for investment across the entire value chain: open models, talent development from technical builders to end-user capacity building, and creating ecosystems where people are excited rather than fearful about technological innovation.


Technical Pathways and Collaboration

The discussion explored federated learning as a pathway for regions to contribute cultural and linguistic data to global AI models while maintaining ownership. LeCun explained this involves exchanging parameter vectors rather than raw data, allowing regions to contribute to model training without sharing sensitive information.
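The mechanism described here can be illustrated with a minimal federated-averaging loop: each region fits a shared model on its private data and only the parameter vector travels to the server. The two regional datasets, the one-dimensional linear model, and all hyperparameters below are purely illustrative:

```python
# Sketch of federated averaging (FedAvg): regions train locally and share
# only parameter vectors, never their raw data.

def local_update(params, data, lr=0.01, steps=50):
    """One region refines the shared parameters on its private data via SGD."""
    w, b = params
    for _ in range(steps):
        for x, y in data:
            err = (w * x + b) - y   # prediction error on a local example
            w -= lr * err * x       # gradient step; the example never leaves
            b -= lr * err
    return (w, b)

def federated_round(global_params, regional_datasets):
    """Server averages the regions' parameter vectors; no raw data moves."""
    updates = [local_update(global_params, d) for d in regional_datasets]
    w_avg = sum(u[0] for u in updates) / len(updates)
    b_avg = sum(u[1] for u in updates) / len(updates)
    return (w_avg, b_avg)

# Two hypothetical regions whose private data happens to follow y = 2x + 1.
region_a = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
region_b = [(3.0, 7.0), (4.0, 9.0)]

params = (0.0, 0.0)
for _ in range(100):
    params = federated_round(params, [region_a, region_b])
print(params)  # approaches (2.0, 1.0) without either dataset leaving home
```

Real deployments add secure aggregation and weighting by dataset size, but the ownership property is the same: the global model improves while each region keeps custody of its data.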


Several existing country-specific AI initiatives could form collaboration foundations. Groups in Switzerland (EPFL and ETH Zurich), the UAE (MBZUAI), Korea, and other countries have developed language models and could potentially join forces for more comprehensive global systems. However, organizational challenges remain significant, as highlighted by an audience member from CERN who noted that while federated learning is technically feasible, collaboration architecture between countries requires careful design.


Addressing Fears and Building Trust

Chair noted that in South African contexts, dominant AI narratives focus on job displacement, creating fear rather than excitement. This highlights the importance of designing AI systems that demonstrably improve lives rather than threatening livelihoods. The community-driven approach offers a model for building trust through participation and ownership, while gender-responsive design explicitly acknowledges existing inequalities and designs interventions to improve outcomes for marginalized groups.


Audience Engagement and Future Directions

The Q&A session revealed ongoing challenges around coordination mechanisms for global-scale federated learning, particularly balancing different countries’ interests while maintaining technical coherence. The transition from current LLM-based systems to world model architectures will require significant research investment, much of it currently happening in academic rather than industry settings.


LeCun joked about being “very unpopular in Silicon Valley” for his critiques of current approaches, but emphasized that his proposed federated learning model could create AI systems superior to current proprietary models by accessing diverse global data no single company could obtain.


Conclusion

This discussion moved beyond simple questions of access to more sophisticated considerations of agency, ownership, and sustainable development. The convergence of perspectives from infrastructure builders, community organizers, government officials, researchers, and international development practitioners created a comprehensive framework for understanding both challenges and opportunities in democratising AI.


The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI development need not follow centralized patterns. However, realizing this potential requires sustained effort across technical research, policy development, community organizing, and international cooperation. The path forward requires simultaneous investment in talent development, practical use cases, open models and federated infrastructure, and trust-building through participatory design processes.


Session transcriptComplete transcript of the session
Sangbu Kim

access and energy. Number two, computing power. Number three, data access. Number four, talent building. And number five, a credible, responsible AI framework and policy. Among those five, everything is very important, but we are currently struggling with a lack of access to computing power and data sets. So that’s why today’s discussion is very important. Unfortunately, more than 80% of the data sets in the world are very heavily skewed to the developed world, high-income countries. Less than 2% in Africa, sub-Saharan Africa. If we just carve out South Africa, less than zero-something percent for the rest of sub-Saharan Africa. So we see the big gap in this space. So this is a pretty important time to talk about how we can really democratize computing power access in this space.

So thank you for joining us, and then I look forward to really good discussion with all of our panels. Thank you.

Faith Waidaka

Thank you, Sangbu, for that opening. So I will start by asking the panelists to introduce themselves in a very short way, and I’ll start with myself. I’m Faith Waidaka. I build the infrastructure that makes AI possible. So I build the electrical, mechanical infrastructure in data centers in Africa, and I’m also the board chair of the Africa Data Center Association. So we’ll go this way. Yann, please tell us who you are.

Yann LeCun

So I’m Yann LeCun. I’m the executive chairman of AMI Labs, Advanced Machine Intelligence Labs, which is a new company I’m building, to build next-generation AI systems. I’m also still a professor at New York University. And just a month ago, I left my position as chief AI scientist of Meta after 12 years at Meta.

Sanjay Jain

I’m Sanjay Jain. I lead the digital public infrastructure team at the Gates Foundation.

Saurabh Garg

I’m Saurabh Garg. I’m secretary in the Ministry of Statistics and Program Implementation in the Government of India.

Chenai Chair

And I am Chenai Chair, the director of the Masakhane African Languages Hub, which emerged from a grassroots community called Masakhane, focusing on African language NLP.

Faith Waidaka

Good. So, Chenai, and coming back this way to all my panelists: what is the single biggest barrier? And I can imagine that we’re all coming from different segments, from the introductions we just did. But what do we feel is the single biggest barrier today to democratizing AI compute? Chenai?

Chenai Chair

Thanks, Faith. So there are over 2,000 documented languages on the African continent. So our single biggest barrier is the breadth of work we actually have to do to document these languages, to ensure they’re well represented, and also to focus on the communities that actually speak them.

Saurabh Garg

I would say access to models, open models, and AI literacy to be able to utilize those models. And the reason I say that is perhaps infrastructure is something which might get acquired over time, and hopefully the requirement for the size of that infrastructure may also change. So we probably need to focus much more on the models.

Sangbu Kim

I would say too much concentration of digitized data in the developed world only.

Sanjay Jain

I should also go on the data point, because we believe that AI will scale effectively only when data for everyone is available. So when I can get a personalized service because my personal data is accessible, through some protected means, to a model, then that will allow AI to reach everyone.

Yann LeCun

I’ll just echo some of the things that were said earlier. Certainly, the availability of top-performing open models, open-weight but also open-source, would be a way to remove the barrier, or at least a necessary condition if not a sufficient one. And the problem is that today there is no such thing; the open models are behind. But there is a way to get them to surpass the proprietary systems, and it’s through data. So the access to data was mentioned. If various regions of the world collect or digitize their cultural data, whatever it is, and then contribute to training a global model that would eventually constitute a repository of all human knowledge, then those models would be of much better quality than all the proprietary systems, because the proprietary systems would not have access to that data. And this can be done technically in a way in which regions don’t need to actually communicate that data. They can keep ownership of that data and then contribute to training a global model by exchanging parameter vectors. I don’t want to get into the weeds of technicalities there, but it’s a form of federated learning, and I think this is a way to open up access to AI. And it’s absolutely crucial for the future, because we’re going to need a wide diversity of AI assistants, for the reason that there’s a wide diversity of linguistic and cultural differences, value systems, political opinions and philosophies. And if our AI assistants come from a handful of companies on the west coast of the US or China, we’re in big trouble. So we absolutely need this.

Faith Waidaka

Okay, so we’ve had the challenges, and there are a wide range of them, from inclusion to compute to data sets. What we’re going to discuss today is how we overcome those barriers from the different perspectives and the different angles that we have on this team. So coming to you, Sangbu, from a World Bank perspective, what does it mean to democratize AI? And would you please give us one indicator that signals that a country is moving from consuming AI to actually building it?

Sangbu Kim

From the World Bank point of view, democratizing data and computing is very important. But let’s think about this. So many people very easily talk about building data centers physically and securing more GPUs and servers from the beginning. I agree that the fundamental infrastructure is very crucial and very important. But the more important thing is: how can we use that computing power, and for what? So we need to really think about what would be the best way to create demand for computing power. That is the more crucial part. So without having very clear applications and solutions, nobody can really run their own computing data center business in Africa. So it is a very crucial part. So I would like to say we need to think differently: even though computing power is very important, how can we really create the data demand?

So in this regard, the clear indicator is how we can really fully manage the data locally. So one good thing, one good piece of news, is that local data, local context, can be fully owned, controlled, and managed by the local country and local people. That is very good news. Even though we see a lot of inequality in computing infrastructure and resources, what cannot change, even in this AI era, is that the local country and local community can strongly hold their context and their data set. So it is a really important signal and opportunity. So I would say measuring how fully the local data set is utilized and harnessed will be the key indicator for this.

Faith Waidaka

Okay. Yann, you spoke about compute a few minutes ago, open compute. And I would really like to know: is the concentration of frontier compute a temporary scaling phase or a structural feature of AI? And where do you see the biggest technical opportunity to reduce compute intensity? It’s something that Sangbu as well touched on.

Yann LeCun

Okay, so first of all, I think the computing requirements for training modern AI systems are temporary. It’s temporary because the type of AI systems that we build at the moment, LLMs, essentially are knowledge storage systems, right? They accumulate factual knowledge, and therefore they need enormous amounts of memory. The reason why the models are so big in terms of number of parameters (we’re talking hundreds of billions of parameters, which makes them really expensive to train and to run) is the fact that they just accumulate knowledge so that it can be easily retrieved. But there’s another way to be useful in terms of AI: it’s not accumulating knowledge but actually being smart, and you can replace knowledge by intelligence. So current systems are not particularly intelligent, but they store knowledge. There is another revolution of AI coming, which actually my new company is built around, which intends to build systems that are smarter even if they don’t necessarily accumulate as much knowledge. So those models will be smaller. Now, the bad news with this is that perhaps at inference time they will be more expensive, because they’ll reason more than current systems. So we’re going to see maybe a shift in the requirements for training, but the requirements for inference, which is really where most of the computation goes, are still going to be quite significant. Now, to answer your second question: the incentives are there for the industry to reduce the power consumption of AI systems.

A lot of engineers working on AI in industry these days, even in academia, are actually focusing on how can I make this model smaller? How can I distill it in a smaller model? How can I use a mixture of experts so I have sort of a ladder of models that are more and more complex? So that to answer simple questions, I can use a simple model, et cetera. All of it is to optimize power consumption. Why? Because that’s where the money goes. That’s where you spend all the money when you operate an AI system. It goes into power and maintaining your hardware. So the incentives are there. So that’s the good news. You don’t need to have laws or regulations or anything.

They are working on it because they need to. The bad news is that it’s progressing as fast as it can, and it’s not fast enough. But we’re not going to be able to make it faster unless we find some technological breakthrough at the fabrication level or in the architecture or technology. There’s a lot of mileage to be had in those things still. The power efficiency is actually making progress really quickly, much faster than Moore’s law, but it’s still too slow. So I’m not expecting some big revolution in hardware design until we start building something other than CMOS transistors and silicon. That’s not happening for another 10 or 20 years. 10 or 20 years? Well, I mean, there’s going to be progress in the meantime.

It’s not what I mean. But if you want a real breakthrough, like some completely new way of building computing systems, there’s nothing on the visible horizon that really will allow this, whether it’s carbon nanotubes, spintronics, or whatever it is.

Faith Waidaka

Okay, that’s very interesting, to think that the training models will become smaller, yet inference might be what takes up the compute. Yet we’re also looking at bringing inference to devices, as close as possible to the people using it. So there’s a bit of a balance to be struck in that 10-year period. I think 10 years is a lot of time, considering what AI has shown us over the past decade, and in terms of research, we might see it sooner. So, Saurabh, you led Aadhaar, the digital ID, and now you’re in statistics. How do you see digital public infrastructure enabling AI innovation? And how can countries expand access to shared AI infrastructure without creating new dependencies or compromising data sovereignty?

Saurabh Garg

Thank you. So I think the two characteristics of digital public infrastructure which are key are to ensure not only access, but also agency for the people. Most people would not like to be just consumers, but also co-creators. And I think that’s the real issue going forward. For any system to be a DPI, I think there are a few essential characteristics. It needs to be trusted. It needs to be interoperable and shareable. And obviously reusable is part of it, because that’s what brings these characteristics together. And this is what will also ensure that innovators focus on solutions rather than trying to put the infrastructure together.

And in the democratizing AI working group, which was one of the seven working groups of this AI summit setup, which I had the privilege of chairing along with representatives from Kenya and Egypt, one of the outcomes was, of course, a charter on AI diffusion. But another of the outcomes, which we are suggesting building initially as a digital public good, and which modularly will become an infrastructure as we move ahead, is the METRI platform, which we’ve called Friendship. METRI stands for multi-stakeholder AI for a trusted and resilient infrastructure, and how we can, in a modular manner, add on the four components of AI, which I think my fellow panelists have also mentioned: compute, data, models, and talent.

These are the four aspects, and, of course, governance mechanisms would be there as well. So how can we ensure that different countries are able to contribute in whatever manner to build this, if I can call it a global platform, which is, in a way, owned by all and yet looks at the issues of real criticality? And I’m sure there’s a major role not only for countries, but for the private sector and philanthropies as well. So how can we build this structure together, to meet the requirements of countries, the private sector, and the philanthropies? Because each of them has different motivations, and the private sector would have a profit motive, and that has to be kept in view.

As far as the dependencies, that’s the second part of the question that you asked me. I think one of the areas is that we need to ensure that we follow a federated structure rather than a centralized structure. I think that would be key, and that would also ensure that the variety of languages and cultural contexts the data sets carry is preserved, and that ownership remains with whoever contributed the data. And yet technology and open systems exist now to ensure that sharing can be done in a safe and trusted manner. So how are we able to ensure that this collaboration and cooperation is done based on trust, and what kind of mechanisms can we develop?

And they could be partly technological and partly policy-based or protocol-based. And a combination of this will ensure that we don’t generate new dependencies. Thank you.

Faith Waidaka

Sanjay, when I said DPI, you nodded your head. So in terms of digital public infrastructure, we’ve seen it scale because it was interoperable. How can we ensure that the data and AI systems we build now are interoperable and open by design, so that even small startups or governments, like we’ve just spoken about, can plug in and benefit?

Sanjay Jain

I actually want to go off what Dr. Garg said. Broadly, DPI provides a way for the data of all individuals, their records, their ID, their transactions, to form a sort of system of record on top of which DPI sits. So DPI provides a management layer on that and provides consented access. And that’s something which we have seen around the world; particularly, for example, in India we see this a lot: now that you have access to all of this data, you can actually build lots of applications on top of it through consented access. And that’s really where a lot of the value comes in. And I think Yann mentioned training data sets. Again, the same model can be applied to allow either consented access or anonymized access, so that you can do federated learning, so that the data never goes to the model, but the model comes to the data.

And India has been looking at this Data Empowerment and Protection Architecture, which is on those lines. And I think we are now starting to see the structural building blocks come together, which would allow this underlying data layer to be built, but that requires strong DPI. And so we do think that there’s a lot of reason for countries around the world to adopt DPI systems, so that citizens’ data can be managed in a very trusted way and accessed with consent. And then we have things like MCP coming up, which allow users’ context to be taken, which then allows AI to be safe, as long as, of course, the rights on the data are quite clear that it’s not going to be stored.

So overall, I think we are moving towards this world where we are seeing the underlying pieces come together. They have to come together at a global scale; I think that’s the point that Dr. Garg was making. And so from that perspective, I think we are in a fairly good place. But then, to make sure this happens, we have to act in a unified manner. For example, we have to work together to fund efforts at the grassroots, like what you’re seeing with Masakhane, working with countries, with communities, so that their languages can be represented. That context becomes very important, because finally we are going to have to serve users in their languages.

So I do think, you know, I’m very positive that we’re moving in the right direction. I just think that there’s still some way to go, and there are other barriers as well. But on this aspect, I think DPI provides a way for us to get past the data hurdle, as long as, of course, DPI is implemented in a responsible manner in the countries and in the right way. Thank you.

Faith Waidaka

Chenai, you’ve cautioned against technology becoming extractive. How should we build data infrastructure that is trusted by communities? And would you please give us an example of what principles would make an AI project in a village or a rural community in Africa feel empowering rather than extractive?

Chenai Chair

Thank you so much, Faith, for that question. And I think I have the pleasure of sitting here as a representation of what it means when a community is involved in building something. Masakhane, loosely translated from isiZulu, basically means “we build together.” And that was then the creation of a participatory approach to knowledge building, as a result of being excluded from these spaces. So if we’re going to build data infrastructure that communities trust, it has to respond to the realities that they live in and be participatory. So that’s the first example. And just to prove how important it is for something to be participatory: in 2019-2020 there were not as many data sets around African languages. I think one source of data was the Jehovah’s Witnesses’ Bible, which had been translated into some 300 languages.

And they had translated the languages for their own purpose. And so the community, the Masakhane community, came together and brought in everyone: linguists, NLP people, machine learning people, anyone who spoke the language, to actually develop the scripts and do the machine translation work on top of that. And this community, which was unfunded, doing everything by its bootstraps, actually won a Wikimedia Award in 2021 for its participatory action work. And I think that is crucial to actually show that if you’re going to build trust, people have to see what the end value is and also be recognized. So this paper actually has, I think, about 20 people on it, a lot of people on it, some of whom could otherwise never have been authors, but they contributed to it and they’ve got a paper published, and that’s significant.

And then secondly, it’s really about meeting communities where they are, regardless of their location. It’s realizing the inequity that we live with. So one of the projects that we will be doing at Masakhane is called Project Echo. It’s designed to be a gender-responsive project, because gender-transformative is also the North Star that we’re hoping to get to one day. And in that instance, it understands the realities of gendered inequality on the African continent, regardless of any technological innovation. And what we’re doing in partnership with the Gates Foundation, and also working with IDRC, who are working on this as a gendered intervention as well, is to work with tech entrepreneurs developing gender-responsive use cases that focus on women’s economic empowerment as well as health, to then think about how we’re creating an impactful tool that, when you add African languages on top, will result in better economic outputs for them or better information when it comes to health.

So again, it is thinking about designing with the communities and meeting the needs of the communities where they are. And then lastly, and this is to say, as we love to say on our team, that what we’re doing is not new. The technology may be new, but there are practices that we can borrow from other spaces to ensure this is done. So I would like to reference the community network models. Last-mile connectivity is a significant issue across the continent. We’ve had universal service access funds as an incentive for mobile network operators to address this, but sometimes some communities are not served well enough. And so there have been interventions that result in localized internet connectivity, developed by the communities themselves.

They’re in charge of building the masts for their community networks. They’re in charge of creating the content that people are going to need, and figuring out what the necessary power is. Do you then, you know, create a transformative booster in one person’s home, and then people go and charge their phones there? Because it’s the whole life cycle of this. So if we’re going to build infrastructure that people trust, we have to borrow from what’s already been done, and then ensure that people are part of the whole life cycle, so that they feel ownership, and it also allows for sustainability, because they say: that’s my resource, and I’m not going to wait for anyone else to support it; I’m going to be in charge of making sure that it continues to exist.

Faith Waidaka

Interesting. I like that: community ownership. And I don’t think we can do that if we don’t build small AI. So Sangbu, you’ve written a lot on small AI. What would be your playbook for scaling small AI responsibly?

Sangbu Kim

Users cannot fully utilize some technology without getting trained and learning it. So, 20, 30 years ago, we talked a lot about digital literacy and basic digital skills, how to use Windows and Explorer, et cetera. That meant it was not very user-centric, because the user had to do a lot of things. But now, AI is going towards very user-centric services. So users don’t need to do that much. They can simply ask, verbally, about what they are curious about, what they need, and then it can be automatically provided to them. That is the philosophical concept of AI in my mind. So, in that sense, our focus is how to bring a more user-centric mindset to this field, along with our clients, because, you know, compared to the developed world, we have a pretty big context base and local data and so many user interests. So that’s our approach: that’s how we fully harness and utilize this area.

Faith Waidaka

Thank you for that. Now that we’re speaking about communities and users: Sanjay, you’ve spoken about moving from the digital age to digital empowerment in the context of AI. What would digital empowerment look like, and what should development partners like the Gates Foundation and the World Bank, sitting in this forum, prioritize so that countries are not just consumers of AI but co-creators?

Sanjay Jain

So the thread I’m going to pick back up is the DPI thread. And broadly, what we have done in that space is to look at how, instead of building systems for countries, we have open-source systems which countries can then adopt to build systems adapted to their needs. So when we look at Aadhaar in India, that’s one thing, but for the rest of the world we’re looking at MOSIP. And MOSIP is a modular open-source ID platform that we have supported, which countries are taking and building on with their own policy layers, building their own application versions of it. So in Ethiopia you have FIDA, which is based on MOSIP, and it’s actually very much customized to what they need. So the idea is that you build these pieces of technology which countries can then adopt and build in a way that suits their needs, is governed by them, and works under local laws, so all of that institutional infrastructure.

So the idea is you build these pieces of technology which then countries can adopt and build in a way that suits their needs, is governed by them, is local laws work on that, so all of that institutional infrastructure. legal infrastructure is then sits on top of the technology layer to do that. Similarly we have supported other open source efforts like OpenG2P for government payments, we have supported Digit for Healthcare campaigns and so the whole idea is you build open source, let countries and communities take that and adopt it. Similarly with Masakhane again the same idea is that if you have a way by which local communities can come together and collect data but then make that available for global needs.

So we have funded those kinds of efforts in India and in Africa as well, so that these efforts are now there, where local communities are empowered to make sure that AI systems can understand and speak their language, and that is again a form of empowerment. So broadly, that’s sort of the way we think about it: how do we build open standards, open-source products that countries and communities can use, contribute back to, and essentially co-create their versions of their systems, that then work in a unified way across the world. And so that is really empowering them to be a part of the community, and that is what we would love to see happen more.

Faith Waidaka

Thank you for that. Now, Yann, I can’t help but come back to these world models. In my mind, I was thinking they would increase the compute power necessary, so the infrastructure would be bigger. But from your explanation, it looks like being more intelligent means less compute, and we now move the power not to the grid side for the training models, but to the inference side, on the devices. So what does that actually mean for the government people, the AI ecosystem, the startups that are in this room? What should be their focus over the next one, five, ten years, if these changes are to happen? And I do believe they will happen.

Yann LeCun

Wonderful question. Thank you. So there’s going to be another AI revolution, right? We’ve seen in recent years the deep learning revolution and the LLM revolution. And unfortunately, the type of AI systems we have access to at the moment manipulate language very well, and that fools a lot of people into thinking that we have it made, that we have systems that are as intelligent as humans, because we think of language abilities as properly human. But it’s a mistake that generation after generation of computer scientists and the people around them have made in AI for the last 70 years: discovering a new paradigm and assuming that this paradigm will lead us to systems that have human-level intelligence.

And it’s just false, and it’s false today as well. Our current technology is limited. It’s useful, there’s no question; it should be deployed and developed, and it’s going to help the people who use it all the time. But it’s limited, like previous generations of computer technologies and AI systems. So what is the next revolution? It’s the revolution of AI systems that understand the real world. And I think there are a lot of applications of that throughout the world, across all kinds of domains and market segments if we’re talking about commercial systems, or just helping people in their daily lives. Now, it turns out, and we’ve known this for a long time, that understanding the real world is much, much more complicated than understanding and manipulating language.

It’s because language is a sequence of discrete symbols, and it turns out that makes it easy for computers to handle. But the real world is messy; it’s high-dimensional, it’s continuous, it’s noisy, and it’s just much more complicated. So I’ve been making this joke for many years to try to explain this to everyone: your house cat is smarter than the biggest LLMs. And in many ways that’s true; certainly in its understanding of the physical world, your cat is way smarter than the biggest LLMs. It doesn’t mean LLMs cannot accumulate knowledge about the real world, but they don’t really understand its underlying nature. So the next revolution is systems that really understand how the world works, and that learn how the world works a little bit like children who open their eyes.

And let me give you an interesting number. LLMs today are pre-trained on basically all the text publicly available on the internet, which is mostly English or languages spoken in developed countries, which of course, as this panel has pointed out, is an issue. It represents roughly 10 to the 14 bytes: a one with 14 zeros. That seems like a lot of data, and it is, because it would take any of us about half a million years to read through it. But then compare this with the amount of data that gets to the visual cortex of a young child. In four years, a young child has been awake a total of about 16,000 hours. And if we put a number on how much data gets to the visual cortex, it’s about 2 megabytes per second.

Do the arithmetic: that’s about 10 to the 14 bytes in four years, instead of half a million years. And so it tells you we’re never going to get to human-level intelligence or anything like that by just training on text, which is human-produced. We’re going to have to have systems that understand the real world and are trained to understand it through sensory input; it can be video, it can be all kinds of things. And by the way, 16,000 hours of video is not a lot of video; it’s about 30 minutes of YouTube uploads. A day of YouTube uploads is about a million hours, which is about 100 years of video. And we have video systems that have been trained with that kind of data; they understand a lot more about the real world than any LLM. They can tell you if something impossible happens in a video they watch, so they’ve acquired a little bit of common sense. So my guess is that this is going to make a lot of progress in the future. And from those kinds of techniques, we can build world models. What is a world model? Given a representation of the state of the world at time t, and an action or intervention that you imagine taking, a world model predicts the state of the world at time t+1 resulting from this action or intervention.
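LeCun’s comparison is simple enough to check directly. A minimal sketch, using only the figures he states in the talk (a ~10^14-byte text corpus, ~16,000 waking hours by age four, ~2 MB/s to the visual cortex); the reading-speed assumptions in the second half are mine, added to recover his "half a million years" order of magnitude:

```python
# Back-of-the-envelope check of LeCun's numbers. All inputs are his stated
# estimates: a ~1e14-byte text corpus, ~16,000 waking hours by age four,
# and ~2 MB/s reaching the visual cortex.

text_corpus_bytes = 1e14      # roughly all publicly available internet text
hours_awake = 16_000          # total waking hours of a four-year-old
visual_rate = 2e6             # bytes per second to the visual cortex

visual_bytes = hours_awake * 3600 * visual_rate
print(f"visual input by age four: {visual_bytes:.2e} bytes")  # ~1.15e+14

# Time to read the corpus, assuming ~6 bytes per word, a brisk 250 words
# per minute, and 8 hours of reading a day: a few hundred thousand years,
# i.e. LeCun's rough "half a million".
words = text_corpus_bytes / 6
reading_years = words / 250 / 60 / 8 / 365
print(f"time to read the corpus: {reading_years:,.0f} years")
```

The two totals come out within a factor of two of each other, which is the whole point: a child’s eyes take in as much raw data in four years as all of the internet’s text combined.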

And this is how you can build intelligent systems: they would be able to predict the consequences of their actions before taking them, and they would be able to plan and reason, because reasoning is like planning. Everybody in the industry is talking about agentic systems, but the way agentic systems are built today is not this way. Agentic systems today are not able to predict the consequences of their actions, and that is a terrible way of planning actions. So I think, again, we’re going to see a revolution over the next few years based on world models, based on systems that can learn from real-world, messy data. And I’m not very popular in Silicon Valley when I say this, but those are not generative models.

They’re kind of a different type. And so, yeah, my colleagues who work on LLMs and generative AI don’t like me very much. But me, I really like this.

Faith Waidaka

So I’m going to ask you a numbers question. What would it take? What kind of money would it take to make this happen faster?

Yann LeCun

Okay, so there are a number of different things that need to happen. The first is that there’s a lot of research to be done, academic research, right? And in fact, what’s interesting as a phenomenon is that this idea of world models and this non-generative architecture, which I call JEPA, though there are various incarnations of it, is mostly worked on by academic groups interested in applying AI to science, and mostly ignored by industry. Industry, particularly Silicon Valley, where the dominant players are, is entirely focused on LLMs, and everybody is working on the same thing. Everybody is stealing each other’s engineers and working on the same thing, because nobody can afford to do something slightly different and then run the risk of falling behind.

And so that creates a kind of monoculture that makes the industry a little blind. So right now it’s in the hands of academia. Basically, propping up this kind of research in academia, and preventing LLMs from sucking the oxygen out of every room you get into, is the first step. The second step is that there is, of course, a role for governments and industry to play in pushing those models once they work. And that’s what I’m working on. That’s why I left Meta and created this company: because I think the time is right to try to make this real. And then, obviously, there are going to be a lot of applications of this everywhere in the world.

There was an experiment run a couple of years ago by some of my colleagues at Meta, where they gave smart glasses to farmers in rural India. And you could talk to the assistant in Indic languages, asking it: what’s this disease on my crop? Should I harvest now or wait a little bit? What’s the weather tomorrow? There are a lot of things like this that could be useful if the price could be brought down, with systems that really understand the world better than current ones do. And in the future, all of us will be walking around with an AI assistant that will essentially amplify our own intelligence.

It’s like all of us will be the leader, the manager, of a staff of virtual people who are smarter than we are. Which is a great thing, by the way. I’m very familiar with the concept of working with people who are smarter than you; it’s the greatest thing that can happen to you, so we shouldn’t feel threatened by that. It’s going to allow people to get more knowledgeable, more educated, and to make more rational choices. But we need systems that basically approach or surpass human intelligence in certain domains and understand the real world.

Faith Waidaka

Thank you, Yann. So we know where Yann is putting his money. Coming back to all my panelists, and not just your money: if I had 500 million dollars to give, and I’m not asking you for a P&L, I’m not asking you to give me a profit, I’m just asking you to help me democratize AI and make it accessible for everyone, where would you each put your money? Let’s start with Sanjay.

Sanjay Jain

Incidentally, 500 million is the amount of capital we’re looking at raising to get DPI everywhere in the world, because we think that getting those underlying systems of record, and getting people access to their data in digital form, can empower them so much that they can then participate in the AI revolution in the right way, with the right controls and structures in place. So you’ve kind of just made my case. We would want to think about how we can take that money, deploy it, and bring everyone up to the same level in terms of digital infrastructure: getting the data, getting their ledgers, getting the health records, all of those digitized, so that they can then take the benefit of AI for their needs. So that’s actually what we would want to do.

Sangbu Kim

Okay. Again, I would say I’ll spend that big money to develop more use cases. We are identifying agriculture, education, healthcare, and some more; government services can be a really promising use-case field. So developing more practical and profitable use cases, ones which add so much value, will be the really critical thing. On top of that, beyond developing the use cases, the more important thing is to change user mindsets and inspire users. Because one typical problem we are facing is that our low-income users, clients, and people do not really know what they don’t know. Even though they can do something with this type of technology, they don’t clearly understand what they can do. So inspiring them that they can really do this, with higher productivity and at low cost, would be a very important thing to remind them of. Thank you.

Saurabh Garg

Given the volume of funds available, I would focus a lot more on capability development of people, on their ability to use AI to improve productivity. And maybe if I can add to it, just to again stress the need for small, domain-specific, niche models. Small may not be the right word to use, but domain-specific and niche models will ensure that they use a lot less power and a lot less infrastructure, and don’t have the problems of large language models.

Chenai Chair

So I’m assuming each one of us is getting 500 million, yes? So I co-sign on everything. In addition, I would say that for us, given the point I mentioned about the breadth of work that needs to be done, what is critical is actually having open models and also investing in talent. Open models allow people to innovate on top of them. An example of this is Crane AI, which developed an offline-first AI stack focusing on health, education, and agricultural services, and which emerged from the Masakhane community. So what happens when we can actually fund a lot of people to think about this and build on top of open models? And then lastly, talent. Talent is very important across the whole value chain: talent that looks at the building of the models; talent for the uptake and the business cases that motivate people and allow for sustainability; but also the talent to build the capacity of end users to understand, so that we create an ecosystem where people are excited about these new technological innovations instead of afraid.

And that’s sort of been the biggest narrative: you’re either very excited or you’re very afraid. And coming from a South African context, everyone is afraid of losing their job to AI. So how do we ensure that we’re creating an ecosystem that’s favorable for innovation?

Faith Waidaka

So as we come to the end of our panel, with everything that’s been said, even with all the money on the table, free money, we see that it’s not one-size-fits-all. We simply can’t focus on one area and leave the rest. We need the talent, we need the compute, we need the data centers, we need the regulatory framework, we need the reforms; we need everything to come together to make this possible. And with that, I’m done with my questions. I have five minutes, and hands are up before I even finish my question. So would someone help me with a mic? What I’ll do is take three questions, hopefully from three different people among you.

And then since I see no one, I’m quite good. Thank you. Let’s start here.

Arun Sharma

Thanks, Faith. Thank you all for such a brilliant session. My name is Arun Sharma; I work with the World Bank. My question is to anyone, Yann specifically: what is the lag that we have between the physical and the virtual world? It’s dominated a lot by the machinery. I mean, you gave the example of a farmer wearing glasses, but the seeds or the fertilizer, anything that he orders, still run on archaic systems. So obviously there is a lag between the hardware and the software; the software is evolving much faster. Where do you see that happening and going? And I ask this specifically because in the Indian system, where we have not been able to deploy our resources is in the education space and the healthcare space, where we still lag. So thanks.

Faith Waidaka

Let me take the three questions. I would prefer that you throw the next question to someone else. I’ll take a question from the back there.

Audience

Thanks a lot. Daniel Dobos, particle physicist from CERN originally, and then a research director for Swisscom. You mentioned federated learning. Technologically this is easy; it’s the architecture of collaboration that might be difficult. So do you have some ideas about which kind of organization could coordinate this kind of collaboration? Thank you.

Faith Waidaka

Okay, and one last question, let me get from him. The guy with the red flag.

Audience

Hi, thank you. Thank you, sir. My question is to you. You have said that we have about 10 to the power of 14 bytes of text data, and that a child takes in the same amount of data by four to five years of age. So do you think that data is the only bottleneck, as opposed to compute and architecture, to getting to AGI, or even to artificial superintelligence? And the next question: when we achieve AGI, what will the benchmark be? How do we benchmark AGI so that we know it is definitely smarter than humans? How will humans evaluate that? So yeah, that’s it.

Yann LeCun

Quick answers; I’ll go in reverse order. So, there’s no such thing as AGI. There is human-level AI, perhaps, but human intelligence is extremely specialized, so calling this general intelligence is complete nonsense. But we will build systems that are as intelligent as humans in all the domains where humans are intelligent. It’s just not going to be next year, unlike what some colleagues in the industry are claiming; this is going to take a lot longer. It’s not going to be an event; it’s not like we’re going to discover one secret that will just unlock intelligence. It’s going to be progress, and it’s going to be much more difficult than we think. It has always been more difficult than we thought in the past, and that is still the case. So: no event for AGI, and no AGI; human-level AI, not yet; superintelligent AI, which we should call ASI, artificial superintelligence, well, it depends. So that’s the first thing, and you had a second part to your question that I can’t remember, so I’m going to answer the other one.

There are a number of organizations that could coordinate this. First of all, this federated-learning idea for an open-source model should be bottom-up. It should be people actually putting up a GitHub and then collaborating on building the infrastructure for it. Of course, we can get help from governments and organizations, and that’s required too, but ultimately people need to build and write code. There are a number of groups that have already built their own LLMs of pretty good quality: there’s a group in Switzerland centered at EPFL and ETH, so you probably know it; there’s a group in the UAE centered on MBZUAI; and there are similar models in Korea and various other countries. They should all get together, basically join forces, and then bring in other countries as well. I think SEM can play a role, I think UNESCO can play a role, and I think Switzerland should play a role; they have all those organizations in Geneva, and the next summit is going to be there, so maybe that’s the right place. So have it bottom-up and top-down. One big organization that can play a role is the AI Alliance, which is a group that promotes open-source AI.
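The "model comes to the data" pattern discussed here, in which participants exchange only parameter vectors while raw data never leaves its owner, is federated averaging in its simplest form. A minimal sketch with made-up numbers and toy local training (real systems such as FedAvg use gradient steps, weight clients by dataset size, and add secure aggregation):

```python
# Minimal federated-averaging sketch: three regions each hold private data,
# compute a local parameter update, and share ONLY their parameters.
# The coordinator averages the parameter vectors; raw data never moves.

def local_update(params, private_data, lr=0.1):
    """One step of toy local training: nudge each parameter toward the
    local data mean (a stand-in for gradient descent on private data)."""
    target = sum(private_data) / len(private_data)
    return [p + lr * (target - p) for p in params]

def federated_average(param_sets):
    """Coordinator step: element-wise mean of the clients' parameter vectors."""
    n = len(param_sets)
    return [sum(ps[i] for ps in param_sets) / n
            for i in range(len(param_sets[0]))]

global_params = [0.0, 0.0]
regions = [[1.0, 3.0], [2.0, 4.0], [9.0, 9.0]]   # private datasets, never shared

for _ in range(5):                                # a few federated rounds
    updates = [local_update(global_params, data) for data in regions]
    global_params = federated_average(updates)

print(global_params)  # entries drift toward the mean of the region means (~4.67)
```

The key property is visible in the loop: the coordinator only ever sees `updates`, the parameter vectors, which is exactly the data-sovereignty argument made on this panel.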

Faith Waidaka

Yann, let me cut you short; we’ve run out of time, and we would like to thank you all for coming. Yes, thank you so much to all the speakers. We just have a small memento from the government side to make this a memorable event. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Chenai Chair
6 arguments · 169 words per minute · 1023 words · 361 seconds
Argument 1
Language diversity creates enormous scope of work with over 2,000 documented African languages
EXPLANATION
Chenai Chair identifies the sheer breadth of linguistic diversity on the African continent as the primary challenge for AI democratization. With over 2,000 documented languages, the work required to document, represent, and serve these languages in AI systems is massive and requires focus on the communities that speak them.
EVIDENCE
Over 2,000 documented languages on the African continent
MAJOR DISCUSSION POINT
Barriers to Democratizing AI Compute
DISAGREED WITH
Saurabh Garg, Sangbu Kim, Sanjay Jain
Argument 2
Community participation and meeting people where they are builds trust in data infrastructure
EXPLANATION
Chair argues that building trusted data infrastructure requires responding to community realities and being participatory. This approach ensures that people see the end value and are recognized for their contributions, leading to community ownership and sustainability.
EVIDENCE
Masakhane community won a Wikimedia Award in 2021 for participatory action work; Project Echo designed as gender-responsive project focusing on women’s economic empowerment and health; community network models for last mile connectivity
MAJOR DISCUSSION POINT
Data Infrastructure and Sovereignty
AGREED WITH
Sangbu Kim, Sanjay Jain
Argument 3
Participatory approaches like Masakhane demonstrate community ownership and sustainability
EXPLANATION
Chair presents Masakhane as an example of successful community-driven AI development, where the name means ‘we build together’ in isiZulu. This grassroots approach emerged from exclusion and created participatory knowledge building that achieved recognition and success.
EVIDENCE
Masakhane community was unfunded, doing everything by bootstraps, won Wikimedia Award in 2021; paper had about 20 people as authors who contributed and got published recognition
MAJOR DISCUSSION POINT
Community-Centered AI Development
Argument 4
Gender-responsive design meeting communities’ actual needs creates impactful tools
EXPLANATION
Chair emphasizes that AI projects must understand and address existing inequalities, particularly gender inequality on the African continent. By designing with communities and meeting their specific needs, AI tools can create better economic and health outcomes.
EVIDENCE
Project Echo partnership with Gates Foundation and IDRC, working with tech entrepreneurs on gender-responsive use cases for women’s economic empowerment and health
MAJOR DISCUSSION POINT
Community-Centered AI Development
Argument 5
Community network models provide proven frameworks for local ownership and control
EXPLANATION
Chair references existing community network models for internet connectivity as a template for AI infrastructure development. These models show how communities can take charge of building, maintaining, and creating content for their own technological infrastructure.
EVIDENCE
Community networks where communities build masts, create content, manage power solutions, and take ownership of the whole lifecycle
MAJOR DISCUSSION POINT
Community-Centered AI Development
Argument 6
Open models, talent development, and capacity building across the entire value chain are essential
EXPLANATION
Chair advocates for investment in open models that allow innovation, combined with comprehensive talent development across the AI value chain. This includes technical talent for building models, business talent for sustainability, and end-user capacity building to create excitement rather than fear about AI.
EVIDENCE
Crane AI developed offline-first AI stack for health, education, and agriculture, emerging from Masakhane community; South African context where everyone is afraid to lose jobs to AI
MAJOR DISCUSSION POINT
Investment Priorities for Democratization
AGREED WITH
Saurabh Garg, Sangbu Kim
Saurabh Garg
3 arguments · 130 words per minute · 700 words · 321 seconds
Argument 1
Access to open models and AI literacy are primary barriers
EXPLANATION
Garg identifies that the main obstacles to AI democratization are lack of access to open models and insufficient AI literacy to utilize those models effectively. He suggests that infrastructure challenges may resolve over time, but the focus should be on models and the ability to use them.
EVIDENCE
Infrastructure might get acquired over time and requirement of infrastructure size may change
MAJOR DISCUSSION POINT
Barriers to Democratizing AI Compute
AGREED WITH
Chenai Chair, Sangbu Kim
DISAGREED WITH
Chenai Chair, Sangbu Kim, Sanjay Jain
Argument 2
Digital public infrastructure must ensure access and agency for people to be co-creators, not just consumers
EXPLANATION
Garg emphasizes that effective DPI should provide both access to technology and agency for people to participate as co-creators rather than passive consumers. This requires systems that are trusted, interoperable, shareable, and reusable, allowing innovators to focus on solutions rather than building infrastructure.
EVIDENCE
MAITRI platform (“friendship”) – Multi-stakeholder AI for Trusted and Resilient Infrastructure, with modular components of compute, data, models, and talent
MAJOR DISCUSSION POINT
Data Infrastructure and Sovereignty
Argument 3
Domain-specific models using less power and infrastructure are preferable to large language models
EXPLANATION
Garg advocates for developing smaller, domain-specific, and niche models rather than large language models. These specialized models would require significantly less power and infrastructure while avoiding the problems associated with LLMs.
MAJOR DISCUSSION POINT
Investment Priorities for Democratization
Sangbu Kim
5 arguments · 113 words per minute · 793 words · 419 seconds
Argument 1
Concentration of digitized data heavily skewed toward developed world
EXPLANATION
Kim highlights the severe data inequality where more than 80% of global datasets come from developed, high-income countries, while sub-Saharan Africa represents less than 2% of data, and excluding South Africa, the percentage drops to nearly zero. This creates a significant gap in AI development and representation.
EVIDENCE
More than 80% of the world’s datasets come from the developed world, high-income countries; less than 2% from sub-Saharan Africa; and nearly zero percent from sub-Saharan Africa excluding South Africa
MAJOR DISCUSSION POINT
Barriers to Democratizing AI Compute
AGREED WITH
Sanjay Jain, Yann LeCun
DISAGREED WITH
Chenai Chair, Saurabh Garg, Sanjay Jain
Argument 2
Local data ownership and context control remains with communities despite infrastructure inequality
EXPLANATION
Kim argues that while there are significant inequalities in computing infrastructure and resources, local communities and countries can still maintain strong ownership and control over their local data and context. This represents both an opportunity and a key indicator of progress toward AI democratization.
EVIDENCE
Local data, local context can be fully owned, controlled, and managed by local country and local people
MAJOR DISCUSSION POINT
Data Infrastructure and Sovereignty
AGREED WITH
Chenai Chair, Sanjay Jain
Argument 3
Creating demand for computing power through clear applications is more crucial than just building infrastructure
EXPLANATION
Kim emphasizes that while physical infrastructure like data centers and GPUs is important, the more critical aspect is developing clear applications and solutions that create demand for computing power. Without practical use cases, data center businesses cannot succeed in Africa.
EVIDENCE
Nobody can really run their own computing data center business in Africa without having very clear application and solutions
MAJOR DISCUSSION POINT
Use Cases and Practical Applications
DISAGREED WITH
Faith Waidaka
Argument 4
User-centric AI services reduce training requirements and allow verbal interaction without technical skills
EXPLANATION
Kim contrasts current AI development with past digital literacy requirements, arguing that AI should move toward user-centric services where users don’t need extensive training. Instead of learning complex technical skills, users can simply ask verbally for what they need, making AI more accessible.
EVIDENCE
20-30 years ago people needed digital literacy and basic digital skills to use Windows and Explorer, but AI allows verbal control and automatic provision of services
MAJOR DISCUSSION POINT
Use Cases and Practical Applications
AGREED WITH
Saurabh Garg, Chenai Chair
Argument 5
Use case development and user mindset change are critical for inspiring low-income users
EXPLANATION
Kim identifies agriculture, education, healthcare, and government services as promising use case fields, but emphasizes that changing user mindset and inspiring users is equally important. The challenge is that low-income users often don’t know what possibilities exist with AI technology.
EVIDENCE
Agriculture, education, healthcare, and government service as promising use case fields; low-income users and clients don’t really know what they don’t know
MAJOR DISCUSSION POINT
Investment Priorities for Democratization
Sanjay Jain
3 arguments · 182 words per minute · 1081 words · 355 seconds
Argument 1
Personal data accessibility through protected means is essential for AI to reach everyone
EXPLANATION
Jain argues that AI will only scale effectively when personal data is accessible through protected means to AI models, enabling personalized services. This approach allows AI to reach everyone by providing services tailored to individual needs while maintaining data protection.
EVIDENCE
When personal data is accessible through protected means to a model, it allows personalized service delivery
MAJOR DISCUSSION POINT
Barriers to Democratizing AI Compute
AGREED WITH
Sangbu Kim, Yann LeCun
DISAGREED WITH
Chenai Chair, Saurabh Garg, Sangbu Kim
Argument 2
DPI provides consented access to individual records and transactions through federated learning approaches
EXPLANATION
Jain explains that Digital Public Infrastructure creates a management layer over individual data records (ID, transactions) that provides consented access. This enables federated learning where models come to the data rather than data going to models, protecting privacy while enabling AI development.
EVIDENCE
India’s data empowerment and protection architecture; federated learning where data never goes to model but model comes to data
MAJOR DISCUSSION POINT
Data Infrastructure and Sovereignty
AGREED WITH
Chenai Chair, Sangbu Kim
Argument 3
Digital public infrastructure deployment requires $500 million to bring everyone to same digital level
EXPLANATION
Jain states that $500 million is the estimated amount needed to deploy DPI globally, which would get underlying systems of record and provide people access to their data in digital form. This would empower people to participate in the AI revolution with proper controls and structures.
EVIDENCE
$500 million is the amount for raising capital to get DPI everywhere in the world; getting people’s health records, ledgers, and other data digitized
MAJOR DISCUSSION POINT
Investment Priorities for Democratization
Yann LeCun
11 arguments · 153 words per minute · 2772 words · 1083 seconds
Argument 1
Availability of top-performing open models is necessary but insufficient condition
EXPLANATION
LeCun argues that while open-weight and open-source models are necessary to remove barriers to AI access, they are not sufficient on their own. The current problem is that open models lag behind proprietary systems, but this can be addressed through better access to diverse data sources.
EVIDENCE
Today there is no such thing as top-performing open models; open models are behind proprietary systems
MAJOR DISCUSSION POINT
Barriers to Democratizing AI Compute
AGREED WITH
Saurabh Garg, Chenai Chair
Argument 2
Federated learning allows regions to contribute to global models while maintaining data ownership
EXPLANATION
LeCun proposes that different regions can collect and digitize their cultural data and contribute to training a global model without actually sharing the data. Through federated learning and parameter vector exchange, regions maintain data ownership while contributing to better global AI models.
EVIDENCE
Regions can keep ownership of data and contribute to training global model by exchanging parameter vectors through federated learning
MAJOR DISCUSSION POINT
Data Infrastructure and Sovereignty
AGREED WITH
Sangbu Kim, Sanjay Jain
Argument 3
Current LLMs are knowledge storage systems requiring enormous memory, but smarter systems could replace knowledge with intelligence
EXPLANATION
LeCun explains that current AI systems like LLMs are expensive to train and run because they accumulate and store factual knowledge requiring hundreds of billions of parameters. He argues that future AI systems will be smarter and rely more on intelligence rather than stored knowledge, making them smaller and less expensive to train.
EVIDENCE
LLMs have hundreds of billions of parameters because they accumulate knowledge for easy retrieval; you can replace knowledge with intelligence
MAJOR DISCUSSION POINT
Technical Evolution and Compute Requirements
DISAGREED WITH
Faith Waidaka
Argument 4
Industry incentives naturally drive power consumption optimization because operational costs focus on power and hardware
EXPLANATION
LeCun points out that the AI industry has strong financial incentives to reduce power consumption since that’s where most operational costs go. Engineers are actively working on making models smaller, using distillation, and implementing mixture of experts approaches to optimize power usage.
EVIDENCE
Engineers focus on making models smaller, distilling them, using mixture of experts; money goes into power and maintaining hardware when operating AI systems
MAJOR DISCUSSION POINT
Technical Evolution and Compute Requirements
Argument 5
Real breakthrough in hardware design won’t happen for 10-20 years beyond CMOS transistors
EXPLANATION
LeCun acknowledges that while power efficiency is improving faster than Moore’s Law, it’s still not fast enough. He doesn’t expect major breakthroughs in computing hardware until we move beyond silicon CMOS transistors, which he estimates won’t happen for another 10-20 years.
EVIDENCE
Power efficiency making progress faster than Moore’s Law but still too slow; no visible horizon for carbon nanotubes, spintronics, or other alternatives
MAJOR DISCUSSION POINT
Technical Evolution and Compute Requirements
DISAGREED WITH
Faith Waidaka
Argument 6
Smart glasses for farmers demonstrate practical applications in rural contexts using local languages
EXPLANATION
LeCun cites an experiment where farmers in rural India used smart glasses to interact with AI assistants in local languages, asking about crop diseases, harvest timing, and weather. This demonstrates the potential for AI applications that could be useful if costs were reduced and systems better understood the real world.
EVIDENCE
Experiment by MIT colleagues giving smart glasses to farmers in rural India, allowing interaction in Indic languages about crop diseases, harvest timing, and weather
MAJOR DISCUSSION POINT
Use Cases and Practical Applications
Argument 7
Next AI revolution will focus on understanding the real world through sensory input rather than text
EXPLANATION
LeCun argues that current AI systems are limited because they only manipulate language well, but understanding the real world is much more complex. The next revolution will involve AI systems that learn from sensory input like video, similar to how children learn by observing the world.
EVIDENCE
A house cat is smarter than the biggest LLMs at understanding the physical world; a child receives 10^14 bytes of visual data in 4 years, versus roughly half a million years of reading to take in the equivalent text; 16,000 hours of video amounts to only about 30 minutes of global YouTube uploads
MAJOR DISCUSSION POINT
Future AI Architecture and World Models
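The figures in this comparison can be checked on the back of an envelope. The rates below (roughly 2 MB/s through the optic nerves, reading at ~250 words per minute and ~5 bytes per word, ~500 hours uploaded to YouTube per minute) are common working assumptions, not numbers quoted in the session:

```python
# Back-of-envelope check of the sensory-data comparison (assumed rates).
SECONDS_PER_HOUR = 3600
waking_hours = 16_000        # ~4 years of waking life for a young child
optic_nerve_rate = 2e6       # bytes/s across both optic nerves (assumed)

visual_bytes = waking_hours * SECONDS_PER_HOUR * optic_nerve_rate
print(f"visual input: {visual_bytes:.1e} bytes")    # ≈ 1.2e14, on the order of 10^14

# How long would a human need to read the same volume as text?
reading_rate = 250 * 5 / 60  # words/min * bytes/word -> bytes/s (assumed)
reading_years = visual_bytes / reading_rate / (SECONDS_PER_HOUR * 24 * 365)
print(f"reading time: {reading_years:.0f} years")   # ~175,000 years of nonstop reading

# 16,000 hours of video vs. YouTube's upload volume (~500 hours/minute, assumed)
print(f"upload minutes: {waking_hours / 500:.0f}")  # 32, about half an hour
```

Under these assumptions the visual total lands on the 10^14-byte order cited, and the reading-time equivalent reaches the hundreds of thousands of years, consistent with the "half a million years" framing.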
Argument 8
World models that predict consequences of actions will enable true planning and reasoning capabilities
EXPLANATION
LeCun describes world models as systems that can predict the state of the world at time t+1 given the current state and a proposed action. This capability would allow AI systems to predict consequences before taking actions, enabling proper planning and reasoning, unlike current agentic systems.
EVIDENCE
Current agentic systems cannot predict consequences of their actions, which is a terrible way of planning; reasoning is like planning
MAJOR DISCUSSION POINT
Future AI Architecture and World Models
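The world-model idea described here can be sketched as a tiny planner: given a model that predicts the state at t+1 from the state and action at t, score candidate action sequences by their predicted outcome and commit only to the best one. The one-dimensional dynamics, cost function, and names below are illustrative assumptions, not LeCun's actual architecture:

```python
from itertools import product

# Toy world model: 1-D position; an action moves the agent by -1, 0, or +1.
def world_model(state: int, action: int) -> int:
    """Predict the state at time t+1 given the state and action at time t."""
    return state + action

def plan(state: int, goal: int, horizon: int = 4) -> list:
    """Search action sequences, predicting consequences before acting."""
    best_seq, best_cost = None, float("inf")
    for seq in product((-1, 0, 1), repeat=horizon):
        s = state
        for a in seq:            # roll the model forward; no real actions taken
            s = world_model(s, a)
        cost = abs(goal - s)     # how far the predicted outcome is from the goal
        if cost < best_cost:
            best_seq, best_cost = list(seq), cost
    return best_seq

print(plan(state=0, goal=3))  # → [0, 1, 1, 1]
```

The contrast with current agentic systems is the inner loop: consequences are predicted by the model before any action is taken, which is what makes planning (and, in LeCun's framing, reasoning) possible.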
Argument 9
Academic research on non-generative architectures is being overlooked by industry’s LLM focus
EXPLANATION
LeCun observes that world models and non-generative architectures (like his JEPA approach) are primarily being researched by academic groups interested in AI for science, while industry focuses entirely on LLMs. This creates a monoculture that makes the industry somewhat blind to alternative approaches.
EVIDENCE
Silicon Valley dominant players entirely focused on LLMs, everybody stealing each other’s engineers and working on same thing; creates monoculture that makes industry blind
MAJOR DISCUSSION POINT
Future AI Architecture and World Models
Argument 10
Bottom-up collaboration through code development combined with government and organizational support
EXPLANATION
LeCun advocates for a bottom-up approach where people collaborate on building infrastructure through platforms like GitHub, combined with support from governments and organizations. He emphasizes that ultimately people need to write code to make federated learning for open models work.
EVIDENCE
Groups in Switzerland (EPFL, ETH), UAE (MBZUAI), Korea and other countries have built their own LLMs and should join forces
MAJOR DISCUSSION POINT
Organizational Collaboration for Open AI
Argument 11
Existing country-specific LLM groups should join forces with support from CERN, UNESCO, and AI Alliance
EXPLANATION
LeCun identifies specific organizations and groups that could coordinate federated learning collaboration, including existing country-specific LLM projects and international organizations. He suggests Switzerland could play a key role given its hosting of international organizations and the upcoming AI summit.
EVIDENCE
Groups in Switzerland, UAE, Korea and other countries with existing LLMs; CERN, UNESCO, Switzerland with organizations in Geneva; AI Alliance promotes open source AI
MAJOR DISCUSSION POINT
Organizational Collaboration for Open AI
Faith Waidaka
3 arguments, 94 words per minute, 1085 words, 691 seconds
Argument 1
Building electrical and mechanical infrastructure for data centers in Africa is essential for making AI possible
EXPLANATION
Faith Waidaka identifies herself as someone who builds the physical infrastructure that enables AI, specifically the electrical and mechanical systems in data centers across Africa. As board chair of the Africa Data Center Association, she emphasizes the foundational role of physical infrastructure in AI democratization.
EVIDENCE
Board chair of the Africa Data Center Association; builds electrical, mechanical infrastructure in data centers in Africa
MAJOR DISCUSSION POINT
Infrastructure Requirements for AI
DISAGREED WITH
Sangbu Kim
Argument 2
AI democratization requires a comprehensive approach addressing multiple interconnected elements simultaneously
EXPLANATION
Waidaka concludes that democratizing AI cannot follow a one-size-fits-all approach and requires simultaneous attention to talent, compute, data centers, regulatory frameworks, and reforms. She emphasizes that all these elements must come together to make AI democratization possible.
EVIDENCE
Need for talent, compute, data centers, regulatory framework, reforms – everything to come together
MAJOR DISCUSSION POINT
Holistic Approach to AI Democratization
Argument 3
Moving inference closer to devices creates tension with compute requirements in the next decade
EXPLANATION
Waidaka observes an interesting balance challenge where training models may become smaller but inference might require more compute, while there’s also a push to bring inference closer to end-user devices. She notes that ten years provides significant time for research breakthroughs given AI’s rapid evolution.
EVIDENCE
Training models becoming smaller while inference takes up compute; bringing inference to devices close to people; AI progress over past decade
MAJOR DISCUSSION POINT
Technical Evolution and Compute Requirements
DISAGREED WITH
Yann LeCun
Arun Sharma
2 arguments, 157 words per minute, 140 words, 53 seconds
Argument 1
Physical world systems lag behind software evolution, creating implementation barriers for AI applications
EXPLANATION
Sharma points out that while AI software is evolving rapidly, the physical infrastructure and systems it needs to interact with remain archaic. He uses the example of a farmer with smart glasses who can get AI advice but still relies on outdated systems for ordering seeds or fertilizer, highlighting the gap between virtual and physical world capabilities.
EVIDENCE
Farmer wearing glasses example where seeds, fertilizer ordering still runs on archaic systems; software evolving much faster than hardware
MAJOR DISCUSSION POINT
Integration Challenges Between Digital and Physical Systems
Argument 2
Resource deployment challenges persist in critical sectors like education and healthcare in India
EXPLANATION
Sharma specifically mentions that despite India’s digital infrastructure advances, the country still struggles to effectively deploy resources in education and healthcare sectors. This highlights the ongoing challenges in translating digital capabilities into improved service delivery in essential areas.
EVIDENCE
The Indian system has not been able to deploy resources in the education or healthcare space, where we still lag
MAJOR DISCUSSION POINT
Sectoral Implementation Challenges
Audience
2 arguments, 146 words per minute, 166 words, 67 seconds
Argument 1
Federated learning coordination requires addressing architectural collaboration challenges beyond technical implementation
EXPLANATION
An audience member from CERN acknowledges that while federated learning is technically feasible, the real challenge lies in creating organizational structures and coordination mechanisms for collaboration. They seek ideas about which organizations could effectively coordinate such collaborative efforts.
EVIDENCE
Federated learning technologically easy but architecture of collaboration might be difficult
MAJOR DISCUSSION POINT
Organizational Collaboration for Open AI
Argument 2
Data availability may not be the only bottleneck for achieving artificial general intelligence
EXPLANATION
An audience member questions whether data is the sole limiting factor for reaching AGI, despite the comparison between human learning data consumption and AI training data. They also raise concerns about benchmarking AGI and how humans would evaluate systems that become smarter than humans.
EVIDENCE
Reference to 10^14 bytes data comparison between AI training and human learning; question about AGI benchmarking when systems become smarter than humans
MAJOR DISCUSSION POINT
Future AI Architecture and AGI Development
Agreements
Agreement Points
Data access and ownership are fundamental barriers to AI democratization
Speakers: Sangbu Kim, Sanjay Jain, Yann LeCun
Concentration of digitized data heavily skewed toward developed world
Personal data accessibility through protected means is essential for AI to reach everyone
Federated learning allows regions to contribute to global models while maintaining data ownership
All three speakers identify data access as a critical barrier, with Kim highlighting the severe inequality in global data distribution, Jain emphasizing the need for protected personal data access, and LeCun proposing federated learning as a solution that maintains data ownership while enabling global AI development
Open models are essential for democratizing AI access
Speakers: Saurabh Garg, Yann LeCun, Chenai Chair
Access to open models and AI literacy are primary barriers
Availability of top-performing open models is necessary but insufficient condition
Open models, talent development, and capacity building across the entire value chain are essential
All three speakers agree that open models are crucial for AI democratization, with Garg identifying access to open models as a primary barrier, LeCun noting they are necessary but not sufficient, and Chair emphasizing their importance for enabling innovation
Community participation and local ownership are critical for sustainable AI development
Speakers: Chenai Chair, Sangbu Kim, Sanjay Jain
Community participation and meeting people where they are builds trust in data infrastructure
Local data ownership and context control remains with communities despite infrastructure inequality
DPI provides consented access to individual records and transactions through federated learning approaches
These speakers converge on the importance of community involvement and local control, with Chair emphasizing participatory approaches, Kim highlighting local data ownership opportunities, and Jain describing DPI mechanisms that enable community control over their data
Talent development and capacity building are essential across the AI value chain
Speakers: Saurabh Garg, Chenai Chair, Sangbu Kim
Access to open models and AI literacy are primary barriers
Open models, talent development, and capacity building across the entire value chain are essential
User-centric AI services reduce training requirements and allow verbal interaction without technical skills
All three speakers emphasize the critical importance of developing human capacity, with Garg focusing on AI literacy, Chair advocating for comprehensive talent development across the value chain, and Kim emphasizing user-centric design that reduces technical skill requirements
Similar Viewpoints
Both speakers recognize that current AI systems are inefficient and advocate for approaches that reduce computational requirements – LeCun through market incentives driving efficiency improvements, and Garg through domain-specific models that require less infrastructure
Speakers: Yann LeCun, Saurabh Garg
Industry incentives naturally drive power consumption optimization because operational costs focus on power and hardware
Domain-specific models using less power and infrastructure are preferable to large language models
Both speakers strongly advocate for digital public infrastructure as a foundation for AI democratization, with Jain providing specific funding requirements and Garg emphasizing the importance of ensuring people have agency as co-creators rather than just consumers
Speakers: Sanjay Jain, Saurabh Garg
Digital public infrastructure deployment requires $500 million to bring everyone to same digital level
Digital public infrastructure must ensure access and agency for people to be co-creators, not just consumers
Both speakers advocate for bottom-up, community-driven approaches to AI development, with Chair demonstrating this through Masakhane’s success and LeCun proposing similar collaborative models for federated learning
Speakers: Chenai Chair, Yann LeCun
Participatory approaches like Masakhane demonstrate community ownership and sustainability
Bottom-up collaboration through code development combined with government and organizational support
Unexpected Consensus
Current AI systems are fundamentally limited and a new revolution is needed
Speakers: Yann LeCun, Sangbu Kim
Current LLMs are knowledge storage systems requiring enormous memory, but smarter systems could replace knowledge with intelligence
User-centric AI services reduce training requirements and allow verbal interaction without technical skills
It’s unexpected to see both a leading AI researcher (LeCun) and a World Bank representative (Kim) agree that current AI approaches are inadequate. LeCun argues for a complete architectural shift toward world models, while Kim emphasizes the need for more user-centric approaches, both suggesting fundamental changes are needed rather than incremental improvements
Physical infrastructure alone is insufficient without demand creation and use cases
Speakers: Faith Waidaka, Sangbu Kim
AI democratization requires a comprehensive approach addressing multiple interconnected elements simultaneously
Creating demand for computing power through clear applications is more crucial than just building infrastructure
It’s surprising that Waidaka, who builds physical data center infrastructure, agrees with Kim that infrastructure alone is not the solution. Despite her role in building the physical foundation for AI, she acknowledges that a holistic approach including talent, regulatory frameworks, and use cases is necessary
Overall Assessment

The speakers demonstrate remarkable consensus on key principles for AI democratization: the critical importance of data governance and community ownership, the necessity of open models and federated approaches, the centrality of capacity building and talent development, and the need for holistic rather than infrastructure-only solutions. There is also agreement that current AI approaches have fundamental limitations requiring new paradigms.

High level of consensus across diverse stakeholders (academic, government, civil society, private sector, international organizations) suggests these principles represent a solid foundation for policy and implementation. The convergence is particularly significant given the speakers’ different backgrounds and roles, indicating these viewpoints transcend sectoral interests and could form the basis for coordinated global action on AI democratization.

Differences
Different Viewpoints
Primary barrier to AI democratization
Speakers: Chenai Chair, Saurabh Garg, Sangbu Kim, Sanjay Jain
Language diversity creates enormous scope of work with over 2,000 documented African languages
Access to open models and AI literacy are primary barriers
Concentration of digitized data heavily skewed toward developed world
Personal data accessibility through protected means is essential for AI to reach everyone
Speakers identified different primary barriers: Chenai focused on linguistic diversity and community representation, Saurabh emphasized model access and literacy, Sangbu highlighted data inequality, and Sanjay stressed personal data accessibility through protected systems
Infrastructure vs. application focus for AI development
Speakers: Faith Waidaka, Sangbu Kim
Building electrical and mechanical infrastructure for data centers in Africa is essential for making AI possible
Creating demand for computing power through clear applications is more crucial than just building infrastructure
Faith emphasized the foundational importance of physical infrastructure for data centers, while Sangbu argued that developing use cases and applications to create demand is more important than just building physical infrastructure
Future of AI compute requirements
Speakers: Yann LeCun, Faith Waidaka
Current LLMs are knowledge storage systems requiring enormous memory, but smarter systems could replace knowledge with intelligence
Moving inference closer to devices creates tension with compute requirements in the next decade
Yann predicted that training models will become smaller as AI becomes more intelligent rather than knowledge-storing, while Faith observed the tension between this prediction and the practical need to bring inference closer to end-user devices
Timeline and approach for AI breakthroughs
Speakers: Yann LeCun, Faith Waidaka
Real breakthrough in hardware design won't happen for 10-20 years beyond CMOS transistors
Ten years provides significant time for research breakthroughs given AI's rapid evolution
Yann was more conservative about hardware breakthroughs, suggesting 10-20 years for major advances beyond current silicon technology, while Faith was more optimistic about the potential for breakthroughs within a decade given AI’s rapid progress
Unexpected Differences
Role of physical vs. digital infrastructure prioritization
Speakers: Faith Waidaka, Sangbu Kim
Building electrical and mechanical infrastructure for data centers in Africa is essential for making AI possible
Creating demand for computing power through clear applications is more crucial than just building infrastructure
This disagreement was unexpected because both speakers represent infrastructure-focused organizations (Africa Data Center Association and World Bank), yet they had fundamentally different views on whether to prioritize physical infrastructure building or application development first
Academic vs. industry AI research direction
Speakers: Yann LeCun
Academic research on non-generative architectures is being overlooked by industry’s LLM focus
LeCun’s criticism of Silicon Valley’s LLM monoculture was unexpected given his recent departure from Meta and his position as a leading industry figure, suggesting significant internal tensions about AI development directions within the tech industry
Overall Assessment

The discussion revealed moderate disagreements primarily around prioritization and sequencing rather than fundamental goals. Key areas of disagreement included: what constitutes the primary barrier to AI democratization, whether to prioritize physical infrastructure or applications first, and timelines for technological breakthroughs. Most speakers agreed on core principles like the importance of open models, community ownership of data, and inclusive AI development.

The disagreement level was moderate and constructive, with speakers offering complementary rather than contradictory perspectives. The disagreements reflect different professional backgrounds and regional contexts rather than fundamental philosophical differences. This suggests that a comprehensive approach incorporating multiple viewpoints would be most effective for AI democratization, rather than choosing a single approach. The consensus on core principles provides a strong foundation for collaborative action despite tactical differences.

Partial Agreements
All agreed on the importance of open models for AI democratization, but disagreed on implementation approach – Yann focused on federated learning and data contribution, Saurabh emphasized literacy and domain-specific models, while Chenai stressed community participation and comprehensive talent development
Speakers: Yann LeCun, Saurabh Garg, Chenai Chair
Availability of top-performing open models is necessary but insufficient condition
Access to open models and AI literacy are primary barriers
Open models, talent development, and capacity building across the entire value chain are essential
All agreed on the importance of local data ownership and community control, but differed on mechanisms – Sangbu focused on leveraging existing local data advantages, Chenai emphasized participatory community-driven approaches, while Sanjay advocated for formal DPI systems with consented access
Speakers: Sangbu Kim, Chenai Chair, Sanjay Jain
Local data ownership and context control remains with communities despite infrastructure inequality
Community participation and meeting people where they are builds trust in data infrastructure
DPI provides consented access to individual records and transactions through federated learning approaches
Both agreed on the need for efficient, scalable infrastructure solutions, but Sanjay focused on comprehensive DPI deployment while Saurabh emphasized smaller, specialized models as the solution to infrastructure constraints
Speakers: Sanjay Jain, Saurabh Garg
Digital public infrastructure deployment requires $500 million to bring everyone to same digital level
Domain-specific models using less power and infrastructure are preferable to large language models
Takeaways
Key takeaways
Democratizing AI requires a holistic approach addressing five key areas: access and energy, computing power, data access, talent building, and credible AI framework/policy
Data inequality is stark – over 80% of global datasets are from developed countries, with less than 2% from sub-Saharan Africa
Community participation and ownership are essential for building trusted AI infrastructure, as demonstrated by the Masakhane African Languages Hub's participatory approach
Current LLMs are primarily knowledge storage systems requiring enormous computational resources, but future AI systems focused on intelligence rather than knowledge accumulation could be more efficient
Digital Public Infrastructure (DPI) can enable AI democratization by providing trusted, interoperable systems that give people agency as co-creators rather than just consumers
Creating demand for computing power through practical use cases (agriculture, education, healthcare, government services) is more important than just building physical infrastructure
The next AI revolution will focus on world models that understand the real world through sensory input, enabling better planning and reasoning capabilities
Open source models and federated learning approaches can help regions maintain data sovereignty while contributing to global AI development
Resolutions and action items
Development of the METRI platform (Multi-stakeholder AI for Trusted and Resilient Infrastructure) as a modular digital public good
Implementation of Project Echo by Masakhane as a gender-responsive AI project focusing on women's economic empowerment and health
Continued funding and support for grassroots efforts like Masakhane for African language representation in AI
Bottom-up collaboration through code development for federated learning, potentially coordinated through organizations like CERN, UNESCO, and the AI Alliance
Investment in open source DPI systems that countries can adopt and customize (like MOSIP for digital ID, OpenG2P for government payments)
Focus on developing domain-specific, smaller models that require less computational power and infrastructure
Unresolved issues
How to effectively coordinate international collaboration for federated learning at scale
The timeline and practical implementation of transitioning from current LLMs to world model-based AI systems
Specific mechanisms for ensuring data sovereignty while enabling global AI model training
How to bridge the gap between rapidly evolving AI software and slower-moving physical infrastructure and systems
Standardization and interoperability challenges across different countries' DPI implementations
Sustainable funding models for community-driven AI development initiatives
Balancing profit motives of private sector with democratization goals
Addressing the fear of job displacement from AI, particularly in developing countries
Suggested compromises
Federated learning approach that allows data contribution to global models while maintaining local ownership and control
Modular DPI systems that can be customized by individual countries while maintaining interoperability
Combination of technological and policy-based mechanisms to prevent new dependencies while enabling collaboration
Focus on user-centric AI design that reduces technical barriers while building local capacity
Investment strategy that balances infrastructure development with use case creation and talent building
Open source approach combined with government and organizational support for sustainable development
Thought Provoking Comments
Current systems are not particularly intelligent, but they store knowledge. There is another revolution of AI coming, which actually my new company is built around, which intends to build systems that are smarter even if they don't necessarily accumulate as much knowledge, so those models will be smaller… your house cat is smarter than the biggest LLMs
This fundamentally challenges the prevailing narrative about current AI capabilities and reframes the entire discussion from scaling existing models to developing fundamentally different approaches. The cat analogy is particularly powerful in illustrating the gap between language manipulation and true world understanding.
This shifted the conversation from focusing on democratizing access to current AI systems toward considering what the next generation of AI might look like. It influenced Faith’s follow-up questions about compute requirements and forced other panelists to think beyond current LLM paradigms when discussing infrastructure needs.
Speaker: Yann LeCun
We need to think differently… even though computing power is very important, how can we really create the data demand? Without having very clear applications and some solutions, nobody can really run their own computing data center business in Africa.
This inverts the typical infrastructure-first approach to AI democratization, arguing that demand creation through practical applications should drive infrastructure development rather than the reverse. It’s a crucial economic insight often overlooked in technical discussions.
This comment redirected the panel’s focus from supply-side solutions (more compute, more data centers) to demand-side considerations (use cases, applications, user inspiration). It influenced subsequent discussions about practical applications and user-centric design, with multiple panelists later emphasizing the importance of meeting communities where they are.
Speaker: Sangbu Kim
If various regions of the world collect or digitize their cultural data… and then contribute to training a global model that would constitute eventually a repository of all human knowledge then those models would be much better quality than all the proprietary system because the proprietary system would not have access to that data… this can be done technically through federated learning
This presents a concrete technical pathway for developing countries to gain leverage in AI development by contributing unique cultural data while maintaining sovereignty. It transforms the narrative from ‘catching up’ to ‘leading through unique contributions.’
This concept became a recurring theme throughout the discussion, with Saurabh Garg building on it in his METRI platform proposal and influencing questions about federated learning coordination. It provided a technical foundation for several panelists’ arguments about data sovereignty and collaborative development.
Speaker: Yann LeCun
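The federated-learning mechanism LeCun alludes to can be sketched in a few lines: each region trains on its own data and shares only model parameters, which a coordinator averages (FedAvg-style), so raw data never leaves home. The toy linear model, region names, and numbers below are illustrative assumptions, not a description of any real deployment:

```python
# FedAvg sketch: regions share model weights, never their raw data.
# Toy task: fit y = w * x, with data held privately by three "regions".
regional_data = {
    "region_a": [(1.0, 2.0), (2.0, 4.0)],
    "region_b": [(3.0, 6.0)],
    "region_c": [(0.5, 1.0), (4.0, 8.0)],
}

def local_update(w, data, lr=0.01, steps=20):
    """Each region refines the shared weight on its own private data."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

w_global = 0.0
for _ in range(10):  # communication rounds: only weights cross the wire
    local_weights = [local_update(w_global, d) for d in regional_data.values()]
    w_global = sum(local_weights) / len(local_weights)  # server averages

print(round(w_global, 2))  # → 2.0, the true slope, learned without pooling data
```

The point of the sketch is the communication pattern: every region contributes to the global model, and what it keeps (its data) is exactly what the proprietary alternative would demand.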
Masakhane basically means "we build together", loosely translated from isiZulu. And that was then a creation of a participatory approach in knowledge building as a result of being excluded in spaces… if you're going to build trust, people have to see what the end value is and also be recognized.
This provides a concrete, successful example of community-driven AI development that challenges top-down approaches. The emphasis on recognition and participatory design offers a practical model for inclusive AI development that goes beyond consultation to genuine co-creation.
This grounded the theoretical discussions in real-world success, influencing how other panelists framed their responses about community engagement. It reinforced the importance of bottom-up approaches and was referenced by Sanjay Jain as an example of the kind of grassroots efforts that should be funded.
Speaker: Chenai Chair
AI will scale effectively only when data for everyone is available. So when I can get a personalized service because my personal data is accessible through some protected means to a model… DPI provides a way for data of all individuals… to be managed in a very trusted way, accessed with consent.
This connects AI democratization to broader digital infrastructure development, suggesting that individual data empowerment through DPI is prerequisite to meaningful AI access. It reframes the challenge from collective to individual data sovereignty.
This introduced the DPI framework as a foundational layer for AI democratization, influencing subsequent discussions about data sovereignty and individual empowerment. It provided a concrete policy pathway that other panelists could build upon, particularly regarding federated approaches and community ownership.
Speaker: Sanjay Jain
The incentives are there for the industry to reduce the power consumption of AI systems… because that's where the money goes. That's where you spend all the money when you operate an AI system. It goes into power and maintaining your hardware… The bad news is that it's progressing as fast as it can, and it's not fast enough.
This provides a realistic economic analysis of AI efficiency improvements, tempering optimistic expectations about rapid cost reductions while explaining why progress is happening. It’s particularly insightful because it aligns economic incentives with democratization goals.
This helped ground the discussion in economic realities and influenced how other panelists approached infrastructure planning. It suggested that waiting for dramatic efficiency improvements isn’t viable, pushing the conversation toward alternative approaches like smaller, domain-specific models and different architectural approaches.
Speaker: Yann LeCun
Overall Assessment

These key comments fundamentally shaped the discussion by challenging conventional approaches to AI democratization. Rather than focusing solely on replicating Western AI infrastructure in developing countries, the conversation evolved toward more nuanced strategies: leveraging unique cultural data as competitive advantage, building from community needs upward, and preparing for fundamentally different AI architectures. The interplay between Yann LeCun’s technical insights about AI limitations and future directions, combined with Chenai Chair’s community-driven examples and the policy frameworks from Sanjay Jain and Saurabh Garg, created a multi-dimensional approach that moved beyond simple resource transfer to genuine co-creation and innovation. The discussion ultimately reframed AI democratization from a catch-up game to an opportunity for leapfrogging through alternative approaches.

Follow-up Questions
How can we create demand for computing power in developing regions, particularly Africa?
Kim emphasized that while physical infrastructure is important, the more crucial question is how to create applications and solutions that generate demand for computing power, without which data center businesses cannot be sustainable in Africa.
Speaker: Sangbu Kim
What technological breakthroughs are needed to significantly reduce AI power consumption beyond current optimization efforts?
LeCun noted that while industry incentives exist to reduce power consumption, progress isn’t fast enough, and real breakthroughs may require moving beyond CMOS transistors and silicon, which won’t happen for 10-20 years.
Speaker: Yann LeCun
How can federated learning be implemented technically to allow regions to contribute to global AI models while maintaining data ownership?
LeCun mentioned this as a technical solution for democratizing AI but acknowledged he didn’t want to get into the technical weeds, leaving the implementation details as an area for further exploration.
Speaker: Yann LeCun
What organizational structure could coordinate federated learning collaboration between different countries and regions?
While federated learning is technically feasible, the architecture of collaboration between different entities remains a challenge that needs to be addressed.
Speaker: Daniel Dobos (audience member)
How can the METRI platform be developed and scaled as a modular digital public infrastructure for AI?
Garg introduced the concept of METRI (multi-stakeholder AI for trusted and resilient infrastructure) but the specific implementation details and scaling mechanisms need further development.
Speaker: Saurabh Garg
How can world models be developed and what research funding is needed to accelerate this next AI revolution?
LeCun emphasized that world models represent the next AI revolution but noted that most work is happening in academia while industry focuses on LLMs, suggesting need for more research support.
Speaker: Yann LeCun
How can the lag between rapidly evolving AI software and slower-changing physical infrastructure be addressed?
Sharma pointed out the disconnect between fast software evolution and archaic physical systems, particularly in sectors like agriculture, education, and healthcare.
Speaker: Arun Sharma (audience member)
What are the specific benchmarks and evaluation methods for determining when AI systems reach human-level intelligence?
The question of how to benchmark and evaluate AGI or human-level AI remains unresolved, particularly regarding how humans would evaluate systems that might be smarter than them.
Speaker: Audience member
How can community network models be adapted and scaled for AI infrastructure development?
Chair referenced community network models for last-mile connectivity as a template for community-owned AI infrastructure, but the specific adaptation mechanisms need further research.
Speaker: Chenai Chair
What specific mechanisms are needed to ensure federated AI systems don’t create new dependencies while maintaining data sovereignty?
While Garg mentioned the need for technological and policy-based protocols, the specific mechanisms to achieve this balance require further development.
Speaker: Saurabh Garg
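One of the follow-up questions above asks how federated learning could technically let regions contribute to a shared global model while keeping ownership of their data, a point LeCun left open. As a rough illustration only (not drawn from the session, and with all names and numbers hypothetical), the core federated-averaging idea can be sketched in a few lines: each participant computes a model update on its private data, and only the model parameters, never the raw data, are pooled.

```python
# Minimal sketch of federated averaging (FedAvg), the core mechanism behind
# federated learning: regions train locally and share only parameters.
# All names and data here are illustrative, not from any real deployment.

def local_update(weights, data, lr=0.1):
    """One gradient step of a 1-D linear model y ~ w*x on local data."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def federated_average(local_weights, sizes):
    """Aggregate local models, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(local_weights, sizes)) / total

# Two hypothetical "regions" hold private data drawn from y = 3x.
# The raw (x, y) pairs never leave their region.
region_a = [(1.0, 3.0), (2.0, 6.0)]
region_b = [(0.5, 1.5), (1.5, 4.5), (2.5, 7.5)]

w = 0.0  # shared global model parameter
for _ in range(50):  # communication rounds
    wa = local_update(w, region_a)
    wb = local_update(w, region_b)
    w = federated_average([wa, wb], [len(region_a), len(region_b)])

print(round(w, 2))  # converges toward the true slope 3.0
```

In a real system the averaging step would run on a coordinating server, ideally over encrypted or securely aggregated updates; the coordination question raised by Daniel Dobos, who aggregates and under what governance, sits precisely in that averaging step.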

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Towards a Safer South Launching the Global South AI Safety Research Network


Session at a glance: summary, keypoints, and speakers overview

Summary

This discussion centered on the launch of the Global South Network for Trustworthy AI at the India AI Impact Summit, addressing the critical need for inclusive AI safety infrastructure that represents the perspectives of Global South countries. Dr. Urvashi Aneja, founder of Digital Futures Lab, opened by highlighting how AI systems are rapidly being deployed in critical sectors across the Global South, where low institutional capacity and deep inequities create both immense opportunities and significant risks. She emphasized that Global South organizations remain underrepresented in global AI safety governance, despite being uniquely positioned to identify real-world deployment risks invisible to lab-based evaluations.


Mr. Abhishek Singh from India’s AI Mission stressed that while everyone agrees on the need for safe AI, the challenge lies in developing technical tools and benchmarks to address risks, particularly for multilingual contexts where most models are evaluated only in English. Ambassador Philip Thigo from Kenya pointed out the structural exclusion of Global South countries from safety conversations and emphasized that safety must extend beyond technology to include environmental harms, biases, and full lifecycle accountability. The panelists identified several critical gaps in current AI safety approaches, including the need to redefine safety according to social and cultural contexts, address linguistic nuances beyond simple translation, and understand emerging harms during actual usage.


Industry representatives acknowledged the challenge of scaling context-sensitive evaluations across thousands of languages and millions of cultural settings while maintaining sustainability. The discussion concluded with commitments from various organizations to contribute to multilingual benchmarking, incident reporting tools, and infrastructure investments, establishing a foundation for collaborative AI safety work that centers Global South perspectives and needs.


Keypoints

Major Discussion Points:

Launch of the Global South Network for Trustworthy AI: The primary focus was announcing this new network bringing together research institutions from Asia, Africa, and Latin America to evaluate real-world AI impacts and build trust mechanisms localized to different linguistic, cultural, and infrastructural contexts.


Context-sensitive AI safety challenges in the Global South: Speakers emphasized that current AI safety frameworks miss critical contextual factors like local languages, cultural norms, gender dynamics, and social inequalities. Examples included AI models failing to understand local expressions (like “waters have broken” translating to “thrown away water”) and agricultural tools with male voices potentially exacerbating gender-based violence.


Structural gaps in global AI governance: Multiple speakers highlighted that Global South countries are underrepresented in global AI safety infrastructure, with only Kenya mentioned as a member of international AI safety institutes. This creates a disconnect between where AI harms occur most and where safety decisions are made.


Need for multilingual and multicultural evaluation systems: The discussion emphasized developing benchmarks beyond English-language models, creating evaluation tools that capture societal risks specific to Global South contexts, and building sustainable, community-led assessment frameworks.


Concrete initiatives and commitments: The network outlined five flagship projects including multilingual AI benchmarks, gender safety taxonomy, procurement guidelines, evaluation methodologies, and healthcare AI assessments. Industry partners like Microsoft committed to infrastructure investments and data sharing.


Overall Purpose:

The discussion aimed to formally launch the Global South Network for Trustworthy AI and establish a collaborative framework for addressing AI safety challenges specific to developing countries, while advocating for greater inclusion of Global South perspectives in international AI governance.


Overall Tone:

The tone was collaborative and urgent throughout, with speakers expressing both excitement about the network’s potential and concern about the pressing need for action. There was a consistent emphasis on moving beyond theoretical discussions to practical implementation, with industry and government representatives showing strong commitment to supporting the initiative. The conversation maintained a professional yet passionate quality, reflecting the speakers’ shared belief in the critical importance of inclusive AI safety.


Speakers

Speakers from the provided list:


Dr. Urvashi Aneja – Founder and Director of Digital Futures Lab


Mr. Abhishek Singh – Leadership role in India AI Summit and India AI mission


Ambassador Philip Thigo – Special Envoy on Technology from the Republic of Kenya


Mr. Quintin Chou-Lambert – Chief of Office and AI Lead, UN Office for Digital and Emerging Technologies


Ms. Natasha Crampton – Vice President and Chief Responsible AI Officer at Microsoft


Dr. Rachel Sibande – Senior Program Officer AI for Africa at the Gates Foundation


Ms. Chenai Chair – Director of the Masakhane African Language Hub


Mr. Amir Banifatemi – Chief Responsible AI Officer at Cognizant


Dr. Balaraman Ravindran – Head Center of Responsible AI at IIT Madras


Additional speakers:


None – all speakers mentioned in the transcript appear in the provided speaker list.


Full session report: comprehensive analysis and detailed insights

This discussion centered on the launch of the Global South Network for Trustworthy AI at the India AI Impact Summit, bringing together government officials, industry leaders, civil society representatives, and academics to address the underrepresentation of Global South perspectives in international AI safety governance.


Network Vision and Structure

Dr. Urvashi Aneja, founder and director of Digital Futures Lab, opened by highlighting the fundamental challenge: while AI systems are being rapidly deployed across critical sectors like healthcare, education, and government in the Global South, these regions remain underrepresented in global safety infrastructure. She noted the paradox that countries with the greatest potential to leverage AI for development are precisely those most excluded from safety governance.


Dr. Aneja outlined five flagship projects for the network: multilingual AI benchmarking (with the Collective Intelligence Project and CARIA), a gender safety taxonomy project (with GXD Hub and the Global Center for AI Governance), procurement guidelines development, evaluation methodology work (with ITS Rio), and health information systems evaluation. The founding partners include Digital Futures Lab, Sirai from IIT Madras, Global Center for AI Governance, ITS Rio, and International Innovation Corps.


Government Support and Policy Alignment

Mr. Abhishek Singh from India’s AI Mission provided governmental backing, emphasizing that while consensus exists around the need for safe and trusted AI, the challenge lies in developing technical tools and benchmarks to address identified risks. He highlighted a critical gap: most AI models are assessed using predominantly English-language benchmarks, despite countries like India having 22 official languages and numerous dialects.


Singh noted the network’s alignment with the New Delhi Frontier AI commitments, which secured agreements from major AI companies to share usage data and develop multilingual performance benchmarks.


Structural Representation Challenges

Ambassador Philip Thigo from Kenya described the network as “timely but also late” due to the structural exclusion of Global South countries from safety conversations. He observed that Kenya is the only Global South member of international AI safety institutes, illustrating the representation gap. Thigo argued that “the global north of artificial intelligence is two countries and a few companies,” suggesting even traditionally developed nations face exclusion from AI governance.


He expanded the definition of AI safety beyond technical considerations to include environmental harms, biases, misinformation, and full lifecycle accountability from “minds to models,” emphasizing that governance is fundamentally about power.


Contextual Safety and Real-World Examples

Dr. Rachel Sibande from the Gates Foundation provided compelling examples of how linguistic nuance affects AI safety. She explained how the phrase “waters have broken”—a critical medical emergency—translates literally as “thrown away water” from local languages, potentially causing AI systems to miss life-threatening situations. She emphasized that language support requires understanding lived experiences and cultural contexts, not merely translation capabilities.


Sibande also discussed her work in Malawi, referencing the country’s identity as the “warm heart of Africa,” and highlighted challenges in measuring emerging harms during AI usage, such as cognitive substitution or emotional dependency.


Ms. Chenai Chair from the Masakhane African Language Hub reinforced these concerns, noting that Africa has over 2,000 documented languages, with Masakhane working on only 50. She provided an example of how agricultural tools with male voices could potentially increase gender-based violence in contexts with high gender inequality, demonstrating how design choices can have profound social consequences.


Industry Perspectives and Implementation Challenges

Ms. Natasha Crampton from Microsoft articulated the central scaling challenge: how to extend thoughtful, community-led evaluation work across thousands of languages and cultural settings while maintaining sustainability. She emphasized the need for continuous rather than one-time evaluation systems and referenced Microsoft’s commitment to the New Delhi Frontier AI agreements.


Mr. Amir Banifatemi offered a more critical assessment, arguing that safety lacks clear definition and is not integrated into companies’ financial planning or cost structures. He observed that “there is no penalty of not being safe,” highlighting the need for regulatory frameworks that make safety a financial imperative.


Coordination and Network Proliferation

Dr. Balaraman Ravindran from IIT Madras raised important questions about coordination, noting that multiple AI safety networks are launching simultaneously—including initiatives in Africa, China, and through UN processes. He called for coordination rather than competition among networks and suggested focusing on problems requiring cross-border collaboration.


Multilateral Integration

Mr. Quintin Chou-Lambert from the UN Office for Digital and Emerging Technologies discussed the evolution of international AI discussions from the concentrated focus of Bletchley Park to broader participation in subsequent summits, with over 100 countries now engaging in discussions. He referenced the UN Global Dialogue on AI Governance and the independent international scientific panel on AI as potential integration points for the network’s work.


Technical Challenges and Event Context

The event faced some technical difficulties, with portions of Dr. Aneja’s presentation becoming repetitive due to technical issues. Time constraints affected the panel discussion format, and the session concluded with mentions of a photo opportunity for participants.


Conclusion

The launch represents an important step toward including Global South perspectives in AI safety governance. The network aims to address critical gaps in current evaluation systems while navigating challenges around scaling, coordination with other initiatives, and translating evaluation work into meaningful protections for citizens across the Global South. Success will depend on the network’s ability to maintain contextual sensitivity while developing scalable methodologies and effectively integrating with broader international governance processes.


Session transcript: complete transcript of the session
Dr. Urvashi Aneja

Thank you. Good evening, everyone. My name is Urvashi Aneja. I am the founder and director of Digital Futures Lab, and I am so excited to see all of you here for the launch of this network. It's a real pleasure to welcome you to the launch of the Global South Network for Trustworthy AI here at the India AI Impact Summit. On behalf of Digital Futures Lab and our other founding partners, Sirai from IIT Madras, the Global Center for AI Governance, ITS Rio, and International Innovation Corps, thank you all for being here. We're especially grateful to Mr. Abhishek Singh, Ambassador Philip Thigo, Mr. Quintin Chou-Lambert, and to all our distinguished speakers and guests who are joining us today.

Across the Global South, AI systems are being rapidly deployed in critical social sectors such as healthcare, education, the judiciary, and government. While the opportunities are immense, many of these contexts are also marked by low institutional capacity, deep societal inequities, polarization, and populations with low levels of literacy. So while the potential is immense, the risks and harms are also immense. It is therefore particularly important that we figure out ways to make AI safe and trustworthy in these contexts, not only to protect these populations and ensure that we don't exacerbate existing harms, but also to build the infrastructure for safe and inclusive AI adoption.

Unfortunately, Global South organizations, Global South communities, and Global South states remain underrepresented in global safety and governance infrastructures. Many countries in the Global South are unlikely to have their own safety or oversight institutes even in the near term. There is a real risk, therefore, that the concerns and priorities of these countries and communities remain underrepresented in the global safety infrastructure, and these are precisely the countries with the most potential and the most opportunity to leverage AI. Independent civil society organizations are uniquely positioned to address this gap. Their proximity to real-world deployment contexts enables them to surface risks that are invisible to lab-based evaluations or testing. The grounded evidence that civil society organizations can bring can inform global safety benchmarks, standard-setting processes, and risk assessments, providing corrective signals to technical and regulatory institutions.

The Global South Network for Trustworthy AI works to advance exactly these objectives – to evaluate the real-world impact of AI systems, to build trust and oversight mechanisms localized to different linguistic, cultural, and infrastructural contexts, and to elevate Global South perspectives in global AI governance forums. It is particularly encouraging that this initiative also aligns closely with the recently announced New Delhi Frontier AI commitments.

The Global South Network for Trustworthy AI brings together some of the leading research institutions from across the Global South. We are joined by a community of organizations from Asia, from Africa, and from Latin America, whose names you see displayed behind you. I also want to take this opportunity to highlight some of the key activities that we're going to be doing as part of the network. I think one of the key things that we want to do as part of the network is to really build an independent evidence base to generate community-informed analysis of the societal, ethical, and distributional risks of AI systems across diverse contexts.

We also want to do real-world deployment assessment to conduct contextual and public evaluations of models and applications across diverse social contexts. We also want to push the field of evaluations, push the science of evaluations, where we say that benchmarks are very important, but benchmarks as they stand today do not necessarily capture all the societal risks that we see in the Global South. So how do we ensure that the evaluation work that we're doing also captures some of those harms? In some sense, what we want to do with the network is field building. We want to bring together Global South civil society organizations to pool in their collective intelligence, to pool in their capacities, and to advocate together for the representation of Global South concerns on global governance forums.

So what we are trying to do here is field building within the Global South around AI safety and around building that trust infrastructure. And eventually what we hope is that all of this amounts to collective advocacy. We see an important role for the network in creating a connective tissue between the global governance architecture, the global safety infrastructure, and what's happening on the ground. We hope the network can provide that visibility into real-world impact to technology companies who are designing tools and safety infrastructure, as well as to governments and international organizations who are building the architecture of global AI governance. So with that, I want to thank you all. Oh, wait, I have one more thing to share with all of you.

I'm not ready to thank you yet. I also want to showcase some of the projects that we'll be doing in the coming year. Picking up on yesterday's commitments, one of the things that we'll be doing is building benchmarks for multilingual AI. This is with our network partners, the Collective Intelligence Project and CARIA, and we're really excited to start this work. We're also going to be doing work on gender and safety. This is with our partners at GXD Hub and the Global Center for AI Governance to build a taxonomy of gender harm so that we can start building a more robust incident reporting database when it comes to gender-related harms and really advance gender safety in digital spaces.

The third piece that we’re going to be working on this year is around procurement. All of the evaluation work that we do, all the benchmarks that we build, all of that has to eventually feed into public policy. And so we hope that some of this work can support procurement. And procurement, we think, is a really important lever for countries in the global south to shape markets for responsible innovation. I think we’ve all heard a lot about the kind of third way of AI governance that India brings to the global governance landscape. And procurement can be an important lever of making that third way a reality and setting the bar for what responsible innovation looks like.

Like I mentioned earlier, we also want to push on the science of evaluation. What does good evaluation look like? What are the kinds of methodologies that we need? What are the kinds of methodologies that reflect the concerns and the capacities of communities in the Global South? So we're very excited to be doing this work with ITS Rio, who is also one of the founding partners, specifically to implement and advance this discussion on evaluations. We'll be looking at labor market impacts in the Global South. And finally, we're going to be looking at evaluations of health information systems: do the existing generative AI tools and large language models deliver for clinicians? Do they deliver for doctors? What more can they do to support the needs of healthcare professionals in the Global South?

So those are the five big flagship projects that we're going to be launching within the coming year. We're going to be very busy, as you can see; we have a lot that we're going to try and get done, and we're really excited to be on this journey with all of you. We would love to engage with all of you after the launch and see how we build this civil society and research infrastructure together. So with that, I am delighted to welcome our keynote speakers, and I would first like to give the floor to Mr. Abhishek Singh. Sir, thank you for your continued support, for the network, and for your leadership of the India AI Summit.

Over to you, sir.

Mr. Abhishek Singh

Thank you, Urvashi. First and foremost, I'd like to congratulate the whole team, the network which has brought together this Global South Network for Trustworthy AI. A few months back, when we started discussing this concept with Urvashi, with Kalika, with my team, we asked ourselves how to go about it. Because safe and trusted AI is something that nobody disagrees with. Everybody says that while AI innovation is happening, we must protect ourselves, we must secure ourselves from the harms that can come from misuse of AI or from the risks that frontier AI poses. So yes, we did have Yoshua Bengio's report, the scientific panel report, which is part of all the impact summits, the Action Summit and the Bletchley Park Summit, and which identifies the risks that frontier AI models pose.

But what we do believe is that just identifying the risks is not sufficient. We need to think about how we address those risks. And for addressing those risks, you first need the technical tools, the capacity to identify those risks. What are the benchmarks on which you will evaluate them? Some of these Urvashi identified, like how various models perform on multilingual benchmarks. Very often, most models are evaluated on benchmarks which are predominantly in the English language. But if you look at India, a diverse country, we have 22 official languages and multiple other dialects. How do we evaluate how a model performs on various domains with prompts given in those languages?

We don't have specific linguistic benchmarks. The same applies to many countries of the Global South. So we felt that while limited expertise exists in some institutions where such research is going on, Serai, where Professor Balaraman Ravindran is leading it, is one of them. There are many labs, of course, whether it's Microsoft Research or other labs where such work is going on. The AI Security Institute in the UK is doing some work in this direction. The OECD has been doing some work. But how do we enable access to such resources, such tools, such studies for the larger global majority? With that, this whole concept of creating a Global South Network for Trustworthy AI came in.

And then we immediately had these conversations with all the key stakeholders and partners. We got a lot of support from almost all stakeholders. And alongside that, the conversation on the New Delhi Frontier AI commitments was also going on, which Kalika from my team was leading. And luckily, we were able to announce it, with all models committing to those two commitments: sharing usage data as well as multilingual performance benchmarks. So that was a huge achievement. And I feel that the launch of this Global South Network for Trustworthy AI is a further step in that direction. How do we enable compliance with those commitments? How do we ensure that this data will be shared?

How do we create tools for evaluating models in various languages? How do we build up capacity in all countries of the Global South? How do we share resources? How do we share knowledge across countries? This is just the beginning, and I feel that, with support from all industry organizations, the frontier AI labs, the research organizations, and governments across the world, this can really grow into a resource that can be a global utility. So I compliment the whole team involved in doing that. The launch of the network is the first step. But how do we action it? How do we make it functional? How do we ensure that we get the necessary support from all stakeholders?

Very often whenever we talk about trusted AI, whenever we talk about safe AI, some people think that we are trying to stifle innovation. The objective is not that. We always say that while the primary objective is to ensure diffusion of AI, primary objective is to ensure that more and more users benefit from the usage of AI. But at the same time, we need to do that in a responsible manner. We need to do it in a safe manner. We do need to do it in a trustworthy manner to limit the harm that can be caused. So this Global South Network for Trustworthy AI which is being launched will work in that direction. It will be an institution that will support not only India but the entire Global South.

And I am sure, with the presence of all the speakers in this session and the strong commitment that industry, countries, and multilateral organizations are showing to this initiative, this will get further strengthened in the days to come. There is a lot of work that Urvashi and team are taking on their own, but we will be there to provide all necessary support from the India AI Mission, and we will work towards ensuring that you get the same level of support from every participating country which is here. So thank you once again, congratulations on this launch, and I look forward to working towards these objectives in the near future.

Thank you.

Dr. Urvashi Aneja

Thank you, sir, for your remarks and, most importantly, for your support. I think it means a lot to us to be working so closely with the India AI Mission, and we're really excited to be able to deliver on this promise. It's now my honor to invite Ambassador Philip Thigo, the Special Envoy on Technology from the Republic of Kenya, to share his reflections.

Ambassador Philip Thigo

Thank you so much for this opportunity to share my reflections. And I noticed that this is really a women-led network, so again, congratulations, Urvashi and Rachel, for putting this together.

Before we celebrate the launch of the network, I think we must acknowledge the structural problem around the safety conversations and the safety infrastructure that has been built in the last three years. I think the Global South has always been excluded from this conversation. I say this from a position of strength because Kenya is, I think, the only Global South member of the international network of AI safety institutes.

And so there's a challenge there. That model, which is not inclusive of a global majority that in most cases bears the brunt and the impacts of AI, is not acceptable. And so this network, in my sense, is timely but also late. There's almost an urgency in how closely we need to work to scale up what this network does. The second part, of course, as I mentioned, is that a lot of the global majority countries are not represented, yet they are the ones that not only bear the brunt of the models but also bear their adverse societal harms. Kenya is one of the countries that uses one of the models.

and from the use cases we see that they use it for the wrong reasons. Emotional support or companionship, it’s not necessarily for anything meaningful or productivity. And so as the world advances, it therefore behoves us that we work with these frontier model companies to ensure that their models are safe beyond secure, but also are more trustworthy. The second part, of course, is that part of model evaluations assumes access. We now know that a lot of my colleagues who are doing model evaluations are doing it from an external point of view. So we need to be very clear that global majority countries, and by this when I say global majority countries, we also have a new global south in AI, because it’s just not the global majority.

We know that the global north of artificial intelligence is two countries and a few companies. So we must, beyond this, also extend to include other colleagues, whether from Western Europe or Latin America. Safety must also go beyond technology, towards socio-technical issues. In Kenya we look at AI from minds to models, and so safety must also include environmental harms, biases, misinformation, and disinformation, but also harms to water and the environment; we need full lifecycle accountability. It’s good to evaluate the models, but it’s equally good to evaluate the footprints of the models. There are four structural gaps that we see, and this is why I love this network. One is that, yes, you want global majority folks to evaluate the models, but we have grave red-teaming and capacity gaps, so I hope that this network will look at this.

Secondly, I think there are also issues of access to compute. We can’t have global majority researchers trying to evaluate models without access to the compute to do that. The third part, of course, has already been mentioned: issues around linguistic and cultural mismatch, so we need to address that. The other part, of course, is benchmarking as governance power. Benchmarks are not neutral. Sometimes I like to be honest, because that’s what evaluation needs to do. And so we need, in most cases, to ensure that a handful of institutions alone do not define what risks are measured, what harms are prioritized, and what safe performance means. Governance is about power. And we must deconcentrate that power, even if its concentration is unintentional.

Finally, I think, for me, evaluation is also about agency. And we must have a notion of agency around these models, but also including sovereign capability. As we know, a lot of our countries are trying to build sovereign models, but also sovereign capabilities across the stack. What should this network deliver, in my view? I’ll humbly make these quick suggestions. One: yes, it’s good to have the network, but can we have regional nodes for it? Because Africa, and I speak for Africa, is not a country; it’s 54 countries, so expand to have nodes. Secondly, include multilingual benchmark datasets. There could be an interesting annual red-teaming exercise. And why not publish a Global South AI Safety Report with an expansive definition of what safety is?

And I would be remiss if I didn’t say: how do we fit this into the multilateral process? We already have a global UN scientific panel on AI, and there’s a global dialogue on AI governance. I’m one of the champions for this, so hopefully we will get this in there. Finally, let’s close the accountability loop. How does all this ultimately matter for citizens? We can evaluate all we want, but if the evaluations don’t translate into outcomes for citizens, it won’t matter.

Dr. Urvashi Aneja

Thank you, Ambassador, for highlighting the urgency of this work and also reframing the safety conversation for the Global South. And just to say, we are planning to have regional hubs, and we do. And I think the point about how we engage with the multilateral system is very important; we will have the Indian AISI as part of our steering committee, and we hope we can work with the government of Kenya as well. And, of course, we have Professor Ravindran, who is part of the scientific council, so we will be relying on him as well. Thank you for your remarks. And with that, I’d like to call our final keynote speaker for the day, who represents the UN Office for Digital and Emerging Technologies.

I’m pleased to invite Mr. Quintin Chou-Lambert, the Chief of Office and AI Lead, to deliver the next keynote. Thank you.

Mr. Quintin Chou-Lambert

There is less, perhaps, infrastructure or energy connection to go around. So the concept of AI safety edges into this more contextual field, and that’s where local perspectives and field-tested examples, which we’re missing, can be very helpful to surface. And I’d say the idea of AI standards as technical standards doesn’t solve that issue, because a one-size-fits-all standard will not be contextually sensitive. So moving from scaling a single, very concentrated, highly expensive model across a massive user base to more tailored, small language models for specific contexts turns the issue of AI safety into a fuzzier kind of discussion, and one which really needs empirical evidence.

And I think about the trends in the institutional discussions: from Bletchley Park to Seoul, where there were also around 30 countries signing the declaration, to Paris, where you had 60-plus, and now here, over 100 countries engaging. We now have the United Nations Global Dialogue on AI Governance, which will include the whole of the 193 member states, informed by analysis from an independent international scientific panel on AI, which will look at the risks and also the opportunities and impacts of AI. And so, as the conversation in these summit settings and at the international level has widened to include more countries and more people, covering more of humanity, the focus has been allowed, partly through open-source developments, to broaden and encompass other perspectives.

And that’s why, to close and to echo Ambassador Thigo, these kinds of networks play a crucial role in connecting and bringing examples of the challenges that we face, and cases of threats from various sources to local people, into discussions, so that international discussions do not ignore, omit, or discount the perspectives of the vast majority of people on the planet. Thank you very much.

Dr. Urvashi Aneja

Thank you, Mr. Chou-Lambert, for those remarks. I’d now like to call our panelists onto the stage: Ms. Natasha Crampton, Vice President and Chief Responsible AI Officer at Microsoft; Dr. Rachel Sibande, Senior Program Officer, AI for Africa, at the Gates Foundation. Before you sit, we’re going to take one quick picture. Ms. Chenai Chair. I don’t see you. Oh, there you are. Yes, okay. Director of the Masakhane African Language Hub. Mr. Amir Banifatemi, Chief Responsible AI Officer at Cognizant. And last but certainly not least, Dr. Balaraman Ravindran, Head of the Centre for Responsible AI at IIT Madras. Yes, and can we get the keynote speakers as well? Thank you. As with all good things in life, we’re short on time.

So let’s get started. Rachel, I’m going to start with you. Where, according to you, do you feel we still lack clarity on how safe and reliable AI systems are when they’re deployed in real-world contexts in the Global South?

Dr. Rachel Sibande

Thank you. So, a couple of things, maybe two or three. Number one: we need to redefine what is safe and what is harmful, as far as AI models or applications are concerned, according to the socio-cultural context they are deployed in. That means that having models or applications that are great at understanding the data or the patterns to generate content is not enough if they do not understand the social norms, the gender dynamics, the religious beliefs, the political sensitivities, or indeed even the humor, the slang, or the tone, particularly now that voice is becoming a key channel for delivery of AI. So we need to redefine safety and harm in the context in which AI models are deployed.

So I think we’re missing that, but hopefully we get there. I think the second piece is around language. It’s not enough for a large language model to have strong translation capabilities. Language in itself is not just about vocabulary; it’s also about lived meaning, lived experience. I come from a beautiful country called Malawi, also called the warm heart of Africa. Now, suppose you’re deploying a model there for pregnant mothers to access advisory messaging. If the mother says her waters have broken, which clinically is a critical incident that should warrant referring that mother to a health facility, and you translate that from the local language into English, which is what most of these large language models and applications have been benchmarked on, it will literally mean “I have thrown away water.”

So if the model is not trained to understand that context, you will miss that flag. And then finally, I wanted to say that we also need to understand the harms that emerge as people use the AI models. Currently, I think much of the benchmarking is done on the content and on predefined metrics. So, a final example: personally, I use my AI companion as my therapist. It’s the one persona that knows a lot about my personality from all spheres: as a mother, as a career person, my finances, all of that. But at what point can we track whether I’m substituting that AI model or application for my own cognitive capabilities, or becoming overly emotionally dependent on it?

So I think those are the three areas we’re missing, and hopefully we can get better at them. Thank you.

Dr. Urvashi Aneja

Thank you, Rachel, and thank you also for those powerful examples, because I think we’ve been saying some of this at an almost theoretical level, and those examples really bring home the gaps in where the current safety conversation is. Chenai, from a civil society perspective, what do you feel companies or developers often miss about the safety implications of deploying AI systems in the Global South?

Ms. Chenai Chair

That’s one thing they miss: the user experience. On a more serious note, thank you, Urvashi. It’s great to piggyback on what Rachel said, and I was like, are we reading the same notes? So I think what is really missed when people deploy some of these solutions is the context in which they’re deploying the tool. This is particularly looking at the African continent, where there are high levels of gender inequality, a very youthful population with young people often unemployed, and older people forgotten in the development of technologies. So I don’t know who we’re developing for, but sometimes we actually don’t consider that diversity and the inequalities that exist.

So you can find that sometimes, when these tools are deployed, they actually further exacerbate a situation of inequality. And I’ll give you one example: an agricultural tool with a voice system on it, to provide farmers or women with information on what to plant, may actually have a male-sounding voice. If in that context there are high levels of gender-based violence or a lack of trust, and the community members were not consulted in the design process, what it actually leads to is exacerbating an already existing situation. And that is an example that actually did happen when people were deploying Internet solutions for a community. Then secondly, also think about who gets left behind in deploying these solutions.

This is where language, as Rachel was mentioning, comes in. On the African continent, we have over 2,000 documented languages. Masakhane is working on only 50 of those African languages to build up quality datasets. So what you then find is that when people deploy technologies, even in something like Kiswahili, which now has a large number of datasets, people just don’t speak the same Kiswahili across East Africa. In Kenya, if you go to Nairobi, what’s spoken will be Sheng, which is not even Kiswahili, as I’m being corrected. And if you go to the coast, in Mombasa, it will be completely different. So we have to actually take into account the context and nuance of what is being deployed.

And then lastly, the way in which the technology is actually used: if deployment doesn’t take into account the whole ecosystem of the end user, it can actually result in misuse. And I want to specifically say that there are two forms of misuse here. There are people who unintentionally carry out a problematic, harmful act online based on how they’re interacting with the technology, particularly with content in their own language; and we know that content moderation for the global majority is not sufficient, or that people are underpaid, as we’ve seen in the cases that came out about content moderators in Kenya. Then there’s intentional misuse. This is where we find gendered disinformation and the use of deepfakes to discredit people, particularly around election periods.

And now, with increasingly open AI, where people can just type something and get something back, we are seeing a high level of deployment without thinking about the after-end impact. To close it off, because I’m talking about AI as if it’s coming later: think of AirTags when they were deployed. It was great; I can track my missing bag on a flight. They have now been put in women’s bags or children’s bags by people they do not know, and they track them. That’s already an act of surveillance that, if people had been consulted, might have been mitigated against. Yes, I do want to know where my bag is, but I don’t want to be tracked unknowingly.

Dr. Urvashi Aneja

Thanks, Chenai, for that, and also for bringing the gender dimension to the table and highlighting how quickly what seems like useful technology can become surveillance technology. I’d now like to bring the industry perspective into this conversation. Natasha, maybe I can start with you. As you scale systems globally, what are some of the hardest constraints that you as a company face in ensuring context-sensitive safety?

Ms. Natasha Crampton

Well, thanks for that question, Urvashi, and congratulations to everyone on the establishment of the network. I think it’s a really important step forward. When I think about Microsoft, I think about Microsoft’s scale, and our mission is to empower every person and every organization in the world to achieve more. And so one of the challenges we face in scaling up our efforts here is: how do we take the very deep, careful, thoughtful, community-led evaluation work that animated a project like Samishka, which the CARIA organization, the Collective Intelligence Project, and Microsoft Research worked on together, and which developed very context-aware evaluations appropriate for the use case?

And how do we take that thoughtful work and really scale it up? Because we want to do that type of work for thousands of languages and probably millions of different cultural settings. And so I think we really need to think about the system by which we are going to build multilingual and multicultural evaluations that we can run broadly. Sometimes, when we think about evaluations, we don’t appreciate how sustainably they need to be run: you can’t just do it once before you release a product. You need to run the evaluations on an ongoing basis to understand how there might have been shifts. And so I really think that, for us, we need to think about this system.

How are we going to build a sustainable, grounded, community -led system of scalable evaluation?

Dr. Urvashi Aneja

Thanks, Natasha. And I hope, in some sense, the network can play at least part of that function: building that kind of coherence in the space of evaluation and helping us build a shared vocabulary and a shared set of methodologies together as organizations. Amir, what do you think needs to change, whether internally within companies or externally in the ecosystem we’re operating in, to make such grounded evaluations, the kind Natasha was talking about, become standard practice for industry? Should they be the standard practice? And if so, how do we get there?

Mr. Amir Banifatemi

Thank you for that question. And first, congratulations; I’m happy to also be part of this network and support it. I think Natasha mentioned part of the foundational questions. Putting my hat on as Cognizant’s Chief Responsible AI Officer: we work with a lot of companies and governments on deploying new scenarios, call them systems or applications or anything else. The concept of safety, as was mentioned, is diffuse; it’s not very clear what we’d call safety. So evaluating the underlying element that needs to be changed or addressed is not obvious. When we talk about models, a model is not just one thing that you deploy. It goes into an application; there’s a system, infrastructure, network access, API-connected data access.

All of them are contextually different, as was mentioned before. And then, you didn’t ask me about the problem, but one of the problems is a lack of imagination. People who are building systems have no awareness of the context in which those situations occur, how they occur, what the causes are, and how likely they are to happen. Absent that, all this context, of which language is a part and culture is a part, is not captured. And without that, there is very little capability to address it from a regulation or incentive perspective. Safety, on the other side, is not costed into financial systems and so forth.

There is no penalty for not being safe. So as long as there is no constraint putting safety into the cost structure, with a strong mandate, companies will not pay attention, or enough attention. If it’s not part of the financial planning and the processes, it won’t happen. So there is a disconnect between what we do as enterprises to make sure that systems and platforms are properly built and deployed and the system in which they are deployed. At the same time, there is a talent-inclusion piece that is missing: the talent building these safety conversations is not the talent exposed to those issues.

So that absent voice is also a piece that needs to be addressed, not just from a skilling perspective, but also from an integration perspective. And finally, the infrastructure part. Infrastructure is not just systems and models and data; it’s also the tooling and the evaluation. It was mentioned that evaluation has to be done differently, but if you don’t know what harm or safety means, evaluation has to be different. There is probably an opportunity here to come up with a series of evaluation tools that are built not only for model design, but also for system deployment. As we go from pilot to scaling, what issues occur, what examples are happening, and what incidents arise? Incident reporting is a huge opportunity here, because it will capture, nested in the reporting, some of the hidden elements: control issues, data access, regulatory absence, or anything else.

Finally, there is a latency issue. In the global north, which is only a handful of countries, while the new south is much bigger, there are institutional frameworks: you have the rule of law, you have a civil society that is very active, you have legal frameworks that create an accelerated feedback loop on all these safety incidents. In most global south countries these mechanisms don’t exist, which delays the feedback loop and compounds the possible harm and everything else. So there is probably an opportunity to figure out how we can accelerate the learning capabilities and the speed at which we capture knowledge and data, tied to tools that probably need to be implemented and deployed, either open source or free access, and built with the contextual environment and the talent pool, so that the global south has ownership. All these pieces are important, so the network can incentivize the different pieces that complete each other, to really play a role in the global south understanding better where safety issues are, where harm can happen, and what corrections can be made, in the rhythm that needs to happen, because rhythms are not exportable, and what we do in one country does not transfer to another.

And finally, the network could probably help bring it all together.

Dr. Urvashi Aneja

Thank you for laying that out, and for pointing out how all the pieces link to each other: we can’t go at it at one level alone, and capacity matters across all of them. Professor Ravindran, AI deployment is accelerating in the global south, in India and in many other countries as well. But at the same time, so far we haven’t seen as much investment in safety and safety infrastructure. Would you agree? You’re actually asking an academic about investment? Sure, of course there’s not enough money. Why not, and how do we change it?

Dr. Balaraman Ravindran

So I’m going to answer a different question, like a true academic. I’m sorry, I’ll connect it back to what you asked. There are a whole lot of initiatives being announced at the summit, and things I discovered while having various conversations: there are multiple networks getting launched or already in operation. There is a network in Africa looking at capacity building; there is a network in China, apparently, which none of us seem to have heard about, being launched on AI safety and capacity building; there is our network that’s getting launched; and there is the UN initiative on building a network of capacity-building institutes for the global south, which we had a meeting about this morning as well. So there are just too many of these initiatives getting launched.

And we have to figure out how we would coordinate operations among these initiatives as well. I think that would be a great multiplier, instead of everybody going out and saying, okay, let me see what small piece of the pie I can get so that I can do these activities, with a lot more coordination after that. And if you remember, our initial conversations, when we wanted to start this thing, were about this being one node in the global AISI network. I can’t even say global network of safety institutes anymore, can I? They’re not even safety institutes. So AISIs, whatever they are now: this should be one node in the AISI network which represents the unheard voices there, because, as the Ambassador was pointing out, there is almost no one except Kenya.

And of course India, I presume. We really don’t have safety institutes in the global south that can participate in the dialogue. So that kind of larger collaboration framework is something we should enable. I mean, even if you go to Gates: how many different networks would Gates want to spend their money on? If instead we can say there is this one whole operation happening, that would be a great way of harmonizing our efforts. I’ll turn it back to the question. Thank you.

Dr. Urvashi Aneja

No, I mean, I think you’ve raised a really important issue about harmonizing these efforts, and about how this network can play a really important role in the larger AISI network. Luckily, the S remains the same, so we can still go with the acronym, I guess, for the safety network. We’re almost at time, so let’s do one quick rapid-fire round with all the panelists. Natasha, maybe I can start with you. What is one concrete step your institution, Microsoft, could take in the next year to strengthen AI safety in the global south?

Ms. Natasha Crampton

Well, I’m looking forward to making good on the New Delhi Frontier AI commitments that Microsoft made, which is going to help advance multilingual and multicultural evaluation work, as well as share data that will help policymakers understand AI adoption within their countries and make the sorts of choices and policy interventions that help bring broader access. So that’s one thing, if I can be sneaky and count it as one. The second thing I’m really excited about is that we’re making large infrastructure investments across the global south, to the tune of 50 billion dollars by the end of this decade. That infrastructure, as Amir and others on the panel have mentioned, is essential to building up this scaled system of sustainable evaluation, so I’m looking forward to those investments too.

Dr. Urvashi Aneja

Thank you.

Dr. Balaraman Ravindran

Is that a fire alarm or something?

Dr. Urvashi Aneja

No, no, no, they’re telling us that we have to wrap up I think.

Dr. Balaraman Ravindran

Okay, great. So, wrapping up: we have to get the work going, rolling. Talking about it is one thing, but we need to actually start the collaboration and get these research efforts going, and we’d love to reach out to partners across the globe. In fact, I’m part of the other UN network as well, and we have been talking about looking at problems that would necessarily require cross-border collaboration, as opposed to problems that we would solve in our own geography anyway and then just work with somebody else to solve in two geographies. If we can pick problems that necessarily require people across borders to collaborate, I think that will certainly drive this, and it will also put forth the importance of having the network itself: not just information sharing, but problem solving that can only be done across the network.

Dr. Urvashi Aneja

Thank you. Rachel, 30 seconds.

Dr. Rachel Sibande

Thirty seconds. I think, from the foundation side, it is to really institutionalize the evaluation of the safety of AI solutions right at deployment, because what we see now is that safety issues almost always emerge post-deployment. Thank you.

Ms. Chenai Chair

So, from the hub side, we actually do have a benchmarking initiative going on this year, so we will be contributing to the African benchmarking work, and that will be our output and contribution.

Dr. Urvashi Aneja

Amazing, looking forward to that. Thank you, Chenai. And Amir, last but not least.

Mr. Amir Banifatemi

We’re already working with our two labs, one in Bangalore and one in San Francisco, on safety evaluations, mostly on incident reporting, and we’ve already made them culturally contextual. So I hope we can be helpful by providing open-source tools for evaluation, disseminating them, and making them accessible to the public and available to all partners.

Dr. Urvashi Aneja

Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Urvashi Aneja
2 arguments · 129 words per minute · 2,383 words · 1,102 seconds
Argument 1
Network addresses underrepresentation of Global South in AI safety infrastructure and governance forums
EXPLANATION
Dr. Aneja argues that Global South organizations, communities, and states remain underrepresented in global safety and governance infrastructures, with many countries unlikely to have their own safety or oversight institutes in the near term. This creates a risk that concerns and priorities of these countries remain underrepresented in global safety infrastructure.
EVIDENCE
Many countries in the Global South are actually unlikely to even have in the near term their own safety or oversight institutes. Independent civil society organizations are uniquely positioned to address this gap through their proximity to real-world deployment contexts.
MAJOR DISCUSSION POINT
Global South underrepresentation in AI governance
AGREED WITH
Ambassador Philip Thigo
DISAGREED WITH
Ambassador Philip Thigo
Argument 2
Network will focus on multilingual benchmarks, gender safety taxonomy, procurement guidelines, evaluation methodologies, and health information systems
EXPLANATION
Dr. Aneja outlines five flagship projects the network will launch: building benchmarks for multilingual AI, developing a taxonomy of gender harm, supporting procurement as a lever for responsible innovation, advancing evaluation methodologies, and evaluating health information systems for Global South contexts.
EVIDENCE
Building benchmarks for multilingual AI with Collective Intelligence Project and CARIA; gender safety work with GXD Hub and Global Center for AI Governance; procurement work to support responsible innovation; evaluation science work with ITS Rio; health information systems evaluation for clinicians and doctors in the Global South.
MAJOR DISCUSSION POINT
Network’s concrete activities and projects
Mr. Abhishek Singh
2 arguments · 181 words per minute · 892 words · 294 seconds
Argument 1
Initiative aligns with New Delhi Frontier AI commitments and supports compliance with multilingual benchmarks
EXPLANATION
Mr. Singh explains that the network launch aligns with the New Delhi Frontier AI commitments where all models committed to sharing usage data and multilingual performance benchmarks. The network will help enable compliance with these commitments and create tools for evaluating models in various languages.
EVIDENCE
The New Delhi Frontier AI commitments where all models committed to sharing usage data as well as multilingual performance benchmarks. The network will help create tools for evaluating models in various languages and build capacity across Global South countries.
MAJOR DISCUSSION POINT
Alignment with international AI commitments
AGREED WITH
Ms. Natasha Crampton, Ms. Chenai Chair
Argument 2
Innovation focus should not stifle responsible AI development but ensure safe and trustworthy deployment
EXPLANATION
Mr. Singh emphasizes that while the primary objective is to ensure AI diffusion and benefit more users, this must be done responsibly, safely, and in a trustworthy manner to limit potential harm. The goal is not to stifle innovation but to enable responsible deployment.
EVIDENCE
The primary objective is to ensure diffusion of AI and that more users benefit from AI usage, but this needs to be done in a responsible, safe, and trustworthy manner to limit harm that can be caused.
MAJOR DISCUSSION POINT
Balancing innovation with safety
Ambassador Philip Thigo
4 arguments · 196 words per minute · 1,016 words · 309 seconds
Argument 1
Network is timely but also overdue given exclusion of global majority from safety conversations
EXPLANATION
Ambassador Thigo argues that there has been a structural problem with safety conversations over the last three years, with the Global South consistently excluded. He notes that Kenya is the only Global South member of the international network of AI safety institutes, highlighting the exclusion of countries that bear the brunt of AI impacts.
EVIDENCE
Kenya is the only member of the international network of AI safety institutes from the Global South. The global majority countries are the ones that bear the brunt and adverse societal harms of AI models.
MAJOR DISCUSSION POINT
Structural exclusion from AI safety governance
AGREED WITH
Dr. Urvashi Aneja
DISAGREED WITH
Dr. Urvashi Aneja
Argument 2
AI safety extends beyond technology to include environmental harms and full lifecycle accountability
EXPLANATION
Ambassador Thigo argues that safety must go beyond technology to address socio-technical issues including environmental harms, biases, misinformation, and impacts on water and environment. He emphasizes the need for full lifecycle accountability from ‘minds to models.’
EVIDENCE
Safety must include environmental harms, biases, misinformation, disinformation, and harms to water and environment. Kenya looks at AI from ‘minds to models’ requiring full lifecycle accountability including evaluating the footprints of models.
MAJOR DISCUSSION POINT
Comprehensive definition of AI safety
AGREED WITH
Dr. Rachel Sibande, Ms. Chenai Chair
DISAGREED WITH
Mr. Amir Banifatemi, Dr. Rachel Sibande
Argument 3
Global South countries lack capacity building, access to compute, and face linguistic/cultural mismatches in benchmarking
EXPLANATION
Ambassador Thigo identifies four structural gaps: capacity building limitations, lack of access to compute for researchers, linguistic and cultural mismatches in evaluation, and benchmarking issues. He emphasizes that benchmarks are not neutral and power should not be concentrated in a few institutions.
EVIDENCE
Four structural gaps identified: capacity building gaps, lack of access to compute for Global South researchers, linguistic and cultural mismatches, and benchmarking issues. Only a handful of institutions should not define what risks are measured and what safe performance means.
MAJOR DISCUSSION POINT
Structural barriers to Global South participation
AGREED WITH
Mr. Amir Banifatemi, Dr. Balaraman Ravindran
Argument 4
Benchmarks are not neutral and power concentration in few institutions must be addressed
EXPLANATION
Ambassador Thigo argues that benchmarks are not neutral tools and that governance is fundamentally about power. He emphasizes the need to deconcentrate power in defining what risks are measured, what harms are prioritized, and what constitutes safe performance, even if this concentration is unintentional.
EVIDENCE
Benchmarks are not neutral. Only a handful of institutions should not define what risks are measured, what harms are prioritized, and what safe performance means. Governance is about power and we must deconcentrate that power even if it’s unintentional.
MAJOR DISCUSSION POINT
Power dynamics in AI governance
Mr. Quintin Chou-Lambert
1 argument · 135 words per minute · 324 words · 143 seconds
Argument 1
Network should connect empirical evidence from field testing to international governance discussions
EXPLANATION
Mr. Chou-Lambert argues that as AI deployment moves toward more tailored, small language models for specific contexts, AI safety becomes more contextual and requires empirical evidence. The network plays a crucial role in connecting field-tested examples to international discussions so that global governance does not ignore the perspectives of the vast majority of people.
EVIDENCE
The trend from scaling concentrated, expensive models to more tailored, small language models makes AI safety more contextual. International discussions have widened from 30 countries at Bletchley Park to over 100 countries, and the UN Global Dialogue includes all 193 member states.
MAJOR DISCUSSION POINT
Connecting local evidence to global governance
Dr. Rachel Sibande
3 arguments · 144 words per minute · 454 words · 188 seconds
Argument 1
Safety and harm must be redefined according to social, cultural, and linguistic contexts where AI is deployed
EXPLANATION
Dr. Sibande argues that AI safety cannot be defined universally but must account for social norms, gender dynamics, religious beliefs, political sensitivities, and cultural nuances including humor, slang, and tone. Current safety definitions are insufficient for diverse deployment contexts in the Global South.
EVIDENCE
Example from Malawi where ‘waters have broken’ (critical medical condition) translates literally to ‘I have thrown away water’ – if models aren’t trained to understand this context, they will miss critical health flags for pregnant mothers.
MAJOR DISCUSSION POINT
Contextual definition of AI safety
AGREED WITH
Ms. Chenai Chair, Ambassador Philip Thigo
DISAGREED WITH
Ambassador Philip Thigo, Mr. Amir Banifatemi
Argument 2
Language models need understanding of lived experiences, not just translation capabilities
EXPLANATION
Dr. Sibande emphasizes that language encompasses lived meaning and experiences beyond vocabulary. Strong translation capabilities are insufficient if AI systems don’t understand the cultural and contextual meaning behind language use in specific communities.
EVIDENCE
Example from Malawi healthcare context where clinical terminology has specific cultural meanings that literal translation would miss, potentially causing life-threatening misunderstandings in maternal health scenarios.
MAJOR DISCUSSION POINT
Cultural context in language processing
Argument 3
Foundation will institutionalize safety evaluation right at deployment rather than post-deployment
EXPLANATION
Dr. Sibande commits to changing the current practice where safety issues emerge after deployment by institutionalizing safety evaluation processes at the point of deployment. This represents a shift from reactive to proactive safety measures.
EVIDENCE
Current observation that safety issues almost always emerge post-deployment, indicating the need for earlier intervention in the deployment process.
MAJOR DISCUSSION POINT
Proactive vs reactive safety measures
AGREED WITH
Ms. Natasha Crampton
Ms. Chenai Chair
2 arguments · 167 words per minute · 683 words · 244 seconds
Argument 1
Developers miss user experience and context, often exacerbating existing inequalities like gender-based violence
EXPLANATION
Ms. Chair argues that developers fail to consider the diverse contexts of deployment, including high levels of gender inequality, youth unemployment, and marginalized older populations. This oversight can worsen existing inequalities rather than addressing them.
EVIDENCE
Example of an agricultural tool with a male-sounding voice deployed in communities with high gender-based violence and lack of trust, where community members weren’t consulted, leading to exacerbation of existing problems.
MAJOR DISCUSSION POINT
Context-insensitive deployment consequences
AGREED WITH
Dr. Rachel Sibande, Ambassador Philip Thigo
Argument 2
Masakhane Hub will contribute African benchmarking work for 50 African languages
EXPLANATION
Ms. Chair commits to contributing to African benchmarking initiatives, noting that while Africa has over 2,000 documented languages, Masakhane is working on building quality datasets for 50 African languages. She emphasizes the complexity of language variations even within single language groups.
EVIDENCE
Africa has over 2,000 documented languages, but Masakhane is only working on 50. Even within languages like Kiswahili, there are significant variations – Nairobi speaks Sheng (not even Kiswahili), while coastal Mombasa speaks completely different variations.
MAJOR DISCUSSION POINT
African language diversity and benchmarking
AGREED WITH
Mr. Abhishek Singh, Ms. Natasha Crampton
Ms. Natasha Crampton
3 arguments · 136 words per minute · 404 words · 177 seconds
Argument 1
Challenge lies in scaling thoughtful, community-led evaluation work across thousands of languages and millions of cultural settings
EXPLANATION
Ms. Crampton identifies the core challenge as scaling deep, careful, community-led evaluation work like the Samishka project across thousands of languages and millions of cultural settings. The challenge is building sustainable systems that can handle this scale while maintaining quality and community involvement.
EVIDENCE
Reference to the Samishka project collaboration between CAIA, Collective Intelligence Project, and Microsoft Research that developed context-aware evaluations appropriate for specific use cases.
MAJOR DISCUSSION POINT
Scaling community-led evaluations
AGREED WITH
Mr. Abhishek Singh, Ms. Chenai Chair
DISAGREED WITH
Dr. Balaraman Ravindran
Argument 2
Need for sustainable evaluation systems that run continuously, not just once before product release
EXPLANATION
Ms. Crampton emphasizes that evaluations cannot be one-time activities before product release but must run continuously to understand shifts and changes. This requires building sustainable, grounded, community-led systems of scalable evaluation.
EVIDENCE
Explanation that evaluations need to be run on an ongoing basis to understand how there might have been shifts, not just once before releasing a product.
MAJOR DISCUSSION POINT
Continuous vs one-time evaluation
AGREED WITH
Dr. Rachel Sibande
Argument 3
Microsoft commits to New Delhi Frontier AI commitments and $50 billion infrastructure investment in Global South
EXPLANATION
Ms. Crampton commits Microsoft to fulfilling the New Delhi Frontier AI commitments regarding multilingual and multicultural evaluation work and data sharing for policymakers. Additionally, Microsoft is making large infrastructure investments of $50 billion in the Global South by the end of the decade.
EVIDENCE
Specific commitment to New Delhi Frontier AI commitments for sharing usage data and multilingual performance benchmarks, plus $50 billion infrastructure investment by end of decade.
MAJOR DISCUSSION POINT
Corporate commitments to Global South AI development
Mr. Amir Banifatemi
3 arguments · 169 words per minute · 844 words · 299 seconds
Argument 1
Safety lacks clear definition and is not integrated into financial planning or cost structures of companies
EXPLANATION
Mr. Banifatemi argues that the concept of safety is diffused and unclear, making it difficult to evaluate what needs to be changed. More critically, safety is not costed into financial systems and there are no penalties for not being safe, so companies won’t pay adequate attention without financial constraints.
EVIDENCE
Safety is not part of financial planning and processes. There is no penalty for not being safe, so as long as there's no constraint to put safety as a cost structure with a strong mandate, companies will not pay enough attention.
MAJOR DISCUSSION POINT
Economic incentives for AI safety
DISAGREED WITH
Ambassador Philip Thigo, Dr. Rachel Sibande
Argument 2
Absence of institutional frameworks in Global South delays feedback loops and compounds potential harms
EXPLANATION
Mr. Banifatemi explains that while Global North countries have institutional frameworks, rule of law, active civil society, and legal frameworks that create accelerated feedback loops for safety incidents, most Global South countries lack these mechanisms. This delays learning and compounds potential harms.
EVIDENCE
Global North has institutional frameworks, rule of law, active civil society, and legal frameworks creating accelerated feedback loops. Most Global South countries lack these mechanisms, which delays feedback loops and compounds possible harm.
MAJOR DISCUSSION POINT
Institutional capacity gaps
AGREED WITH
Ambassador Philip Thigo, Dr. Balaraman Ravindran
Argument 3
Cognizant labs will provide open source safety evaluation tools with cultural context
EXPLANATION
Mr. Banifatemi commits to working through Cognizant’s labs in Bangalore and San Francisco on safety evaluations, particularly incident reporting that is culturally contextual. They aim to provide open source tools for evaluation and make them accessible to all partners.
EVIDENCE
Working with two labs, one in Bangalore and one in San Francisco, on safety evaluations mostly focused on incident reporting that is already culturally contextual.
MAJOR DISCUSSION POINT
Open source safety tools development
Dr. Balaraman Ravindran
2 arguments · 172 words per minute · 565 words · 196 seconds
Argument 1
Multiple networks are launching simultaneously requiring coordination to avoid fragmentation
EXPLANATION
Dr. Ravindran points out that numerous AI safety and capacity building networks are being launched simultaneously – in Africa, China, and through UN initiatives – and emphasizes the need for coordination among these initiatives rather than competing for resources. He suggests this coordination would be a great multiplier for effectiveness.
EVIDENCE
Multiple networks launching: one in Africa for capacity building, one in China for AI safety and capacity building, the current Global South network, and UN initiative on building network of capacity building institutes for Global South.
MAJOR DISCUSSION POINT
Coordination among multiple AI networks
AGREED WITH
Ambassador Philip Thigo, Mr. Amir Banifatemi
DISAGREED WITH
Ms. Natasha Crampton
Argument 2
Need for cross-border collaboration on problems that require international cooperation
EXPLANATION
Dr. Ravindran emphasizes the importance of picking problems that necessarily require people across borders to collaborate, rather than just working with others to solve problems in separate geographies. This approach would drive the network’s importance and demonstrate the value of cross-border collaboration for problem-solving.
EVIDENCE
Distinction between problems that require cross-border collaboration versus problems that could be solved in individual geographies but are being worked on collaboratively.
MAJOR DISCUSSION POINT
International collaborative problem-solving
Agreements
Agreement Points
Global South underrepresentation in AI safety governance requires urgent action
Speakers: Dr. Urvashi Aneja, Ambassador Philip Thigo
Network addresses underrepresentation of Global South in AI safety infrastructure and governance forums
Network is timely but also overdue given exclusion of global majority from safety conversations
Both speakers emphasize that Global South countries and communities have been systematically excluded from AI safety governance structures, with Ambassador Thigo noting Kenya is the only Global South member of international AI safety institutes, while Dr. Aneja highlights that many Global South countries lack their own safety institutes
AI safety must be contextually defined and culturally sensitive
Speakers: Dr. Rachel Sibande, Ms. Chenai Chair, Ambassador Philip Thigo
Safety and harm must be redefined according to social, cultural, and linguistic contexts where AI is deployed
Developers miss user experience and context, often exacerbating existing inequalities like gender-based violence
AI safety extends beyond technology to include environmental harms and full lifecycle accountability
All three speakers agree that current AI safety definitions are inadequate for Global South contexts and must account for local social norms, cultural nuances, gender dynamics, and broader socio-technical impacts including environmental considerations
Need for multilingual and multicultural evaluation systems
Speakers: Mr. Abhishek Singh, Ms. Natasha Crampton, Ms. Chenai Chair
Initiative aligns with New Delhi Frontier AI commitments and supports compliance with multilingual benchmarks
Challenge lies in scaling thoughtful, community-led evaluation work across thousands of languages and millions of cultural settings
Masakhane Hub will contribute African benchmarking work for 50 African languages
All speakers recognize the critical need for AI evaluation systems that work across diverse languages and cultures, with specific commitments to multilingual benchmarking and acknowledgment of the scale challenge involved
Capacity building and institutional strengthening are essential
Speakers: Ambassador Philip Thigo, Mr. Amir Banifatemi, Dr. Balaraman Ravindran
Global South countries lack capacity building, access to compute, and face linguistic/cultural mismatches in benchmarking
Absence of institutional frameworks in Global South delays feedback loops and compounds potential harms
Multiple networks are launching simultaneously requiring coordination to avoid fragmentation
All speakers identify capacity building as a fundamental challenge, noting gaps in technical capacity, institutional frameworks, and the need for coordinated efforts to build sustainable capabilities across the Global South
Evaluation must be continuous and proactive rather than reactive
Speakers: Ms. Natasha Crampton, Dr. Rachel Sibande
Need for sustainable evaluation systems that run continuously, not just once before product release
Foundation will institutionalize safety evaluation right at deployment rather than post-deployment
Both speakers emphasize moving from one-time or post-deployment safety assessments to continuous, proactive evaluation systems that monitor AI systems throughout their lifecycle
Similar Viewpoints
Both speakers see the network as a crucial bridge between local, contextual AI deployment experiences and global governance structures, ensuring that international AI governance reflects the perspectives and needs of the global majority
Speakers: Dr. Urvashi Aneja, Mr. Quintin Chou-Lambert
Network addresses underrepresentation of Global South in AI safety infrastructure and governance forums
Network should connect empirical evidence from field testing to international governance discussions
Both speakers emphasize that AI safety and innovation are complementary rather than competing objectives, with concrete commitments to responsible AI development and significant infrastructure investments
Speakers: Mr. Abhishek Singh, Ms. Natasha Crampton
Innovation focus should not stifle responsible AI development but ensure safe and trustworthy deployment
Microsoft commits to New Delhi Frontier AI commitments and $50 billion infrastructure investment in Global South
Both speakers highlight structural power imbalances in AI governance, with Ambassador Thigo focusing on institutional power concentration and Mr. Banifatemi on economic incentive structures that fail to prioritize safety
Speakers: Ambassador Philip Thigo, Mr. Amir Banifatemi
Benchmarks are not neutral and power concentration in few institutions must be addressed
Safety lacks clear definition and is not integrated into financial planning or cost structures of companies
Unexpected Consensus
Economic incentives for AI safety
Speakers: Mr. Amir Banifatemi, Ms. Natasha Crampton
Safety lacks clear definition and is not integrated into financial planning or cost structures of companies
Microsoft commits to New Delhi Frontier AI commitments and $50 billion infrastructure investment in Global South
Unexpected alignment between a consultant’s critique of corporate safety incentives and a corporate representative’s substantial financial commitments, suggesting recognition across industry that current economic models inadequately incentivize safety
Need for coordination among multiple AI initiatives
Speakers: Dr. Balaraman Ravindran, Dr. Urvashi Aneja
Multiple networks are launching simultaneously requiring coordination to avoid fragmentation
Network addresses underrepresentation of Global South in AI safety infrastructure and governance forums
Academic and network founder both recognize the proliferation of AI safety initiatives could lead to fragmentation rather than strengthened capacity, showing pragmatic consensus on the need for strategic coordination
Overall Assessment

Strong consensus emerged around five key areas: Global South underrepresentation in AI governance, need for contextually-sensitive safety definitions, importance of multilingual evaluation systems, capacity building requirements, and shift toward proactive evaluation approaches. Speakers consistently emphasized that current AI safety frameworks inadequately serve Global South contexts and populations.

High level of consensus with remarkable alignment across government, industry, civil society, and academic perspectives. This suggests the Global South Network for Trustworthy AI addresses widely recognized gaps in current AI governance structures. The consensus implies strong potential for collaborative action and suggests the network fills a critical institutional void in global AI governance.

Differences
Different Viewpoints
Timeline and urgency of network establishment
Speakers: Ambassador Philip Thigo, Dr. Urvashi Aneja
Network is timely but also overdue given exclusion of global majority from safety conversations
Network addresses underrepresentation of Global South in AI safety infrastructure and governance forums
Ambassador Thigo emphasizes that while the network is timely, it is also ‘late’ and there’s an urgency to scale up quickly due to structural exclusion, while Dr. Aneja presents it as a timely response to current gaps without the same sense of overdue urgency
Scope and definition of AI safety
Speakers: Ambassador Philip Thigo, Mr. Amir Banifatemi, Dr. Rachel Sibande
AI safety extends beyond technology to include environmental harms and full lifecycle accountability
Safety lacks clear definition and is not integrated into financial planning or cost structures of companies
Safety and harm must be redefined according to social, cultural, and linguistic contexts where AI is deployed
The speakers disagree on what constitutes AI safety – Ambassador Thigo advocates for a comprehensive approach including environmental impacts, Mr. Banifatemi focuses on the lack of clear definition and financial integration, while Dr. Sibande emphasizes cultural and contextual redefinition
Approach to scaling evaluation systems
Speakers: Ms. Natasha Crampton, Dr. Balaraman Ravindran
Challenge lies in scaling thoughtful, community-led evaluation work across thousands of languages and millions of cultural settings
Multiple networks are launching simultaneously requiring coordination to avoid fragmentation
Ms. Crampton focuses on the technical challenge of scaling individual evaluation systems, while Dr. Ravindran emphasizes the need for coordination among multiple competing networks to avoid fragmentation and resource competition
Unexpected Differences
Network coordination vs individual network development
Speakers: Dr. Balaraman Ravindran, Dr. Urvashi Aneja
Multiple networks are launching simultaneously requiring coordination to avoid fragmentation
Network addresses underrepresentation of Global South in AI safety infrastructure and governance forums
Unexpectedly, Dr. Ravindran, who is part of the founding network, raises concerns about too many similar networks launching simultaneously and the need for coordination, which somewhat contradicts the celebratory launch tone of the event
Corporate responsibility approach
Speakers: Ms. Natasha Crampton, Mr. Amir Banifatemi
Microsoft commits to New Delhi Frontier AI commitments and $50 billion infrastructure investment in Global South
Safety lacks clear definition and is not integrated into financial planning or cost structures of companies
While both represent industry perspectives, Crampton emphasizes Microsoft’s commitments and investments, while Banifatemi argues that companies fundamentally won’t prioritize safety without financial penalties, creating an unexpected tension between corporate commitment and systemic critique
Overall Assessment

The main areas of disagreement center around the scope and definition of AI safety, the urgency and timing of network establishment, approaches to scaling evaluation systems, and the balance between individual network development versus coordination among multiple initiatives

Moderate disagreement with significant implications – while all speakers support the network’s goals, their different emphases on environmental impacts, power dynamics, technical scaling, and institutional coordination could lead to different priorities and resource allocation decisions that may fragment efforts or create competing approaches to Global South AI safety

Partial Agreements
All speakers agree that current evaluation approaches are insufficient and need to be more systematic and continuous, but they disagree on the primary solution – Crampton focuses on sustainable technical systems, Banifatemi on institutional frameworks, and Sibande on timing of deployment
Speakers: Ms. Natasha Crampton, Mr. Amir Banifatemi, Dr. Rachel Sibande
Need for sustainable evaluation systems that run continuously, not just once before product release
Absence of institutional frameworks in Global South delays feedback loops and compounds potential harms
Foundation will institutionalize safety evaluation right at deployment rather than post-deployment
All agree that current AI systems fail to account for Global South contexts and can cause harm, but they emphasize different aspects – Thigo focuses on power dynamics in benchmarking, Chair on deployment context and inequality, and Sibande on linguistic and cultural understanding
Speakers: Ambassador Philip Thigo, Ms. Chenai Chair, Dr. Rachel Sibande
Benchmarks are not neutral and power concentration in few institutions must be addressed
Developers miss user experience and context, often exacerbating existing inequalities like gender-based violence
Language models need understanding of lived experiences, not just translation capabilities
Takeaways
Key takeaways
The Global South Network for Trustworthy AI was successfully launched to address the critical underrepresentation of Global South perspectives in global AI safety infrastructure and governance forums
AI safety must be redefined contextually for Global South deployment, considering social, cultural, linguistic, and infrastructural differences rather than applying one-size-fits-all standards
Current AI evaluation methods are inadequate for Global South contexts, missing critical aspects like lived experiences in language, cultural nuances, gender dynamics, and post-deployment harm tracking
Industry faces significant challenges in scaling thoughtful, community-led evaluation work across thousands of languages and millions of cultural settings while maintaining sustainability
Safety is not adequately integrated into companies' financial planning and cost structures, creating insufficient incentives for comprehensive safety measures
Multiple similar networks are launching simultaneously, requiring coordination to avoid fragmentation and maximize impact through harmonized efforts
The network will serve as crucial connective tissue between global governance architecture and real-world deployment contexts in the Global South
Resolutions and action items
Network will launch five flagship projects in the coming year: multilingual AI benchmarks, gender safety taxonomy, procurement guidelines, evaluation methodologies, and health information systems evaluation
Microsoft committed to fulfilling New Delhi Frontier AI commitments on multilingual benchmarks and usage data sharing, plus $50 billion infrastructure investment in Global South by end of decade
Gates Foundation will institutionalize safety evaluation right at AI deployment rather than waiting for post-deployment issues
Masakhane Hub will contribute to African benchmarking work covering 50 African languages
Cognizant will provide open source safety evaluation tools with cultural context through their Bangalore and San Francisco labs
Network will establish regional nodes/hubs to better serve diverse Global South contexts
India AI mission committed to provide ongoing support for network operations and objectives
Network will work to integrate findings into multilateral processes including UN Global Dialogue on AI Governance and scientific panel
Focus on cross-border collaborative problem-solving that requires international cooperation rather than parallel work in separate geographies
Unresolved issues
How to effectively coordinate with multiple other AI safety networks launching simultaneously to avoid duplication and fragmentation
Lack of clear, universally accepted definition of what constitutes 'safety' in AI systems across different contexts
How to create sustainable funding mechanisms for ongoing evaluation work rather than one-time assessments
Absence of institutional frameworks and rule of law in many Global South countries that delays feedback loops and compounds potential harms
How to address the fundamental talent inclusion problem where those building safety systems lack exposure to Global South deployment contexts
How to create financial incentives and cost structures that make safety a priority for companies deploying AI in Global South
How to scale community-led, contextual evaluation work to cover thousands of languages and millions of cultural settings
How to close the accountability loop so that evaluation work translates into meaningful protection for citizens
How to address the 'new global south in AI' that includes countries beyond the traditional Global South, given AI's concentration in just two countries and a few companies
Suggested compromises
Expanding the definition of 'Global South' in the AI context to include other regions, like parts of Europe and Latin America, that face similar exclusion from AI governance
Balancing innovation promotion with safety requirements – ensuring safety measures don't stifle AI development while protecting populations from harm
Creating hybrid evaluation approaches that combine technical benchmarks with real-world, contextual assessments
Developing both model-level and system-level evaluation tools to address the full deployment context rather than just underlying AI models
Establishing open source and free access tools for safety evaluation while building local capacity and ownership in Global South countries
Creating incident reporting systems that can capture both intentional and unintentional misuse of AI technologies
Developing procurement guidelines as a policy lever that Global South countries can use to shape markets for responsible innovation
Thought Provoking Comments
We now know the global north of artificial intelligence is two countries and a few companies. So we must, beyond this, extend to also include other colleagues, whether it's from Europe, Western Europe, or Latin America.
This comment reframes the entire global AI landscape by suggesting that even the ‘global north’ in AI is extremely concentrated, essentially challenging the traditional north-south binary. It introduces the concept of a ‘new global south in AI’ that includes traditionally developed countries who are also excluded from AI development.
This fundamentally shifted the discussion from a simple global south vs. global north framework to a more nuanced understanding of AI power concentration. It broadened the scope of who should be included in the network and influenced subsequent speakers to think beyond traditional geographical boundaries.
Speaker: Ambassador Philip Thigo
Language in itself is not just about vocabulary. It’s also about the lived meaning, the lived experiences… if the mother says their waters have broken, which clinically is a critical incident that should warrant that mother to be referred to a health facility, but if you translate that from the local language to English… that will literally mean I have thrown away water.
This powerful example moves the discussion from abstract concepts of multilingual AI to concrete life-or-death scenarios. It demonstrates how current translation-based approaches to multilingual AI can fail catastrophically in critical contexts, revealing the inadequacy of surface-level language support.
This comment grounded the entire safety discussion in tangible, high-stakes examples. It influenced subsequent speakers to focus more on contextual understanding rather than just technical capabilities, and reinforced the urgency of the network’s mission with a compelling real-world scenario.
Speaker: Dr. Rachel Sibande
Safety, on the other side, is not costed into financial systems… There is no penalty for not being safe. So as long as there is no constraint to put safety as a cost structure, with a strong mandate, companies will not pay attention, or enough attention.
This comment cuts to the heart of why AI safety remains inadequate by identifying the fundamental economic incentive problem. It shifts the discussion from technical solutions to systemic economic and regulatory issues that drive corporate behavior.
This observation reframed the conversation from focusing solely on technical evaluation methods to addressing the underlying economic structures that perpetuate unsafe AI deployment. It influenced the discussion toward considering regulatory and financial mechanisms as essential components of any safety framework.
Speaker: Mr. Amir Banifatemi
There is just too many of these initiatives that are getting launched… we have to figure out a way how we would coordinate operations among these initiatives as well. So I think that would be a great multiplier instead of everybody going out and saying, okay, let me see what small piece of the pie that I can get.
This meta-observation about the proliferation of AI safety networks introduces a critical coordination challenge that could undermine the effectiveness of all initiatives. It’s insightful because it addresses the risk of fragmentation in a field that requires collective action.
This comment introduced a sobering reality check into the celebratory launch atmosphere, forcing participants to consider how their network would differentiate itself and coordinate with others. It shifted the discussion toward practical implementation challenges and the need for strategic positioning within a crowded landscape.
Speaker: Dr. Balaraman Ravindran
Benchmarks are not neutral… we need to ensure that a handful of institutions alone do not define what risks are measured, what harms are prioritized, and what safe performance means. Governance is about power.
This comment exposes the political nature of AI evaluation by highlighting how seemingly technical benchmarks embed power structures and value judgments. It challenges the notion that evaluation can be objective and reveals how current systems perpetuate exclusion.
This observation elevated the discussion from technical evaluation methods to questions of power, representation, and democratic participation in AI governance. It reinforced the political urgency of the network’s mission and influenced other speakers to consider the governance implications of their technical work.
Speaker: Ambassador Philip Thigo
Overall Assessment

These key comments fundamentally transformed what could have been a routine network launch into a sophisticated analysis of AI governance challenges. Ambassador Thigo’s interventions consistently elevated the discussion by introducing power dynamics and structural critiques, while Dr. Sibande’s concrete examples grounded abstract concepts in life-or-death realities. Mr. Banifatemi’s economic analysis and Dr. Ravindran’s coordination concerns added practical urgency to the theoretical framework. Together, these comments created a multi-layered conversation that addressed technical, economic, political, and practical dimensions of AI safety in the Global South, establishing a robust intellectual foundation for the network’s future work.

Follow-up Questions
How do we ensure that we get necessary support from all stakeholders to make the Global South Network for Trustworthy AI functional?
This addresses the critical challenge of securing ongoing commitment and resources from industry, governments, and multilateral organizations to operationalize the network beyond its launch
Speaker: Mr. Abhishek Singh
How do we create tools for evaluating models in various languages and build up capacity in all countries of the global south?
This highlights the need for practical evaluation tools and capacity building mechanisms that can work across diverse linguistic and cultural contexts in the Global South
Speaker: Mr. Abhishek Singh
How do we ensure compliance to the New Delhi Frontier AI commitments regarding sharing usage data and multilingual performance benchmarks?
This addresses the implementation gap between policy commitments and actual practice in AI safety and evaluation
Speaker: Mr. Abhishek Singh
How do we fit this network into the multilateral process, including the UN scientific panel on AI and global dialogue on AI governance?
This is crucial for ensuring the network’s work influences global AI governance structures and doesn’t operate in isolation
Speaker: Ambassador Philip Thigo
How do we close the accountability loop so that evaluation work ultimately matters for citizens?
This addresses the fundamental question of translating technical evaluation work into tangible benefits and protections for people in the Global South
Speaker: Ambassador Philip Thigo
How do we take thoughtful, community-led evaluation work and scale it up for thousands of languages and millions of different cultural settings?
This highlights the scalability challenge of conducting context-sensitive AI safety evaluations across the vast diversity of the Global South
Speaker: Ms. Natasha Crampton
How do we build a sustainable, grounded, community-led system of scalable evaluation that can be run on an ongoing basis?
This addresses the need for continuous rather than one-time evaluation systems that can adapt to changing AI systems and contexts
Speaker: Ms. Natasha Crampton
How do we coordinate operations among multiple AI safety and capacity building networks being launched globally?
This addresses the risk of fragmentation and duplication of efforts across various AI safety initiatives in the Global South
Speaker: Dr. Balaraman Ravindran
How can we pick problems that will necessarily require cross-border collaboration rather than just parallel work in different geographies?
This focuses on identifying research challenges that can only be solved through genuine international collaboration, strengthening the network’s value proposition
Speaker: Dr. Balaraman Ravindran
How do we accelerate learning capabilities and feedback loops in Global South countries that lack institutional frameworks for AI safety?
This addresses the structural disadvantage many Global South countries face in developing rapid response mechanisms for AI safety issues
Speaker: Mr. Amir Banifatemi
How do we institutionalize the evaluation of safety of AI solutions right at deployment rather than post-deployment?
This addresses the timing gap in current safety practices where issues are only identified after systems are already causing harm
Speaker: Dr. Rachel Sibande
At what point can we track whether users are substituting their cognitive capabilities with AI models or becoming overly emotionally dependent?
This raises important questions about measuring psychological and cognitive impacts of AI use that current benchmarking doesn’t capture
Speaker: Dr. Rachel Sibande
How do we prevent a handful of institutions from solely defining what risks are measured, what harms are prioritized, and what safe performance means?
This addresses power concentration in AI safety standard-setting and the need for more democratic and inclusive governance structures
Speaker: Ambassador Philip Thigo

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2


Session at a glance: Summary, keypoints, and speakers overview

Summary

This panel discussion at an AI Impact Summit focused on the critical challenge of powering AI infrastructure as data centers face unprecedented energy demands. The session was moderated by Ashish Khanna from the International Solar Alliance and featured three expert panelists discussing both “AI for Energy” and “Energy for AI” perspectives.


Professor Raghav Chandra opened by emphasizing that energy, not algorithms or chips, represents the single greatest constraint on AI’s future. He cited several major outages at tech giants like Meta, Google, Amazon, and Microsoft that demonstrated the vulnerability of AI operations to power failures. Chandra highlighted that global data center electricity consumption is projected to more than double from 415 terawatt hours today to 945 terawatt hours by 2030, representing 3% of total global electricity consumption. He warned of downstream effects including environmental impacts, rising power costs for consumers, and equity issues regarding who bears the burden of increased energy demands.


Nathan Blom from Cooling Chambers discussed innovation in cooling technologies, explaining how data centers are transitioning from CPU-based systems consuming 150-300 watts per square foot to GPU-intensive facilities requiring up to 10,000 watts per square foot. He advocated for two-phase cooling technologies that use liquid-to-gas phase changes, which are 10-20 times more effective than traditional liquid cooling and could achieve power usage effectiveness (PUE) ratios as low as 1.05 compared to current levels of 1.5.


Vineet Mittal from Avada Group presented India as an ideal location for AI data centers, citing the country’s abundant solar and wind resources, complementary generation patterns, and unified national grid. He noted that India is adding 50 gigawatts of renewable capacity annually and can provide round-the-clock clean power through combinations of solar, wind, and storage technologies. Mittal emphasized that while the US faces 7-8 year grid waiting times, India offers immediate opportunities for gigawatt-scale data center development.


The discussion concluded with agreement that while India and developing countries have tremendous potential to meet AI’s energy demands through renewable sources and cooling innovations, success requires improved ease of doing business, better center-state coordination, and continued policy support for data localization and infrastructure development.


Keypoints

Major Discussion Points:

AI’s Massive Energy Demands and Infrastructure Challenges: The discussion highlighted that AI data centers are becoming enormous energy consumers, with single training runs using as much electricity as thousands of homes annually. Current data centers consume electricity equivalent to Spain’s entire grid and are projected to double every three years, creating unprecedented strain on global power systems.


India’s Opportunity as a Global Data Center Hub: Panelists emphasized India’s unique advantages including abundant solar and wind resources, a unified national grid, complementary renewable energy patterns, and significantly lower power costs compared to the US and Europe. India is positioned to become a major destination for data centers due to its renewable energy capacity and growing digital economy.


Innovation in Cooling Technologies: The discussion explored critical innovations in data center cooling, moving from traditional air cooling to advanced liquid cooling and two-phase cooling systems. These technologies could dramatically improve power usage effectiveness (PUE) from 1.5 to 1.05, representing a major breakthrough in reducing energy consumption.


Policy and Regulatory Framework Challenges: Speakers identified the need for better coordination between central and state governments in India, data sovereignty legislation, and streamlined ease of doing business processes. The regulatory landscape needs to support both renewable energy integration and data center development while addressing environmental and social concerns.


Dual Relationship Between AI and Energy: The conversation covered both “AI for Energy” (using AI to optimize renewable energy systems, enable peer-to-peer trading, and improve grid management) and “Energy for AI” (meeting the massive power demands of AI infrastructure through sustainable sources).


Overall Purpose:

The discussion aimed to address the critical challenge of powering AI infrastructure sustainably while exploring opportunities for developing countries, particularly India, to become leaders in the AI data center ecosystem. The International Solar Alliance convened this panel to examine both how AI can help optimize energy systems and how renewable energy can meet AI’s growing demands.


Overall Tone:

The discussion maintained a predominantly optimistic and forward-looking tone throughout, despite acknowledging significant challenges. While speakers presented sobering statistics about energy consumption and infrastructure failures, they consistently emphasized opportunities for innovation and India’s competitive advantages. The tone was professional and solution-oriented, with panelists building on each other’s points to paint a picture of India as a potential global leader in sustainable AI infrastructure. Even when discussing regulatory hurdles and environmental concerns, speakers framed these as surmountable challenges rather than insurmountable barriers.


Speakers

Speakers from the provided list:


Announcer: Event host introducing the session and panelists


Ashish Khanna: Director General of the International Solar Alliance, moderating the discussion on powering AI


Vineet Mittal: Chairman of Avada Group, renewable energy developer and expert


Nathan Blom: Vice President, Cooling Chambers, expert in data center cooling technologies


Raghav Chandra: Professor at IIM Calcutta, Founder and CEO of Consult, former Chairman of NHAI and Secretary to Government of India, academic and former government administrator


Audience: Umesh Prasad Singh, associate member of Indian Institute of Public Administration, asking a question during Q&A


Additional speakers:


None identified beyond the provided speakers list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive panel discussion at the AI Impact Summit addressed the critical challenge of AI’s unprecedented energy demands and the need for sustainable solutions. Moderated by Ashish Khanna, Director General of the International Solar Alliance, the session brought together three distinguished experts to examine both how AI can transform energy systems and how energy systems must evolve to power AI’s future.


The Energy Crisis Constraining AI Development

Professor Raghav Chandra from IIM Calcutta opened with a fundamental assertion that energy, rather than algorithms or semiconductors, represents “the single greatest constraint on AI’s future.” He illustrated this with examples of recent infrastructure vulnerabilities: Meta’s nuclear-powered AI data centre plans were disrupted by environmental concerns including bee colonies, while power outages at major cloud facilities demonstrated how energy failures cascade into global disruptions affecting millions of users.


The scale of the challenge is substantial. Current global data centre electricity consumption stands at 415 terawatt hours, representing approximately 1.5% of worldwide electricity usage, with projections indicating growth to 945 terawatt hours by 2030. In the United States, data centres consumed 176 terawatt hours in 2023, equivalent to 4.4% of national electricity demand. The shift from CPU-based to GPU-intensive computing has dramatically amplified these energy requirements, transforming the infrastructure needs for AI development.
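As a quick sanity check on the figures above, the world total they imply and the growth rate they suggest can be worked out directly. This is illustrative arithmetic only; the 2024 baseline year and the six-year horizon are assumptions, not figures stated in the session.

```python
# Illustrative arithmetic using the consumption figures quoted in the session.
# Assumption (not from the session): baseline year 2024, so ~6 years to 2030.

dc_now_twh = 415.0    # global data-centre consumption today, TWh
dc_2030_twh = 945.0   # projected consumption by 2030, TWh
share_now = 0.015     # ~1.5% of worldwide electricity usage

# Implied total world electricity consumption today
world_total_twh = dc_now_twh / share_now  # roughly 27,700 TWh

# Growth multiple and implied compound annual growth rate over six years
multiple = dc_2030_twh / dc_now_twh  # about 2.3x
cagr = multiple ** (1 / 6) - 1       # roughly 15% per year

print(f"Implied world total: {world_total_twh:,.0f} TWh")
print(f"Growth: {multiple:.2f}x, CAGR ~{cagr:.1%}")
```

Note that 945 over 415 is roughly a 2.3x increase, which is consistent with the consumption share doubling from about 1.5% to 3% while total world electricity demand also grows modestly over the same period.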


Chandra emphasized that water scarcity compounds these challenges, particularly in India where many cities lack 24/7 drinking water access, yet data centres require substantial water resources for traditional cooling systems.


India’s Strategic Opportunity in Renewable-Powered AI Infrastructure

Vineet Mittal from Avada Group presented India as ideally positioned for sustainable AI infrastructure development. India’s renewable energy transformation has been remarkable—from virtually no solar and wind capacity 15-16 years ago, the country now adds 50,000 megawatts of renewable capacity annually, making it the second-largest green energy player globally after China.


India’s technical advantages are substantial. The country’s solar and wind patterns are naturally complementary, providing 14-18 hours of renewable generation that can be supplemented with storage systems. India’s unified national grid allows power generated in optimal locations like Rajasthan to be consumed in major urban centres in real-time—a significant advantage over fragmented systems elsewhere.


Mittal highlighted how AI applications can address renewable energy intermittency by combining weather data, satellite information, and prediction algorithms to make renewable energy as dispatchable as conventional power. This breakthrough makes renewable energy suitable for the constant, high-reliability demands of AI data centres.


The economic advantages are compelling. While the United States faces grid waiting times of 7-8 years and power shortages preventing new data centre development before 2030, India offers immediate opportunities for gigawatt-scale facilities. Mittal noted a Morgan Stanley study showing $4 million opportunity costs for power delays, emphasizing India’s competitive advantage.


Mittal also described how green power is transforming Indian agriculture, with farming activities shifting from night to day, and suggested states like Andhra Pradesh could become specialized “data states.”


Cooling Technology Innovation and Infrastructure Solutions

Nathan Blom from Cooling Chambers revealed that current “advanced cooling” technology is actually based on 1960s Apollo space programme innovations later adapted for gaming systems. Traditional liquid cooling faces fundamental limitations as chip temperatures rise, requiring increasingly energy-intensive chillers that create efficiency losses.


The breakthrough lies in two-phase cooling technology using ethylene or propylene glycol, which allows coolant to boil and vaporize. This phase change is 10-20 times more effective at capturing heat than traditional liquid cooling, potentially achieving power usage effectiveness (PUE) ratios as low as 1.05 compared to current levels of 1.5.
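PUE is defined as total facility energy divided by IT equipment energy, so the improvement from 1.5 to 1.05 quoted above translates into a large overhead reduction. A minimal sketch of the arithmetic follows; the 100 MW IT load is a hypothetical figure chosen for illustration, not one from the session.

```python
# PUE = total facility power / IT equipment power.
# A hypothetical 100 MW IT load is used purely for illustration.

def facility_power_mw(pue: float, it_load_mw: float) -> float:
    """Total facility power implied by a PUE value and an IT load."""
    return pue * it_load_mw

it_load_mw = 100.0
legacy_mw = facility_power_mw(1.5, it_load_mw)      # 150 MW total
two_phase_mw = facility_power_mw(1.05, it_load_mw)  # 105 MW total

saved_mw = legacy_mw - two_phase_mw  # 45 MW of cooling/overhead avoided
reduction = saved_mw / legacy_mw     # 30% cut in total facility power

print(f"Overhead avoided: {saved_mw:.0f} MW ({reduction:.0%} of total)")
```

In other words, at the same IT load a facility at PUE 1.05 draws 30% less total power than one at PUE 1.5, with the non-IT overhead falling from 50 MW to 5 MW.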


Blom emphasized that innovation comes from smaller, agile companies willing to take risks, which are subsequently acquired by larger corporations. He noted that while legacy infrastructure constrains existing facilities, new builds in countries like India can implement cutting-edge solutions from the outset. The gaming industry’s role in developing both GPU technology and cooling solutions demonstrates how cross-industry innovation drives breakthrough technologies.


The Dual Relationship: AI for Energy and Energy for AI

Khanna established the bidirectional opportunity between AI and energy systems. The world has installed 1,000 gigawatts of solar capacity, with approximately 40% being decentralized, though this figure is only 15-20% in India. AI applications can help distribution companies absorb distributed renewable energy while reducing system costs through sophisticated peer-to-peer trading systems.


The International Solar Alliance announced a global AI mission for energy, including an ISA Academy to train engineers who understand both AI and energy systems—addressing the critical skills gap where specialists in each field lack expertise in the other.


Policy and Implementation Challenges

Despite technical and economic opportunities, significant policy challenges remain. Chandra identified lack of coordination between central and state governments as the primary bottleneck for data centre development, with ease of doing business varying dramatically across states. He cited examples of foreign companies making multiple presentations without achieving progress.


However, encouraging signs exist. States like Maharashtra have streamlined processes and provide substantial incentives. Recent budget announcements include tax exemptions for data centres established through foreign collaboration. Mittal emphasized the need for data sovereignty legislation requiring Indian user data to be stored domestically, providing regulatory certainty for infrastructure planning.


Environmental Considerations and Global Implications

The discussion acknowledged environmental and social challenges, including impacts from increased electricity generation and potential conflicts between technological development and basic human needs like water access. However, speakers maintained these challenges are surmountable through appropriate technology choices and policy frameworks.


An audience question from Umesh Prasad Singh about global ramifications prompted Chandra to emphasize that data centre outages affect billions of users worldwide, while energy and environmental impacts transcend national boundaries.


Innovation Ecosystem and Future Outlook

Mittal emphasized that innovation occurs through cross-industry collaboration, combining knowledge from clean room technology, battery cooling systems, and data centre design to create India-specific solutions. This is particularly relevant for addressing India’s unique environmental conditions, including high humidity in coastal cities where optical fibre cables terminate.


The speakers expressed optimism about developing countries, particularly India, becoming leaders in sustainable AI infrastructure. This opportunity stems from abundant renewable energy resources, growing technical expertise, massive domestic data generation, and the ability to implement cutting-edge technologies without legacy constraints.


The session presented a vision balancing technological advancement with environmental sustainability. While challenges remain, the combination of renewable energy innovation, cooling technology breakthroughs, and policy reforms creates a pathway for developing countries to lead in AI infrastructure development, potentially reshaping global technology leadership.


Session transcript: Complete transcript of the session
Announcer

Good evening, distinguished guests. Welcome to the session on powering AI. As AI scales at speed, so do its infrastructure demands. Data centers are facing unprecedented power and cooling requirements. A single large AI training run can consume as much electricity as thousands of homes use in a year. This raises critical questions: how do we plan for rapidly rising and uncertain energy demand? Can edge computing reduce the load, or is centralization inevitable? To address these critical issues, we are joined by our exceptional panelists: Mr. Vineet Mittal, Chairman of Avada Group. Sir, I request you to please come on stage. Mr. Nathan Blom, Vice President, Cooling Chambers. Professor Raghav Chandra, Professor at IIM Calcutta, Founder and CEO of Consult, and former Chairman of NHAI and Secretary to Government of India.

Moderating this important conversation is Mr. Ashish Khanna, Director General of the International Solar Alliance. Thank you, panelists, for being here. With that, I now request Mr. Khanna to please take the discussion forward.

Ashish Khanna

Good evening, everyone. It’s not easy being the last panel, especially when we are probably starting at the time we were supposed to end, but we will try to make it more interesting for all of you. We are here to talk about powering AI. The format will be this: I will begin by framing some of the issues at heart and also tell you a little bit about what the International Solar Alliance is going to do. Then I will hand over to each of the esteemed panelists to make an opening statement of their vision on this question of powering AI, for about five minutes each. And then I will ask each of them one question on some of the specific issues for which they are probably an expert.

And finally, if there is any time left, we will see if any audience member wants to ask a question. Let me start off by saying: why is the International Solar Alliance in this session and in this AI Impact Summit? We are here primarily for two reasons. The first reason is that the world has installed 1,000 gigawatts of solar, doubling in just the last two years what was done in the previous 25 years. Almost 40% of that is decentralized, which means it’s either solar rooftop, pumps, or others. That figure is only 15 or 20% in India, and obviously very low in a lot of developing countries. And a distribution company often does not like decentralized solar because it impacts the distribution system and its finances.

But the right amount of digitization and AI can actually help them absorb it and reduce the cost of the system as a whole. And therefore India’s ability to more than double decentralized renewable energy, and the world’s in general, will require AI. That’s issue number one, for which I will say we launched a global AI mission for energy at the AI Impact Summit. We call it AI for Energy. The session is going to talk about energy for meeting AI demands, but let’s first talk about AI for Energy. Why? Because there are some elements that the world has not seen. If some of you were part of earlier sessions: can consumers trade power based on the rooftop and batteries they have? That P2P trading requires digital enablement of the trades of millions of producers and consumers, which right now needs a lot of regulatory evolution.

It needs an IT architecture, so that each distribution company in India, or for that matter anywhere in the world, will know what it must make ready to actually trade that power. Second, it’s about jobs. Today, a lot of AI engineers do not understand energy, and energy engineers do not understand AI. We at the International Solar Alliance, which is now a 125-country member body headquartered in India, are creating an ISA Academy to train people to bring AI and energy skills together. This intersection of energy and AI will be the fundamental shift over the next five years, the way Amazon changed retail. This is what is going to happen, we believe, in renewable energy. Third is the innovation ecosystem. We are at the AI Summit.

A lot of startups are having fundamentally disruptive ideas on both decentralized renewable energy and the way you manage generation, transmission, and the rest. The fourth is about financing. How will all this financing and de-risking be done? Because not all places have a lot of venture capital or commercial loans and equity available. We are in the process of creating a new industry. And finally, there’s a global dimension where the International Solar Alliance is involved.

What are going to be the interoperable standards? Because the world is not united on how all of this will be done. So that’s a lot about AI for energy. But there’s also an equally important element of energy for AI. The world’s largest sources of increase in electricity consumption right now are only two: data centers and cooling. Some of it is going to happen through electrification of cars, EVs as well. Now, 70% of all data center demand today is the US and China, but in times to come it is increasing by more than 50%, and a lot of that growth is going to happen in developing countries. We’ll hear some of that in addition to the global elements. There is also a lot of innovation on whether renewable energy can provide that energy.

Can 24-by-7 solar and storage provide cost-competitive energy to some of these data centers, whether they’re small or hyperscale, hyperscale being above 100 megawatts? What’s happening on innovation in cooling? We will hear from some of the experts in the private sector who are trying to come out with a lot of innovation on that, and what happens in the ecosystem. Obviously, today’s data centers are consuming a grid equal to Spain’s right now, and it’s going to double every three years. So this is a very important segment. Without further ado, I’m going to go to the esteemed panel. I’m going to request Mr. Raghav Chandra. Sir, you have been part of the government and are now teaching. When you look at this big…

element of powering AI, how do you see it?

Raghav Chandra

Thank you, Ashish. You’ve done a fantastic job in a short time covering the larger macro issues connected with this sector. Friends, as we gather here in a nation racing towards digital sovereignty and sustainable growth, putting on my academic, professorial hat, I want to emphasize the single greatest constraint on AI’s future, which is not algorithms, not chips, but energy for AI-based data centers. And I’m going to mention a few such instances. In late 2024, Mark Zuckerberg made a confession that stunned his employees: a nesting colony of bees had torpedoed Meta’s plans to open the world’s first nuclear-powered AI data center. That single environmental snag exposed a deeper vulnerability: Meta’s AI strategy depended on a single resource that it did not control and command, which is electricity.

Power outages and energy shortages have increasingly disrupted major tech companies’ operations, particularly as AI-driven data center demand strains global grids. There was another very famous incident, on March 29, 2025: a sudden loss of utility power at Google Cloud’s Columbus, Ohio unit triggered a critical failure in the uninterruptible power supply (UPS) batteries that created major havoc for several hours. This caused a cascading outage of six hours, in fact. Over 20 services were hit. Various customers experienced degraded performance or total unavailability, affecting cloud-dependent apps and websites globally. No direct apology was issued, of course, but the event underscored energy reliability in a big way in an era of AI growth. In September 2019, utility power failed at one data center in Amazon Web Services’ (AWS) North Virginia zone.

Backup generators activated but ran out of fuel after about an hour due to faulty automated refueling systems, exacerbating the blackout. It affected about 7.5% of the volume of apps and databases, and some customers lost data permanently because backups weren’t in place. Services like Slack and Netflix saw major ripples. And this has happened not only with Google Cloud or Amazon AWS; it has happened with companies such as Microsoft, whose Azure suffered a major setback in 2018. It has affected TikTok, ByteDance’s new USDS joint venture, causing widespread system failures. What it underscores is the need to ensure that there is suitable energy availability for data centers and that there is suitable backup for data centers.

Otherwise, you will not be able to run these high-powered, energy-guzzling, AI-based data centers, which are the basic unit for AI to be implemented across the board, for simplifying our work and achieving our goals of AI that is responsible, ethical, efficient, and able to do our job effectively. There is one county in the US, which my friends here on the dais would be aware of, Loudoun County, Virginia, just outside Washington, D.C., where data centers now outnumber people in density. And this 40-square-kilometer area of computer server farms is christened the data center capital of the world. It hosts about 200 operational facilities, and another 100 or so are coming up.

Their peak draw is nearly 3 gigawatts. That’s enough to power a small country. Over 70 percent of global Internet traffic passes through this area. What brought Loudoun and its implications to the world’s notice was the massive outage at Amazon, causing the tripping of crucial banking services and various social media companies. In Ireland, data centers already consume one-fifth of the nation’s electricity, more than all the urban homes combined. Data centers traditionally began largely as in-house centers for proprietary computing and data storage. They have since evolved, and today they are largely remote facilities, or networks of facilities owned by cloud service providers, housing virtualized infrastructure for the shared use of multiple companies and customers. They need tons of electricity.

With all the power-hungry hardware and cooling systems, a data center today uses higher-density racks. Whereas earlier the data center typically used something like 150 to 300 watts of electricity per square foot, today these higher-density racks can consume as much as 100 kilowatts per cabinet, which equates to 10,000 watts per square foot. And therefore a data center power problem can have global ramifications for the company. AI is supercharging a data center boom that will recharge global energy systems. Global data center electricity consumption today is 415 terawatt hours. That’s about 1.5% of the world’s total consumption of electricity. And by 2030 it’s predicted to be nearly 945 terawatt hours, or 3% of total consumption. So AI is not a side story. It’s the main driver, with accelerated servers growing 30% annually.

In the United States, which is the current epicenter, data centers used 176 terawatt-hours in 2023, or 4.4 percent of national electricity. Projections are staggering: that's like adding the entire power demand of countries like Australia or Spain. So, you know, when we look at powering AI, we have to look not just at the upstream issues of creating the requisite demand and the requisite power supply; the other factors which come into play are the downstream effects and the hidden costs of progress. Environmental: if we rely only on fossil fuels to bridge the gap, emissions soar, so you have the debate between thermal and renewable, which my colleague here will talk about. Big tech's Scope 2 emissions are already up 30 to 50 percent since 2020.
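The consumption figures quoted in this talk can be cross-checked with a few lines of arithmetic (a sketch only; the terawatt-hour numbers and percentage shares are the ones given above, and the implied world totals simply follow from dividing one by the other):

```python
# Cross-check the quoted data center electricity figures.
dc_twh_today = 415.0   # quoted global data center consumption, TWh
share_today = 0.015    # quoted ~1.5% share of world electricity
dc_twh_2030 = 945.0    # quoted 2030 projection, TWh
share_2030 = 0.03      # quoted ~3% share

# Implied world electricity totals (TWh) from each pair of figures.
world_today = dc_twh_today / share_today   # roughly 27,700 TWh
world_2030 = dc_twh_2030 / share_2030      # roughly 31,500 TWh

# Data center demand itself more than doubles by 2030.
growth_factor = dc_twh_2030 / dc_twh_today  # roughly 2.3x
```

The two implied world totals are mutually consistent with modest overall demand growth, which is why the data center share roughly doubles even as the denominator grows.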

Globally, data centers could claim 40 percent of new fossil generation if clean supply lags. And so, while on the one hand AI can help accelerate decarbonization through optimal strategies and intelligent working, at the same time the very fact that they are power guzzlers means they have inherent environmental issues, and therefore there is a need for choosing a virtuous path. They have economic and social costs, and power prices are spiking. For instance, in the US, in and around areas which have data centers, the power cost has gone up significantly. In fact, wholesale electricity has jumped 200 to 250 percent in five years in certain areas, and the households are feeling that pinch. There is an issue of reliability.

Grids weren't built for this. Voltage swings in Virginia have already tripped dozens of centers. In a warming world with rising AC loads, blackouts aren't theoretical. They're a governance failure waiting to happen. You have the equity issue: who bears the burden? Communities near data centers face noise, heat, and land-use conflicts. In developing nations such as India, the digital divide widens if energy access for AI crowds out basic needs. So there's a need for ingenuity when we're dealing with this issue, and efficiency has to be the best weapon for dealing with the larger social, environmental, and other issues connected with this. And, of course, in India, a lot is happening, about which we'll talk.

But there is indeed a moment of great happiness that AI is powering us; there is also a need to be concerned about whether we will be able to power AI effectively, and whether we will be able to efficiently manage the downstream effects of doing so. Thank you.

Ashish Khanna

Thank you so much, Raghavji, for laying out the different elements of sustainability risks for society. Nathan, your opening statement, especially from the cooling perspective.

Nathan Blom

that keeps these Northern Virginia data centers, as an example, from adapting to more efficient and effective technologies. But when you're starting with new builds, with white space technologies, you have the opportunity to actually build for the future instead of for the past. And so that, to me, is the most important element of how we're going to solve powering AI in the future.

Ashish Khanna

Thank you, Nathan. I'm sure you educated a lot of us on what's really happening in cooling innovation. Vineet, over to you: as one of the leading renewable energy developers, how do you see it?

Vineet Mittal

Good evening, everyone. I see AI as one of the biggest opportunities for the renewable sector. Historically, people believed that renewables are intermittent, which they are; it is difficult to predict when the sun shines and the wind blows. So we needed technology which can help us make intermittent power dispatchable at 15-minute intervals, so that the grid can operate in a stable environment. What AI has enabled is this: with a lot of climatic data, which the weather department collects, which renewable companies collect, which the defense department collects, plus real-time data from low-earth-orbit satellites, if you use all of it in the right way, you are able to predict with AI what your generation will look like.
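The forecast-then-dispatch loop Vineet describes can be sketched in a few lines (a toy illustration only, not any operator's actual system; the 15-minute interval comes from his description, while the forecast numbers, the 100 MW commitment, and the battery sizing are made-up examples):

```python
# Toy dispatch: firm up an intermittent forecast with a battery so the
# grid sees a flat, schedulable block at every 15-minute interval.

forecast_mw = [120, 95, 60, 140]  # hypothetical solar+wind forecast per slot
commitment_mw = 100               # flat power scheduled to the grid each slot
battery_mwh = 50.0                # usable storage capacity
soc_mwh = 25.0                    # current state of charge

dispatch_mw = []
for gen in forecast_mw:
    surplus_mwh = (gen - commitment_mw) * 0.25  # 15 minutes = 0.25 h
    if surplus_mwh >= 0:
        # Excess generation charges the battery (up to its capacity).
        soc_mwh = min(battery_mwh, soc_mwh + surplus_mwh)
        dispatch_mw.append(commitment_mw)
    else:
        # Shortfall is covered from the battery as far as possible.
        drawn = min(soc_mwh, -surplus_mwh)
        soc_mwh -= drawn
        dispatch_mw.append(commitment_mw - (-surplus_mwh - drawn) / 0.25)
```

With these example numbers the battery absorbs the swings and the grid sees the full 100 MW in every slot; a real scheduler would add forecast uncertainty, round-trip losses, and market constraints.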

And then you go a step further: you can schedule and dispatch that power like a conventional thermal plant would. So that makes AI for energy, and energy for AI. And that empowers the grid to have always-on clean power, which is the uniqueness India offers. So let me tell you, friends: when India started adding solar and wind some 15-16 years ago, we didn't even have 5 megawatts of operational assets. And this year alone, India is going to add 50,000 megawatts of solar and wind capacity, making us the second largest green energy player after China. And what gives India an edge is that, as the previous panelists were saying, in the U.S., in Malaysia, even in Ireland, which used to be the data center capital, every country started charging surcharges on powering data centers.

But the reality of life is that there are not going to be 50-megawatt or 100-megawatt data centers. Now we are talking about 500-megawatt, even gigawatt-scale data centers, because the compute requires so much power, as Nathan has explained. If you have to do it affordably, without impacting society, India is the place. And the reason I say that is that we are blessed with an abundance of sun, wind and water. Using pumped storage, because of our geography, we actually have a natural ability to do storage. And in most of the states, sun and wind are complementary in nature. So what happens is that using sun and wind alone, you can generate 14 to 18 hours of power, and then you complement it with pumped storage and battery.

And if you combine that with AI and you build your AI stack properly, you are looking at round-the-clock green power. So India is the perfect location. India is adding 50 gigawatts, and it's not competing with the normal consumer. India has very good policies: using green power, they are able to move even farming activity from the night shift to the day shift. And our per capita power consumption is one of the lowest in the world, at less than 1,500 kilowatt-hours per year per capita. So if India has to become Viksit Bharat, you can't become Viksit without data, and data is the new oil. And what is happening today is that we have 1.4 billion people, out of which a billion people are connected.

And we have one of the cheapest data connectivity packages in the world. So we are the largest user base of YouTube in the world: almost 700 million users on YouTube are from India. And we are the largest content-creating economy, whether you take Insta, whether you take YouTube, whether you take any social media. It's a repeat story even on WhatsApp, where we are more than half a billion users. And all of this data, as the previous speaker was saying, resides in some other countries. Why should we generate so much data and have it reside in another country? Probably earlier we didn't focus on using this abundance of energy to power those data centers, and now the scenario is that it makes economic sense.

In the US, now you cannot get any power before 2030; even the gas turbines are sold out. The grid waiting time in the US is typically 7 to 8 years; permitting you can get during that time. But if the world has to adopt AI at a massive scale, India offers that opportunity: we can set up multiple gigawatt-scale data centers and provide them green power using solar, wind and storage. And we actually have a very unique situation. Unlike the US or Europe, India has a single grid. You can inject power in Rajasthan and pick it up in Mumbai on a real-time basis. India has invested heavily in the grid, and we continue to grow that national grid, where the whole country is connected.

So the best locations for solar, wind, pumped storage and battery can bring power to the data centers in Mumbai and Chennai, which are already connected, so latency does not become the bottleneck, and India becomes the ideal choice. What is needed is probably a data sovereignty type of act: Indian user content has to be located in India within a certain time frame, so that developers can plan for the grid, can plan for large data center capacity, and can bring that to light. So it's one of the greatest opportunities for India, and the Indian ecosystem is fully geared up for it. On top of that, if you look even at AI, probably more than 25 percent of the talent resides in India, and that talent is currently working for other countries. They will be based in India, work for India, and provide services and intelligence to the rest of the world. That's the way forward.

Ashish Khanna

Great. So let's have a little bit of a discussion, and I do hope we get time for one or two questions. There were a few concerns that you raised, Raghavji, but a lot of optimism on both sides. I will ask a question combining two elements which are important. One relates to the whole policy and regulatory landscape: is India's, or for that matter the developing world's, policy and regulatory landscape conducive to promoting data centers? Vineet, you talked about the importance of data and the policy and regulatory landscape related to data sovereignty. Even Africa, I remember, was thinking of having legislation like Europe's, where the data for that particular continent or country should stay within that region.

But there is also a policy and regulatory question of discovering the price of power for data centers. India believes it's very competitive; the U.S. is struggling with the cost of providing power. Power, rather than Nvidia chips, is probably the limiting factor there. The second element is innovation. Nathan, you spoke about it, but we'd like to hear what an innovation landscape for cooling would look like. Is it a lot of startups? Is it some of the larger companies doing process efficiency? I want to request each of you to say what changes in the policy and regulatory landscape, and in the innovation landscape, would accelerate the speed and lower the cost of meeting data center demand.

Raghav.

Raghav Chandra

So in the Indian context, you know, with all the opportunity and the resources available to us, as Vineet mentioned, in terms of land, water and skilled manpower, the opportunities are enormous. And data center capacity is all set to explode. Today it consumes about one gigawatt of power; we are expecting it to reach about eight or nine gigawatts by 2030, and it's continuously growing. We have ambitious states like Andhra Pradesh, which can effectively be called the data cities or data states of the country. We have a coal-dominated grid, which India has pragmatically allowed to continue. We have rising cooling needs from extreme heat.

And as Nathan mentioned, some of our states can have a power usage effectiveness (PUE) which is extraordinarily high because of all the heat, whereas ideally it should be one, the perfect index. We also have a net zero ambition of complete dependence on renewable, non-fossil-fuel-based energy by the year 2070, which is our global commitment, and I think again a very bold and generous commitment by India. But the biggest issue that I find in this entire landscape, if you ask me, is the ease of doing business in India. I am not being skeptical. I have been an administrator: managing director of the state industrial development corporation, of the state investment corporation, and of the road corporation; urban development principal secretary; chairman of the national highway authority; and various other such positions. Now, when I sit back and serve on six company boards, I realize that the biggest bottleneck in India today is the lack of synergy between the departments of the government and, essentially, between the states and the center.

And if India has to move forward to achieve this huge target it has set for itself, of becoming the data center country for the world, of exploiting our entire human resources, our land resources, and the solar energy that we have, we must go beyond the regulatory schemes. On the regulatory side, much is being done. For instance, in the latest budget, we are all aware of how the Finance Minister announced a scheme of tax exemption for data centers set up in India with foreign collaboration, for the foreign component of the investment and for their revenues. Lots is happening on the renewable energy front.

Lots is happening on the various data centers that are being set up. However, lots needs to be done in terms of synchronized coordination and ensuring that the best technologies are brought in. One of the points Nathan made was about leapfrogging: India should skip technologies which other nations adopted by mistake and go straight to the best technology. Water, in the days to come, is going to be a very big and critical issue for India. And therefore, using liquid coolants and similar solutions for cooling is going to be extremely important. And this has to be realized not only by the central government but by the states, and by everybody working in the field, so that they facilitate these things being adopted in a positive manner.

I had an example of a foreign company which was talking to me the other day. They had signed an MOU with a particular state government to establish a huge amount of data center capacity there. And they said, you know, we are struggling: we've made eight presentations and we haven't been able to move forward. That's the kind of thing where, with the best intentions, and with our Prime Minister being so proactive, we should have equally proactive chief ministers, everyone getting down to business, using the large number of experts available to explain the best technology, and moving beyond perhaps even L1 procurement to get the best configurations on the ground, to ensure that we are not only efficient but effective.

Ashish Khanna

Great. So, a lot of potential, but work to be done on ease of doing business and center-state coordination; and on innovation, a big potential for Indian companies to innovate on cooling, liquid cooling especially, given water constraints. Nathan, over to you.

Nathan Blom

Yeah, I'll comment on innovation, because innovation is the foundation the IT industry is built upon. It's built upon the idea that any one individual, or small group of individuals, can create an idea that changes the entire multi-billion-dollar industry itself, and those who don't innovate end up falling off the map. You know, you don't talk anymore about AOL or Ask Jeeves or companies like that, and maybe we'll say the same thing about Meta or Microsoft or Google or Amazon someday. Who knows? Because that's the nature of the industry. And so, as we look into the future, I think innovation is going to require these smaller companies who are able to take risks and think bigger, especially around cooling technologies.

And that's what's already happening today: we're seeing people who are thinking outside the box of what we've normally considered advanced cooling technologies. Today, when we talk about advanced cooling that's being deployed, what we're really talking about is moving from that air-cooled ecosystem to a simple liquid cooling ecosystem, which was developed in the 1960s for the Apollo space mission in the United States by IBM. And it's been used ever since; if you're a gamer at home, it's been used in those large desktop gaming systems. So this is an old and proven technology. You basically use ethylene or propylene glycol mixed with water, you pump it through a pipe, it touches a cold plate on top of the hot chip, and the heat is captured in the liquid and carried away.

And that is a very simple and easy way of capturing heat, but it has limits. What we're facing is the limit that the liquid, as it leaves the chip, is getting so hot that you then have to have some way to cool it back down. And that uses an incredible amount of electricity: chillers on the roof of your data center chill that water back down. The delta between the heated water-glycol mix and the chilled water has to keep getting bigger and bigger, which means you have to chill that water lower and lower using more electricity, so you eliminate the efficiency.

There are now technologies emerging, and this is what my company is focused on, that are very similar to the way we cool air in an air conditioner, in your car, or in your refrigerator. It's called two-phase technology. Basically what that means is that instead of pumping liquid around that stays liquid, the liquid boils and vaporizes, and that change of phase from liquid to gas is 10 to 20 times more effective and efficient at capturing heat. That technology, though, is being spearheaded by small companies, and those small companies will get bought up by large companies and adopted into the ecosystem. So expect to see that.

Expect to see the same basic use of refrigeration and refrigerants that we have today and have been using for a long time, but applied specifically within the IT load of a data center ecosystem. That allows us to get those PUEs, that power usage effectiveness ratio, down from 1.5 to 1.05. That's a massive step-function increase in efficiency, which means the power generation doesn't have to be strained nearly as much. And so I think that's where the innovation is really going to come in the next three to five years.
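To make that PUE arithmetic concrete (a minimal sketch; the 1.5 and 1.05 figures are the ones quoted above, while the 100 MW IT load is a hypothetical example): PUE is total facility power divided by IT power, so everything above 1.0 is cooling and other overhead.

```python
def cooling_overhead_mw(it_load_mw: float, pue: float) -> float:
    """Power spent on everything other than the IT load itself.

    PUE = total facility power / IT power, so
    overhead = IT * PUE - IT = IT * (PUE - 1).
    """
    return it_load_mw * (pue - 1.0)

it_load = 100.0  # hypothetical 100 MW IT load
legacy = cooling_overhead_mw(it_load, 1.5)      # roughly 50 MW of overhead
two_phase = cooling_overhead_mw(it_load, 1.05)  # roughly 5 MW of overhead
```

On that hypothetical facility the overhead drops by roughly a factor of ten, which is the step-function improvement referred to above.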

Ashish Khanna

Great. On a lighter note, I'm always baffled but amused that the gaming industry was the start of GPUs, and now of cooling as well; it's fascinating how the gaming industry is responsible for the AI revolution. But there is a lot of space for small companies on the innovation side. Vineet, what do you think?

Vineet Mittal

Gaming, and BESS actually, because large batteries require the same kind of cooling. The way I see innovation happening across the board is when knowledge and cross-industry expertise start cross-fertilizing, and for that to happen you have to start creating a local ecosystem. See, we can't be sitting on the fence, solving and innovating only in theory. When you are building large data centers at gigawatt scale, you can find solutions and use those skills, because a similar challenge comes when you design a clean room. How do you combine the expertise of building a clean room of millions of square feet with the expertise required for cooling batteries and the expertise required for power usage efficiency in the data center?

How do you combine those skills and build a solution which is good for India, where humidity is high in some of the cities where the undersea optical fibers terminate, and how do you balance it out? You have to use external environmental data as well to customize your PUE. So we see that efficiency is possible at all levels, whether the ceiling height should be 6 meters or 8 meters, and so on. India in that sense is fortunate that we are building that expertise locally while building those 100 gigawatts of data centers. Morgan Stanley did a study: there is a $4 million opportunity cost for the power.

So they are saying the battle for AI is no more about compute, and no more about intelligence; it's about power. Power is the biggest challenge. And there is a lot of innovation happening in the power sector in India. You gave a good example of P2P trading using AI. And the policy in India is quite open on open access: when I give power to the grid and take it out, I'm getting the power on a real-time basis, which very few countries are able to do globally, and we account for it on a monthly basis. That gives flexibility to the data center, which always wants clean power, and wants 24-by-7, 365-days reliable power. That is what is available in India.

And I agree with Raghavji that ease of doing business is not uniform across the states in the country, but that's why the Government of India is doing stack ranking of the states. Today you can't be dependent on just one state. Look at Maharashtra: the kind of support they are providing today if you want to build a data center is amazing. Permitting, land, everything is fairly streamlined, and on top of that they incentivize. So I think the government has got it; not every state is yet on the same page that if you have to become a developed nation, your data is the biggest enabler, and if you have to win any kind of manufacturing battle, data is the biggest enabler.

Look at even our financial data today. Most of the software companies, whether it is Oracle or SAP or Microsoft, want the data to be on the cloud. Even with SAP you can't stay on ECC; everything goes to HANA on RISE, which is on the cloud, so you buy the space either from AWS or Microsoft, because they have partnerships only with those two. So where is the data of the 40,000-odd companies on these ERP softwares in India going? Opportunity-wise, I think India, because of its own need, will innovate consistently. Ease of doing business is a challenge, and that's where there is an opportunity to continuously and transparently work with the government on your challenges, suggesting solutions which do not benefit one player alone. And then third is understanding the nuances of how the application layer works across the industry, and educating government on why they should have a data localization initiative.

And I see all of this coming together, and India becoming probably the third largest country where AI adoption and data centers will be one of the enabling blocks for future growth. Thank you.

Ashish Khanna

So, a lot of optimism. I did promise one question, and I have space for only one. So please go ahead; identify yourself and keep the question brief.

Audience

My name is Umesh Prasad Singh, and I'm an associate member of the Indian Institute of Public Administration. Sir, my question is directed to you. In your paper you have mentioned global ramifications. That particular aspect of global ramifications is of both types, positive and negative. With respect to that, could you offer a clarification on that note? That is what I wanted to know.

Raghav Chandra

When I said global ramifications, I was talking essentially of the downstream effects of focusing on data centers, and of the implications for the environment, because they are power guzzling. As Nathan mentioned, earlier we had data centers full of CPUs; today they are full of GPUs, and you're going into even more complex computing units, so with the storage and the networking they are becoming far more complex. It's going to have an impact on the environment because of the heat generated intrinsically by the data center, and because when you're consuming coal to produce that power, you're using water, the same water which could serve millions of people. Today we are not able to provide adequate drinking water 24-by-7 to all our cities, yet water would effectively be used for cooling data centers.

You will have social issues, because already, for thermal power plants, people are raising issues where they find their land, especially in the scheduled areas, being consumed for coal mining; there are issues connected with that. Likewise, all kinds of social and environmental issues are likely to arise. All these things are not just localized: though they are local problems, they will affect global companies as well. The benefit for India is that we can leapfrog in terms of technology. And hopefully, as one of the speakers in the previous session mentioned, chips are also becoming more and more efficient.

So, as computing becomes more efficient and the chips become more efficient, you will require less energy. If we can leapfrog and adopt the best technologies in terms of design and infrastructure, that again will be a great saving. Today, no nation is an island. Everyone is connected, and anything which impacts one nation affects everyone, because if data centers are located here, as I mentioned in the case of the Loudoun County outage, it affects billions of people all across. So it has global ramifications. While you have to think of your own benefit, you have to keep an eye also on the impact whatever you are doing has across nations, which is why, when the Prime Minister talks of Manav, it is the human being who is at the center of it, and the human being is not just you; it is the larger mankind and the larger human community.

Ashish Khanna

Thank you. Unfortunately we do not have time for any more questions, and it's pretty late, so I'm ending without summarizing. But it's pretty apparent: huge optimism about the power of India and developing countries to meet the demand of AI, through solar and storage, innovation in liquid cooling, and of course the ecosystem, with ease of doing business. Please join me in giving a big round of applause to all of them, and thank you for staying very late. Thank you, everyone, for joining.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Raghav Chandra
6 arguments · 129 words per minute · 2453 words · 1140 seconds
Argument 1
Energy is the single greatest constraint on AI’s future, not algorithms or chips
EXPLANATION
Raghav Chandra argues that while much attention is given to algorithms and chip technology, the real bottleneck for AI development is energy availability. This constraint is more fundamental than technological limitations in computing power or software development.
EVIDENCE
Meta’s nuclear-powered AI data center plans were torpedoed by a nesting colony of bees, exposing their vulnerability to electricity dependency that they did not control
MAJOR DISCUSSION POINT
AI’s Energy Infrastructure Challenges and Global Impact
AGREED WITH
Vineet Mittal
Argument 2
Major tech companies face critical vulnerabilities from power outages affecting global operations
EXPLANATION
Power failures at data centers create cascading effects that disrupt services worldwide, demonstrating how energy reliability issues can have far-reaching consequences beyond the immediate facility. These outages expose the fragility of global digital infrastructure when energy systems fail.
EVIDENCE
Google Cloud’s Columbus outage caused 6-hour service disruption affecting 20+ services globally; AWS North Virginia outage affected 7.5% of apps with some customers losing data permanently; Microsoft Azure suffered major setbacks affecting TikTok and other services
MAJOR DISCUSSION POINT
AI’s Energy Infrastructure Challenges and Global Impact
Argument 3
Data centers consume massive amounts of electricity – currently 415 terawatt-hours globally, projected to reach 945 terawatt-hours by 2030
EXPLANATION
The electricity consumption of data centers represents a significant and rapidly growing portion of global energy demand. This growth trajectory shows data centers moving from 1.5% to 3% of total global electricity consumption, indicating an unsustainable acceleration in energy requirements.
EVIDENCE
In the US, data centers used 176 terawatt-hours in 2023 (4.4% of national electricity); Loudoun County, Virginia hosts 200+ data centers with 3 gigawatts peak draw; Ireland’s data centers consume one-fifth of national electricity, more than all urban homes combined
MAJOR DISCUSSION POINT
AI’s Energy Infrastructure Challenges and Global Impact
AGREED WITH
Announcer
Argument 4
Environmental and social costs include rising power prices for households and potential conflicts over land and water resources
EXPLANATION
The expansion of data centers creates negative externalities for local communities through increased electricity costs and competition for essential resources. These impacts disproportionately affect vulnerable populations who may lose access to basic services like drinking water.
EVIDENCE
Wholesale electricity prices jumped 200-250% in five years in certain US areas with data centers; communities near data centers face noise, heat, and land-use conflicts; water used for cooling could otherwise provide drinking water to millions
MAJOR DISCUSSION POINT
Environmental and Social Implications
Argument 5
India needs better center-state coordination and ease of doing business improvements for data center development
EXPLANATION
Despite India’s potential advantages in renewable energy and human resources, bureaucratic inefficiencies and lack of coordination between different levels of government create significant barriers to data center investment. This administrative challenge undermines India’s competitive position in attracting global data center investments.
EVIDENCE
A foreign company signed an MOU with a state government for data centers but struggled after making eight presentations without progress; the speaker’s experience as former administrator across multiple government positions reveals systemic coordination issues
MAJOR DISCUSSION POINT
Policy and Regulatory Framework Challenges
DISAGREED WITH
Vineet Mittal
Argument 6
Global ramifications affect both local communities and international operations when outages occur
EXPLANATION
Data center failures have impacts that extend far beyond their physical location, affecting billions of users worldwide while also creating local environmental and social problems. This interconnectedness means that decisions about data center development must consider both global digital infrastructure needs and local community impacts.
EVIDENCE
Loudoun County outage affected global users; data centers impact local environment through heat generation, water consumption, and land use while serving international customers
MAJOR DISCUSSION POINT
Environmental and Social Implications
Vineet Mittal
11 arguments · 141 words per minute · 1722 words · 732 seconds
Argument 1
India offers abundant sun, wind, and water resources with complementary generation patterns enabling 14-18 hours of power
EXPLANATION
India’s geographical advantages provide natural complementarity between solar and wind resources, allowing for extended periods of renewable energy generation. Combined with pumped storage capabilities, this creates opportunities for near-continuous clean power supply that other countries cannot match.
EVIDENCE
India is adding 50,000 megawatts of solar and wind capacity annually, making it the second largest green energy player after China; sun and wind are complementary in most states; abundant water resources enable pumped storage
MAJOR DISCUSSION POINT
India’s Renewable Energy Opportunity for Data Centers
AGREED WITH
Ashish Khanna
DISAGREED WITH
Raghav Chandra
Argument 2
India has a single national grid allowing power insertion in one location and pickup in another in real-time
EXPLANATION
Unlike fragmented power systems in other countries, India’s unified national grid infrastructure enables flexible power distribution across the entire country. This allows renewable energy generated in optimal locations to serve data centers in different regions without latency issues.
EVIDENCE
Power can be inserted in Rajasthan and picked up in Mumbai in real-time; India has invested heavily in grid infrastructure connecting the whole country; grid waiting time in the US is 7-8 years compared to India’s more flexible system
MAJOR DISCUSSION POINT
India’s Renewable Energy Opportunity for Data Centers
Argument 3
India is adding 50 gigawatts of renewable capacity annually without competing with normal consumers
EXPLANATION
India’s massive renewable energy expansion is structured to avoid conflicts with existing electricity users, instead creating additional capacity that can serve new demands like data centers. This approach ensures that AI infrastructure development doesn’t compromise basic energy access for the population.
EVIDENCE
India started with less than 5 megawatts 15-16 years ago and now adds 50,000 megawatts annually; policies enable moving farming activities from night to day shift using green power; per capita consumption is low at less than 1500 kilowatt hours per year
MAJOR DISCUSSION POINT
India’s Renewable Energy Opportunity for Data Centers
Argument 4
AI can help make intermittent renewable power dispatchable through better prediction and scheduling
EXPLANATION
Artificial intelligence can solve the traditional problem of renewable energy intermittency by using multiple data sources to predict generation patterns and schedule power dispatch. This technological solution makes renewable energy as reliable as conventional thermal power for grid operations.
EVIDENCE
AI uses climatic data from weather departments, renewable companies, defense departments, and real-time data from low earth orbit satellites to predict generation and schedule dispatch at 15-minute intervals
MAJOR DISCUSSION POINT
AI for Energy Applications and Innovation
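The prediction-and-scheduling idea described above can be sketched in a few lines. This is a hedged illustration only, not the actual dispatch software referenced in the discussion: the forecast values, committed power level, and storage capacity are invented for the example, and a real scheduler would also account for prices, losses, and grid constraints.

```python
# Hedged sketch: flatten an intermittent renewable forecast into a firm
# committed level using storage, evaluated per 15-minute block as described
# in the discussion. All numbers below are invented for illustration.

def dispatch_schedule(forecast_mw, committed_mw, storage_mwh, max_storage_mwh):
    """For each 15-minute block, charge storage on surplus, discharge on deficit."""
    delivered = []
    block_hours = 0.25  # one 15-minute block
    for gen in forecast_mw:
        if gen >= committed_mw:
            # Surplus generation: bank the excess energy in storage.
            surplus = (gen - committed_mw) * block_hours
            storage_mwh = min(max_storage_mwh, storage_mwh + surplus)
            delivered.append(committed_mw)
        else:
            # Deficit: draw from storage to hold the committed level if possible.
            deficit = (committed_mw - gen) * block_hours
            draw = min(storage_mwh, deficit)
            storage_mwh -= draw
            delivered.append(gen + draw / block_hours)
    return delivered, storage_mwh

forecast = [120, 150, 90, 60]  # MW per 15-min block (illustrative forecast)
out, remaining = dispatch_schedule(forecast, committed_mw=100,
                                   storage_mwh=5, max_storage_mwh=50)
print(out, remaining)
```

With these toy numbers the surplus in the first two blocks covers the deficit in the last two, so the delivered profile holds the committed 100 MW throughout, which is the sense in which prediction plus storage makes intermittent power "dispatchable".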
Argument 5
Power availability is now the biggest challenge for AI, not compute or intelligence
EXPLANATION
The bottleneck for AI development has shifted from technological capabilities to energy infrastructure, with power constraints becoming the primary limiting factor for scaling AI systems. This represents a fundamental change in what determines AI advancement possibilities.
EVIDENCE
Morgan Stanley study identifies $4 million opportunity cost for power; in the US, no new power is available before 2030 and all gas machines are sold out; grid waiting time is 7-8 years
MAJOR DISCUSSION POINT
AI’s Energy Infrastructure Challenges and Global Impact
AGREED WITH
Raghav Chandra
Argument 6
Data sovereignty policies requiring local data storage are needed to drive domestic data center investment
EXPLANATION
Government policies mandating that data generated by domestic users must be stored within the country would create guaranteed demand for local data center infrastructure. This regulatory approach would provide the certainty needed for large-scale investments in domestic data center capacity.
EVIDENCE
India has 1.4 billion people with 1 billion connected users generating massive data through 700+ million YouTube users and 500+ million WhatsApp users, but this data currently resides in other countries
MAJOR DISCUSSION POINT
Policy and Regulatory Framework Challenges
Argument 7
Some Indian states like Maharashtra are providing streamlined permitting and incentives for data centers
EXPLANATION
While ease of doing business varies across Indian states, leading states are demonstrating best practices by simplifying regulatory processes and offering attractive incentive packages. This state-level competition is driving improvements in the investment climate for data center development.
EVIDENCE
Government of India does stack ranking of states; Maharashtra provides amazing support for data center development with streamlined permitting, land allocation, and incentives
MAJOR DISCUSSION POINT
Policy and Regulatory Framework Challenges
DISAGREED WITH
Raghav Chandra
Argument 8
India’s open access power policies allow flexible real-time power trading for data centers
EXPLANATION
India’s regulatory framework enables data centers to inject power into the grid and withdraw it in real-time, with accounting done on a monthly basis. This flexibility is rare globally and provides data centers with the reliable, clean power access they require for operations.
EVIDENCE
Open access policies allow power injection and real-time withdrawal with monthly accounting; this gives data centers the 24x7x365 reliable clean power they need, which few countries can provide globally
MAJOR DISCUSSION POINT
Policy and Regulatory Framework Challenges
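The monthly-netting arrangement described in this argument can be illustrated with a toy ledger. This is a hedged sketch of the accounting concept only: actual open-access rules involve banking charges, transmission losses, and time-of-day provisions that are omitted here, and the volumes and tariff are invented.

```python
# Hedged sketch (not an actual regulatory formula): a captive renewable plant
# injects power into the grid in one location, the data center withdraws in
# real time elsewhere, and the ledger is settled monthly on the net balance.

from dataclasses import dataclass

@dataclass
class MonthlyLedger:
    injected_mwh: float = 0.0
    withdrawn_mwh: float = 0.0

    def inject(self, mwh: float) -> None:
        self.injected_mwh += mwh

    def withdraw(self, mwh: float) -> None:
        self.withdrawn_mwh += mwh

    def settle(self, grid_tariff_per_mwh: float) -> float:
        """Monthly settlement: pay only for net energy drawn from the grid."""
        net_drawn = max(0.0, self.withdrawn_mwh - self.injected_mwh)
        return net_drawn * grid_tariff_per_mwh

ledger = MonthlyLedger()
ledger.inject(7000.0)    # e.g. solar/wind injected in Rajasthan over a month
ledger.withdraw(7200.0)  # e.g. data-center consumption in Mumbai that month
print(ledger.settle(grid_tariff_per_mwh=4000.0))  # bills only the 200 MWh net draw
```

The point of the sketch is that monthly netting lets the data center treat geographically distant renewable generation as its own supply, paying grid tariffs only on the residual.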
Argument 9
India generates massive amounts of data with over 700 million YouTube users and billions on social platforms
EXPLANATION
India’s large connected population creates enormous amounts of digital content and data across multiple platforms, making it one of the world’s largest data-generating economies. This domestic data generation provides a strong foundation for local data center demand if supported by appropriate policies.
EVIDENCE
1.4 billion people with 1 billion connected; largest YouTube user base globally with 700+ million users; largest content creating economy across Instagram, YouTube, and social media; over 500 million WhatsApp users; cheapest data connectivity packages globally
MAJOR DISCUSSION POINT
India’s Data and Talent Advantages
Argument 10
Over 25% of global AI talent resides in India and could work domestically instead of for other countries
EXPLANATION
India possesses a significant portion of the world’s AI expertise, but this talent currently serves international markets rather than domestic development. Redirecting this talent toward domestic AI infrastructure and services could accelerate India’s position in the global AI economy.
EVIDENCE
More than 25% of global AI talent resides in India; this talent currently works for other countries but could be based in India working for India while providing services to the rest of the world
MAJOR DISCUSSION POINT
India’s Data and Talent Advantages
Argument 11
India offers the cheapest data connectivity packages globally with over 1 billion connected users
EXPLANATION
India’s telecommunications infrastructure provides extremely affordable data access to a massive user base, creating ideal conditions for data-intensive applications and services. This cost advantage, combined with the large user base, makes India an attractive location for data center operations serving both domestic and international markets.
EVIDENCE
India has one of the cheapest data connectivity packages in the world; over 1 billion people are connected out of 1.4 billion total population
MAJOR DISCUSSION POINT
India’s Data and Talent Advantages
Nathan Blom
4 arguments · 173 words per minute · 697 words · 240 seconds
Argument 1
Current data centers use outdated air-cooling technology when new builds should adopt future-ready solutions
EXPLANATION
Many existing data centers are constrained by legacy cooling infrastructure that prevents them from adopting more efficient technologies. However, new data center construction provides opportunities to implement advanced cooling solutions designed for future high-density computing requirements rather than outdated approaches.
EVIDENCE
Northern Virginia data centers are constrained by existing infrastructure; new builds with white space technologies can build for the future instead of the past
MAJOR DISCUSSION POINT
Cooling Technology Innovation and Efficiency
Argument 2
Two-phase cooling technology using refrigerants is 10-20 times more effective than traditional liquid cooling
EXPLANATION
Advanced cooling systems that allow liquid to boil and vaporize capture heat much more efficiently than traditional liquid cooling systems that keep coolant in liquid form. This phase-change technology, similar to refrigeration systems, represents a significant leap in cooling efficiency for high-density computing environments.
EVIDENCE
Traditional liquid cooling uses ethylene or propylene glycol mixed with water, a technology dating to the 1960s Apollo space mission; two-phase technology, similar to that used in air conditioners, cars, and refrigerators, is 10-20 times more effective at capturing heat
MAJOR DISCUSSION POINT
Cooling Technology Innovation and Efficiency
AGREED WITH
Vineet Mittal
Argument 3
Innovation in cooling can achieve power usage efficiency ratios of 1.05 instead of 1.5
EXPLANATION
Advanced cooling technologies can dramatically improve the power usage effectiveness (PUE) of data centers, meaning much less electricity is wasted on cooling relative to actual computing work. This improvement represents a massive step function increase in overall data center efficiency.
EVIDENCE
Current advanced cooling deployments achieve PUE of 1.5; two-phase cooling technology can achieve PUE of 1.05, representing a massive step function increase in efficiency
MAJOR DISCUSSION POINT
Cooling Technology Innovation and Efficiency
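The PUE figures quoted above are easy to make concrete. PUE (power usage effectiveness) is total facility energy divided by IT equipment energy, so a drop from 1.5 to 1.05 eliminates most of the non-IT overhead. The 100 MW IT load below is an assumed figure for illustration only.

```python
# Worked arithmetic for the PUE improvement quoted in the discussion.
# PUE = total facility power / IT equipment power, so PUE 1.5 spends
# 0.5 W on cooling and overhead for every 1 W of useful compute.

def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility draw for a given IT load at a given PUE."""
    return it_load_mw * pue

it_load = 100.0                          # assumed 100 MW IT load
old = facility_power_mw(it_load, 1.5)    # total draw at PUE 1.5
new = facility_power_mw(it_load, 1.05)   # total draw at PUE 1.05

# Share of the non-IT overhead that the better cooling eliminates.
overhead_reduction = (old - new) / (old - it_load)
print(f"Total draw falls from {old:.0f} MW to {new:.0f} MW")
print(f"{overhead_reduction:.0%} of the non-IT overhead is eliminated")
```

For the same compute, total draw falls from 150 MW to 105 MW, i.e. 90% of the cooling-and-overhead power disappears, which is why the panel describes the change as a step function rather than an incremental gain.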
Argument 4
Small companies are driving cooling innovation and will be acquired by larger companies
EXPLANATION
The IT industry’s innovation model relies on small companies taking risks and developing breakthrough technologies that larger corporations then acquire and scale. This pattern is expected to continue in cooling technology development, where startups are pioneering solutions that will eventually be adopted industry-wide.
EVIDENCE
Innovation requires smaller companies who can take risks and think bigger; the IT industry is built on individuals or small groups creating ideas that change multi-billion dollar industries; companies like AOL and Ask Jeeves fell off the map due to lack of innovation
MAJOR DISCUSSION POINT
Cooling Technology Innovation and Efficiency
Ashish Khanna
2 arguments · 153 words per minute · 1530 words · 598 seconds
Argument 1
AI can help distribution companies absorb decentralized solar and reduce overall system costs
EXPLANATION
Distribution companies traditionally resist decentralized renewable energy because it complicates their operations and finances, but artificial intelligence can provide the digital tools needed to manage distributed generation effectively. This technological solution can make decentralized renewables beneficial rather than problematic for utilities.
EVIDENCE
40% of global solar (400 gigawatts) is decentralized, but only 15-20% in India; distribution companies don’t like decentralized solar because it impacts distribution systems and finances
MAJOR DISCUSSION POINT
AI for Energy Applications and Innovation
AGREED WITH
Vineet Mittal
Argument 2
The intersection of energy and AI will create fundamental shifts similar to how Amazon changed retail
EXPLANATION
The convergence of artificial intelligence and energy systems represents a transformative moment that will reshape entire industries and business models. This intersection is expected to create disruptions and opportunities comparable to major digital transformations in other sectors.
EVIDENCE
International Solar Alliance is creating an ISA Academy to train people to bring together AI and energy skills; many AI engineers don’t understand energy and energy engineers don’t understand AI
MAJOR DISCUSSION POINT
AI for Energy Applications and Innovation
Announcer
3 arguments · 88 words per minute · 177 words · 119 seconds
Argument 1
A single large AI training run can consume as much electricity as thousands of homes use in a year
EXPLANATION
The Announcer highlights the massive energy consumption of AI systems by comparing a single training run to the annual electricity usage of thousands of households. This demonstrates the unprecedented scale of power requirements for AI infrastructure and sets the context for the discussion about energy challenges.
EVIDENCE
A single large AI training run can consume as much electricity as thousands of homes use in a year
MAJOR DISCUSSION POINT
AI’s Energy Infrastructure Challenges and Global Impact
AGREED WITH
Raghav Chandra
Argument 2
Data centers are facing unprecedented power and cooling requirements as AI scales at speed
EXPLANATION
The Announcer establishes that the rapid scaling of AI technology is creating infrastructure demands that exceed anything previously experienced. This framing emphasizes both the power generation needs and the cooling technology requirements that must evolve to support AI growth.
EVIDENCE
Data centers are facing unprecedented power and cooling requirements as AI scales at speed
MAJOR DISCUSSION POINT
AI’s Energy Infrastructure Challenges and Global Impact
Argument 3
Critical questions include how to plan for rapidly rising and uncertain energy demand and whether edge computing can reduce the load
EXPLANATION
The Announcer identifies key strategic questions about managing unpredictable energy demand growth and exploring technological solutions like edge computing. This highlights the uncertainty in planning for AI infrastructure and the need to consider distributed computing approaches as potential solutions.
EVIDENCE
Questions like: how do we plan for rapidly rising and uncertain energy demand? Can edge computing reduce the load, or is centralization inevitable?
MAJOR DISCUSSION POINT
AI’s Energy Infrastructure Challenges and Global Impact
Audience
1 argument · 175 words per minute · 67 words · 22 seconds
Argument 1
Global ramifications of data center development have both positive and negative aspects that need clarification
EXPLANATION
The audience member seeks clarification on the dual nature of global impacts from data center expansion, recognizing that there are both benefits and drawbacks to consider. This question demonstrates awareness that data center development creates complex trade-offs with international implications.
EVIDENCE
In your paper you have mentioned global ramifications… that aspect of global ramifications is of both types, that is, positive and negative
MAJOR DISCUSSION POINT
Environmental and Social Implications
Agreements
Agreement Points
Energy is the primary constraint for AI development, not computing power or algorithms
Speakers: Raghav Chandra, Vineet Mittal
Energy is the single greatest constraint on AI’s future, not algorithms or chips
Power availability is now the biggest challenge for AI, not compute or intelligence
Both speakers agree that the bottleneck for AI advancement has shifted from technological capabilities to energy infrastructure, with power constraints becoming the fundamental limiting factor for scaling AI systems.
Data centers consume massive amounts of electricity with rapidly growing demand
Speakers: Raghav Chandra, Announcer
Data centers consume massive amounts of electricity – currently 415 terawatt hours globally, projected to reach 945 terawatt hours by 2030
A single large AI training run can consume as much electricity as thousands of homes use in a year
There is clear consensus on the unprecedented and rapidly growing energy demands of AI infrastructure, with specific quantification of current and projected consumption levels.
Innovation in cooling technology is critical for data center efficiency
Speakers: Nathan Blom, Vineet Mittal
Two-phase cooling technology using refrigerants is 10-20 times more effective than traditional liquid cooling
Innovation happens across the board when knowledge and cross-industry expertise start cross-fertilizing
Both speakers emphasize the importance of technological innovation in cooling systems, with Nathan providing specific technical solutions and Vineet highlighting the need for cross-industry knowledge transfer.
India has significant advantages for data center development
Speakers: Vineet Mittal, Ashish Khanna
India offers abundant sun, wind, and water resources with complementary generation patterns enabling 14-18 hours of power
AI can help distribution companies absorb decentralized solar and reduce overall system costs
Both speakers see India as uniquely positioned to leverage renewable energy and AI technologies to become a major data center hub, with natural resource advantages and technological capabilities.
Similar Viewpoints
Both recognize the importance of managing social and environmental impacts of data center development, though Vineet is more optimistic about India’s ability to avoid resource conflicts through renewable energy expansion.
Speakers: Raghav Chandra, Vineet Mittal
Environmental and social costs include rising power prices for households and potential conflicts over land and water resources
India is adding 50 gigawatts of renewable capacity annually without competing with normal consumers
Both acknowledge challenges in India’s regulatory environment while recognizing that some states are making progress in improving the business climate for data center investments.
Speakers: Raghav Chandra, Vineet Mittal
India needs better center-state coordination and ease of doing business improvements for data center development
Some Indian states like Maharashtra are providing streamlined permitting and incentives for data centers
Both see innovation as driven by smaller, more agile companies that can take risks and develop breakthrough technologies, leading to industry-wide transformations.
Speakers: Nathan Blom, Ashish Khanna
Small companies are driving cooling innovation and will be acquired by larger companies
The intersection of energy and AI will create fundamental shifts similar to how Amazon changed retail
Unexpected Consensus
Gaming industry’s role in AI infrastructure development
Speakers: Nathan Blom, Ashish Khanna
Traditional liquid cooling uses technology from the 1960s Apollo space mission and gaming systems
The gaming industry was the start of GPUs and is now driving cooling as well
There was unexpected consensus on how the gaming industry has been a crucial driver of both GPU technology and cooling innovations that are now essential for AI infrastructure, highlighting an unconventional pathway for technological development.
Need for cross-industry knowledge transfer
Speakers: Vineet Mittal, Nathan Blom, Ashish Khanna
Innovation happens across the board when knowledge and cross-industry expertise start cross-fertilizing
Innovation requires smaller companies who can take risks and think bigger
International Solar Alliance is creating an ISA Academy to train people to bring together AI and energy skills
All speakers unexpectedly converged on the importance of breaking down silos between industries and skill sets, recognizing that AI and energy challenges require interdisciplinary approaches and knowledge sharing across traditional boundaries.
Overall Assessment

The speakers demonstrated strong consensus on several key issues: energy as the primary constraint for AI development, the massive and growing electricity demands of data centers, the critical importance of cooling technology innovation, and India’s potential advantages in renewable energy and data center development. There was also agreement on the need for better regulatory frameworks and cross-industry collaboration.

High level of consensus with complementary expertise – the speakers approached the topic from different angles (policy/academic, renewable energy development, cooling technology, and international coordination) but arrived at remarkably similar conclusions about the challenges and opportunities. This strong alignment suggests a mature understanding of the issues and creates a solid foundation for coordinated action on powering AI infrastructure sustainably.

Differences
Different Viewpoints
Ease of doing business and regulatory efficiency in India
Speakers: Raghav Chandra, Vineet Mittal
India needs better center-state coordination and ease of doing business improvements for data center development
Some Indian states like Maharashtra are providing streamlined permitting and incentives for data centers
Raghav Chandra emphasizes systemic coordination problems and bureaucratic inefficiencies as major barriers, citing examples of foreign companies struggling with multiple presentations without progress. Vineet Mittal acknowledges the challenge but presents a more optimistic view, highlighting successful examples like Maharashtra and government initiatives like state stack ranking that are driving improvements.
Timeline and feasibility of India becoming a major data center hub
Speakers: Raghav Chandra, Vineet Mittal
India needs better center-state coordination and ease of doing business improvements for data center development
India offers abundant sun, wind, and water resources with complementary generation patterns enabling 14-18 hours of power
Raghav Chandra presents a cautious view emphasizing the significant administrative and coordination challenges that need to be overcome before India can realize its potential. Vineet Mittal presents an optimistic timeline suggesting India is already well-positioned and could become the third largest country for AI adoption and data centers in the near term.
Unexpected Differences
Priority focus for addressing AI energy challenges
Speakers: Raghav Chandra, Vineet Mittal
Environmental and social costs include rising power prices for households and potential conflicts over land and water resources
India is adding 50 gigawatts of renewable capacity annually without competing with normal consumers
While both speakers acknowledge energy challenges for AI, Raghav unexpectedly emphasizes the social equity and environmental justice aspects (water access, rising electricity costs for households), while Vineet focuses on market opportunities and technical solutions. This represents a fundamental difference in framing the problem – social responsibility versus economic opportunity.
Overall Assessment

The discussion revealed moderate disagreements primarily around implementation timelines and regulatory challenges in India, with speakers generally aligned on the fundamental energy constraints facing AI development but differing on solutions and priorities.

The disagreement level is moderate but significant for policy implications. While there’s consensus on the core challenge (energy constraints for AI), the different perspectives on India’s readiness and the emphasis on social versus economic considerations could lead to different policy recommendations and investment strategies. The disagreements suggest a need for balanced approaches that address both technical opportunities and social equity concerns.

Partial Agreements
All speakers agree that energy is the fundamental constraint for AI development and that current infrastructure faces massive challenges. However, they differ on solutions: Raghav emphasizes the need for comprehensive policy coordination and warns about social/environmental costs, Vineet focuses on India’s renewable energy advantages and market opportunities, while Nathan emphasizes technological innovation in cooling systems.
Speakers: Raghav Chandra, Vineet Mittal, Nathan Blom
Data centers consume massive amounts of electricity – currently 415 terawatt hours globally, projected to reach 945 terawatt hours by 2030
Power availability is now the biggest challenge for AI, not compute or intelligence
Current data centers use outdated air-cooling technology when new builds should adopt future-ready solutions
Both agree on AI’s potential to solve renewable energy challenges, but approach from different angles: Ashish focuses on helping distribution companies manage decentralized solar through digitization, while Vineet emphasizes using AI to make renewable power as reliable as thermal power through prediction and scheduling.
Speakers: Ashish Khanna, Vineet Mittal
AI can help distribution companies absorb decentralized solar and reduce overall system costs
AI can help make intermittent renewable power dispatchable through better prediction and scheduling
Takeaways
Key takeaways
Energy availability, not algorithms or chips, is the single greatest constraint on AI’s future development
India has significant potential to become a global data center hub due to abundant renewable energy resources, a single national grid, and complementary solar-wind generation patterns
AI can transform energy systems through better prediction and scheduling of renewable power, making intermittent sources dispatchable
Cooling technology innovation, particularly two-phase cooling systems, can dramatically improve power usage efficiency from 1.5 to 1.05
Data sovereignty policies requiring local data storage are essential to drive domestic data center investment and capitalize on India’s data generation
The intersection of AI and energy will create fundamental industry shifts comparable to how Amazon transformed retail
Small innovative companies are driving breakthrough cooling technologies that will be adopted by larger corporations
India’s massive data generation (700M+ YouTube users, 1B+ connected users) combined with 25% of global AI talent creates a strong foundation for domestic AI infrastructure
Resolutions and action items
International Solar Alliance launched a global AI mission for energy called ‘AI for Energy’
ISA Academy to be created to train engineers combining AI and energy skills
Need for interoperable global standards for AI-energy integration systems
States like Maharashtra are streamlining permitting processes and providing incentives for data center development
Government implementing stack ranking of states to improve ease of doing business for data centers
Tax exemption schemes announced for data centers with foreign collaboration
Unresolved issues
Center-state coordination challenges and inconsistent ease of doing business across Indian states
Water scarcity concerns for data center cooling in a country struggling to provide 24/7 drinking water access
Environmental and social costs including rising power prices for households near data centers
Balancing development benefits with impacts on basic needs like drinking water and environmental sustainability
Grid waiting times and permitting delays that could hinder rapid data center deployment
Need for comprehensive data localization legislation with specific timelines
Workforce development to bridge the gap between AI engineers and energy engineers
Suggested compromises
Leapfrog to advanced cooling technologies rather than adopting outdated solutions used in other countries
Focus on states with better regulatory frameworks while working to improve lagging states
Combine expertise across industries (clean rooms, battery cooling, data centers) to develop India-specific solutions
Use AI to shift farming activities from night to day to better utilize solar power generation
Implement liquid cooling solutions to address water constraints while maintaining efficiency
Balance renewable energy development with grid stability through AI-enabled prediction and storage systems
Thought Provoking Comments
The single greatest constraint on AI’s future, which is not algorithms, not chips, but it is energy for AI-based data centers.
This comment reframes the entire AI development narrative by identifying energy as the primary bottleneck rather than the commonly discussed technological constraints. It challenges the conventional focus on computational power and algorithmic advancement.
This statement set the foundational premise for the entire discussion, shifting focus from technical capabilities to infrastructure limitations. It established energy as the central theme and influenced subsequent speakers to address power-related solutions and innovations.
Speaker: Raghav Chandra
A nesting colony of bees had torpedoed Meta’s plans to open the world’s first nuclear-powered AI data center… exposed their deeper vulnerability that Meta’s AI strategy depended on a single resource that it did not control and command, which is electricity.
This anecdote brilliantly illustrates how even tech giants are vulnerable to seemingly minor environmental factors, highlighting the fragility of AI infrastructure dependencies. It demonstrates that technological advancement can be derailed by factors completely outside the digital realm.
This story provided a memorable and concrete example that grounded the abstract discussion in reality. It influenced the conversation to consider not just power availability but also the reliability and environmental factors affecting data center operations.
Speaker: Raghav Chandra
The battle for the AI is no more compute. And it’s no more intelligence. It’s the power. Power is the biggest challenge.
This statement crystallizes the paradigm shift in AI development priorities, moving from a focus on computational capabilities to energy infrastructure. It represents a fundamental reframing of what constitutes the competitive advantage in AI.
This comment reinforced and amplified Raghav’s earlier point about energy constraints, creating a consensus among panelists about the centrality of power issues. It helped transition the discussion toward solutions and opportunities rather than just problems.
Speaker: Vineet Mittal
India offers that opportunity where we can set up multiple gigawatt data center. We can provide them green power using solar, wind and storage… Unlike US or Europe, India has a single grid.
This insight positions India not just as a cost-effective alternative but as a structurally superior solution due to its unified grid system and renewable energy potential. It challenges the assumption that developed countries are inherently better positioned for AI infrastructure.
This comment shifted the discussion from problem identification to solution positioning, introducing geopolitical and economic dimensions. It influenced the conversation toward India’s competitive advantages and policy implications for developing nations.
Speaker: Vineet Mittal
Today, when we talk about advanced cooling that’s being deployed, what we’re really talking about is moving from that air-cooled ecosystem to just a simple liquid cooling ecosystem, which was developed in the 1960s for the Apollo space mission… this is an old and proven technology.
This comment reveals that what the industry considers ‘advanced’ is actually decades-old technology, exposing a significant innovation gap. It suggests that the cooling industry has been complacent and that there’s substantial room for technological leapfrogging.
This observation introduced a critical perspective on innovation in the cooling sector, challenging assumptions about technological progress. It opened up discussion about the potential for dramatic efficiency improvements and the role of smaller companies in driving innovation.
Speaker: Nathan Blom
The biggest bottleneck in India today is the lack of synergy between the states and the center, between the departments of the government and essentially between the states and the center.
This comment identifies a systemic governance issue that could undermine India’s potential advantages in the AI infrastructure space. It provides a realistic counterbalance to the optimistic projections about India’s capabilities.
This insight introduced a sobering reality check to the discussion, tempering the enthusiasm about India’s potential with practical implementation challenges. It influenced the conversation to consider not just technical and economic factors but also governance and policy execution issues.
Speaker: Raghav Chandra
Overall Assessment

These key comments fundamentally shaped the discussion by establishing a clear narrative arc: from problem identification (energy as the primary constraint) to opportunity recognition (India’s advantages) to implementation challenges (governance and innovation gaps). The comments created a multi-dimensional analysis that moved beyond simple technical discussions to encompass geopolitical, environmental, economic, and governance perspectives. The interplay between these insights created a comprehensive framework for understanding the complex relationship between AI development and energy infrastructure, while positioning developing nations like India as potentially superior solutions to the challenges facing developed countries in AI infrastructure deployment.

Follow-up Questions
How can consumers with rooftop solar and batteries effectively trade power through P2P trading?
This requires regulatory evolution and IT architecture development for distribution companies to enable millions of prosumers to trade power effectively
Speaker: Ashish Khanna
What IT architecture is needed for distribution companies to be ready for power trading?
Each distribution company needs specific digital infrastructure to facilitate decentralized renewable energy trading
Speaker: Ashish Khanna
How can we bridge the skills gap between AI engineers and energy engineers?
There’s a critical need to train professionals who understand both AI and energy systems for the intersection of these fields
Speaker: Ashish Khanna
What interoperable standards are needed globally for AI-energy integration?
The world lacks unified standards for how AI and energy systems will work together across different countries and regions
Speaker: Ashish Khanna
Can 24×7 solar and storage provide cost-competitive energy to hyperscale data centers?
This is crucial for determining if renewable energy can meet the massive power demands of large data centers above 100 MW
Speaker: Ashish Khanna
How can India improve center-state coordination and ease of doing business for data center development?
Despite India’s potential, bureaucratic bottlenecks and lack of coordination between government levels are hindering data center investments
Speaker: Raghav Chandra
How can India adopt advanced liquid cooling technologies given water constraints?
Water scarcity is a critical issue in India, requiring innovative cooling solutions that don’t compete with basic water needs
Speaker: Raghav Chandra
What is the timeline and framework needed for data sovereignty legislation in India?
India needs clear policies requiring Indian user data to be stored locally within specific timeframes to enable data center planning
Speaker: Vineet Mittal
How can two-phase cooling technology be scaled and adopted in data centers?
This technology could achieve PUE ratios of 1.05 compared to current 1.5, representing massive efficiency gains that need further development
Speaker: Nathan Blom
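To illustrate the scale of the efficiency gain implied by the PUE figures cited above, the following is a minimal arithmetic sketch. It assumes a hypothetical 100 MW IT load (the function name and load figure are illustrative, not from the session); PUE is defined as total facility power divided by IT equipment power, so the non-IT overhead is IT load × (PUE − 1).

```python
# Illustrative arithmetic only: compares the non-IT overhead implied by
# the PUE values mentioned in the discussion (1.5 today vs. 1.05 with
# two-phase cooling). The 100 MW IT load is an assumed example figure.

def overhead_power(it_load_mw: float, pue: float) -> float:
    """Non-IT load (cooling, power distribution, etc.) in MW for a given PUE."""
    return it_load_mw * (pue - 1.0)

it_load = 100.0                            # assumed hyperscale IT load
current = overhead_power(it_load, 1.5)     # overhead at today's typical PUE
two_phase = overhead_power(it_load, 1.05)  # overhead at the cited target PUE

print(f"Overhead at PUE 1.5:  {current:.0f} MW")
print(f"Overhead at PUE 1.05: {two_phase:.0f} MW")
print(f"Overhead reduction:   {1 - two_phase / current:.0%}")
```

Under these assumptions, the same 100 MW of compute would need roughly 50 MW of overhead at PUE 1.5 but only about 5 MW at PUE 1.05, a ~90% cut in non-IT energy, which is what the speakers characterize as a massive efficiency gain.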
How can cross-industry expertise be combined to solve data center cooling challenges in India’s climate?
Combining knowledge from clean rooms, battery cooling, and data centers could create customized solutions for India’s high humidity environments
Speaker: Vineet Mittal

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.