Press Briefing by HMIT Ashwini Vaishnaw on AI Impact Summit 2026 | Day 5

Session at a glance

Summary, key points, and speakers overview

Summary

This transcript captures a press conference led by Indian Minister Ashwini Vaishnaw discussing the outcomes and achievements of India’s AI Impact Summit. The minister highlighted the summit’s unprecedented global participation, with major AI players, startups, and world leaders attending from around the world. He emphasized Prime Minister Narendra Modi’s vision of “Manav AI” – artificial intelligence of the humans, by the humans, for the humans – which received widespread acceptance from international participants.


Vaishnaw reported significant investment commitments totaling over $250 billion for infrastructure-related investments and $20 billion for venture capital deep tech investments. The summit achieved a Guinness World Record for involving 2.5 lakh students in the AI journey, demonstrating India’s commitment to youth engagement in technology. The minister noted that the Delhi Declaration had already secured over 70 signatories, with expectations to reach 80 countries by the summit’s conclusion, surpassing the previous summit’s 60 signatories.


The discussion covered India’s progress in AI development, including the launch of 12 foundational models exceeding the original target of two, and the expansion of GPU capacity from a planned 10,000 to 38,000 units. Vaishnaw announced plans for AI Mission 2.0, which will be significantly larger in scope than the initial mission. He addressed questions about implementation, emphasizing that real collaborations and MOUs were established rather than just paper commitments.


The minister also discussed India’s approach to AI governance, including new regulations for synthetic and generated content, data protection frameworks, and the importance of inclusive growth ensuring AI benefits reach every citizen. The summit reinforced India’s position as a trusted partner in global AI development and semiconductor supply chains.


Key points

Major Discussion Points:

AI Summit Success and Global Participation: Minister Vaishnaw highlighted the phenomenal success of India’s AI Impact Summit, with participation from major global AI players, startups, and over 250 billion dollars in infrastructure investment pledges. The summit achieved a Guinness World Record for involving 2.5 lakh students and is expected to have 80+ countries signing the final declaration.


“Manav AI” Vision and Responsible AI Development: Prime Minister Modi’s vision of “Manav AI” (AI of the humans, by the humans, for the humans) was widely accepted by global participants. The discussion emphasized India’s leadership in bringing responsible and ethical AI to the forefront, with major AI companies agreeing to voluntary commitments and safety guidelines.


India’s Technological Infrastructure and Sovereignty: The conversation covered India’s progress in building AI infrastructure, including exceeding GPU targets (38,000 vs. planned 10,000), developing 12 foundational AI models, and establishing AI safety institutes. There was emphasis on creating a “sovereign bouquet of models” and India’s trusted position in global semiconductor supply chains.


Implementation and Democratization of AI Benefits: Significant focus on ensuring AI benefits reach the “last person” in society through inclusive growth policies, state government collaboration, and the upcoming AI Mission 2.0. Discussion included plans for AI education in schools, democratization of AI tools, and addressing concerns about data protection and cybersecurity.


Global Governance and Regulatory Framework: The summit addressed international cooperation on AI governance, including the Delhi Declaration, frontier AI commitments, and India’s new Synthetic and Generated Intelligence (SGI) regulations that require transparency in AI-generated content.


Overall Purpose:

The discussion served as a comprehensive briefing on India’s AI Impact Summit, aimed at positioning India as a global leader in responsible AI development while showcasing the country’s technological capabilities, policy framework, and commitment to inclusive AI adoption that benefits all citizens.


Overall Tone:

The tone was predominantly celebratory and confident throughout, with Minister Vaishnaw expressing pride in India’s achievements and global recognition. The tone remained consistently optimistic and forward-looking, with occasional defensive moments when addressing political criticism (particularly regarding Congress party disruptions). The minister maintained an authoritative yet accessible demeanor when responding to technical questions, and the overall atmosphere was one of national accomplishment and ambitious future planning.


Speakers

Speakers from the provided list:


Ashwini Vaishnaw: Minister (appears to be the main speaker/minister addressing questions about AI Impact Summit, semiconductor industry, and government initiatives)


Speaker 1: Moderator/facilitator (managing the Q&A session, directing questions, and maintaining order during the press conference)


Randhir Jaiswal: Ministry of External Affairs official (thanked for the MEA’s role in organizing the AI summit, mentioned working with MeitY as Team India)


Speaker 4: Role/title not mentioned (made a brief interjection during the session)


Audience: Multiple journalists and media representatives asking questions (includes reporters from various news organizations like ANI, Economic Times, Mint, Money Control, Business Standard, Hindustan Times, PTI, BBC, DD India, Zee News, etc.)


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speakers names list.


Full session report

Comprehensive analysis and detailed insights

This transcript captures a comprehensive press conference led by Indian Minister Ashwini Vaishnaw following India’s landmark AI Impact Summit, revealing the country’s ambitious positioning as a global leader in responsible artificial intelligence development and governance. The discussion, which included extensive Q&A with journalists from various media organizations including ANI, Economic Times, Mint, and Business Standard, demonstrates India’s strategic approach to AI diplomacy, technological sovereignty, and inclusive development whilst addressing complex questions about implementation, regulation, and international cooperation.


Summit Success and Global Recognition

The AI Impact Summit achieved unprecedented international participation, establishing India as a significant player in global AI governance. Minister Vaishnaw emphasised the “phenomenal quality of discussion” across all summit components, from ministerial dialogues to the leaders’ plenary and main inauguration function. The event attracted practically every major AI player globally, alongside numerous startups showcasing their innovations.


At the conclusion of the press conference, MEA’s Randhir Jaiswal provided specific attendance figures, noting that “20 world leaders who attended this AI summit” were supported by “45 delegations represented at ministerial level from across the world” with “100 countries represented.” Vaishnaw had earlier highlighted that this was the first AI summit held in a Global South country, with significant representation from Africa and other Global South nations.


The summit’s success extended beyond mere attendance figures. When asked about concrete outcomes, Vaishnaw reported that investment pledges had already exceeded $250 billion for infrastructure-related investments and approximately $20 billion for venture capital deep tech investments, with numbers continuing to grow daily as he consulted with colleagues about exact figures. These commitments represent more than financial backing; they demonstrate global confidence in India’s role in the emerging AI landscape. The minister stressed that whilst the numbers are important, the underlying message is more significant: “the world has confidence on India’s role in the new AI age.”


The “Manav AI” Vision and Ethical Leadership

Central to India’s approach is Prime Minister Narendra Modi’s vision of “Manav AI” – artificial intelligence “of the humans, by the humans, for the humans.” This human-centric philosophy resonated strongly with international participants, with Vaishnaw noting that “practically every major AI player in the world” accepted this vision. The concept proved compelling across civilisations, countries, generations, and sectors because it prioritises humanity above technological advancement.


This vision underpinned India’s leadership in bringing responsible and ethical AI discussions to the forefront. The summit achieved a Guinness World Record by involving 2.5 lakh students in the AI journey, which Vaishnaw highlighted as a significant achievement demonstrating India’s commitment to youth engagement and democratic participation in technological development. This approach contrasts with more technocratic models of AI governance, positioning India as an advocate for inclusive and participatory AI development.


The ethical dimension was further reinforced through voluntary commitments from major AI companies. Vaishnaw highlighted this as a “major achievement,” noting the unprecedented nature of bringing all major AI players onto the same stage to agree on common principles. This represents a significant diplomatic success, particularly given the competitive nature of the global AI industry.


Technological Infrastructure and Sovereign Capabilities

When responding to questions about concrete achievements, Vaishnaw detailed how India’s AI Mission 1.0 has substantially exceeded its original targets, demonstrating the country’s rapid technological advancement. The mission initially aimed for 10,000 GPUs but has achieved 38,000, with another 20,000 planned for launch. Similarly, whilst the mission targeted two foundational models, India has developed a “bouquet of 12 models” that are multimodal and well-rated, with some performing better than OpenAI, DeepSeek, and Gemini Pro on various benchmarks. The AI safety infrastructure has also expanded beyond expectations, with 12 institutes now working in a network mode instead of the originally planned single institute.


These achievements are particularly noteworthy given resource constraints. Vaishnaw emphasised that in bilateral meetings with industry leaders, there was genuine surprise at “the quality of output with such few resources.” This frugal innovation approach challenges conventional assumptions about AI development requiring massive capital investment, suggesting that strategic resource allocation and engineering excellence can compete with frontier laboratories’ vast resources.


The development of sovereign AI models represents a crucial aspect of India’s technological independence. These models provide alternatives to foreign-developed AI systems, ensuring that India maintains control over critical AI infrastructure whilst demonstrating competitive capabilities in the global market.


Regulatory Framework and Global Standards

India’s regulatory approach has gained unexpected international acceptance, with the new Synthetic and Generated Intelligence (SGI) regulations receiving global endorsement. These regulations, set to go into effect, require transparency in AI-generated content, enabling users to distinguish between “real content or synthetic content” as Vaishnaw explained. He noted that many countries congratulated India for taking the first step in this direction and indicated intentions to implement similar frameworks, with major tech companies accepting these regulations as necessary.


The regulatory philosophy underlying these measures is straightforward, as Vaishnaw articulated: “what is illegal in the physical world is also illegal in the online world.” This principle extends constitutional mandates into digital spaces, creating consistency between offline and online governance. The minister reported no significant opposition to this approach, suggesting broad acceptance of the need for AI content transparency.


India’s data protection framework has also gained international recognition, with three countries expressing interest in adopting India’s template for their own data protection legislation. This represents a significant shift where India is leading regulatory innovation rather than following international standards, positioning the country as a regulatory trendsetter in the digital governance space.


Implementation Challenges and Inclusive Development

Despite the summit’s success, journalists raised critical concerns about how voluntary commitments and non-binding declarations would translate into concrete action. Vaishnaw addressed these implementation challenges by emphasising that “lots and lots of real action, real MOUs, real understanding” had emerged from the summit, suggesting that substantive collaborations were established beyond ceremonial agreements.


The minister’s commitment to inclusive development reflects India’s political philosophy of “Antyodaya” – serving the last person in society. He acknowledged that ensuring AI benefits reach every citizen would require “bahut mehnat” (significant effort) and close collaboration with state governments, who serve as the primary mechanism for grassroots-level implementation. This approach recognises that technological advancement must be accompanied by deliberate efforts to prevent digital divides from widening existing inequalities.


The democratisation of AI education emerged as a key priority, with plans for industry collaboration in developing curricula for schools and colleges. Vaishnaw mentioned the democratization approach, referencing a specific example of a 30-day Python program available for Rs. 199 that had been brought to his attention. The goal is to create practical, useful knowledge that serves industry needs whilst ensuring accessibility across different socioeconomic segments.


International Cooperation and Geopolitical Dynamics

The summit’s diplomatic achievements were substantial, with the Delhi Declaration progressing from 60 signatories at the previous summit to over 70 current signatories, with expectations of reaching 80+ countries. However, the discussion revealed underlying tensions in international AI governance. A BBC journalist highlighted that the U.S. delegation had “very strongly rejected calls for global governance in AI,” potentially contradicting the summit’s cooperative spirit.


This tension illustrates the complex geopolitical dynamics surrounding AI governance, where national sovereignty concerns compete with the need for coordinated international responses to AI’s global implications. India’s approach appears to focus on building consensus among willing partners rather than pursuing universal agreements that might be diluted by holdout countries.


Vaishnaw briefly mentioned that “We also had Pax Silica today which is very important for us from the semiconductor industry perspective from resilient supply chain resilient value chain perspective,” indicating another dimension of international cooperation focused on semiconductor supply chain resilience.


Future Directions and AI Mission 2.0

Looking forward, Vaishnaw announced plans for AI Mission 2.0, which will be “definitely bigger” than the initial mission. The success of Mission 1.0 in exceeding targets provides a foundation for more ambitious goals in the next phase. He indicated that the upcoming mission will focus on “new level of models, new level of common compute, new level of safety,” building on the collaborative relationships and “many collaborations agreed in the last few days” during the summit.


The semiconductor sector represents a parallel track of development, with Vaishnaw specifically mentioning the foundation laying for a new semiconductor plant in Uttar Pradesh and the Micron facility starting commercial production on the 28th. These developments support India’s broader technological ecosystem, providing the hardware foundation necessary for AI advancement.


Media Engagement and Democratic Discourse

The press conference itself demonstrated India’s commitment to transparent communication about AI policy. The systematic approach to media engagement, with questions organised by seating rows and comprehensive responses to diverse inquiries from various media organizations, reflects an open approach to public discourse about AI governance. Vaishnaw explicitly thanked the media for playing a “constructive role,” acknowledging their importance in facilitating public understanding of AI policy.


However, the discussion also revealed political tensions, with Vaishnaw criticising what he described as Congress party attempts to disrupt the summit. He praised youth participants for rejecting what he called “negative politics” and supporting the exhibition as their platform for engagement with AI technology. This political dimension was mentioned as part of his broader thanks to various stakeholders including media, organizers, MEA, Ministry of Home Affairs, and Delhi Police, highlighting how AI development has become embedded in broader political narratives about India’s technological progress and international standing.


Conclusion and Strategic Implications

The AI Impact Summit represents a significant milestone in India’s emergence as a global AI leader, combining technological capability with diplomatic influence and ethical leadership. The success in attracting international participation, securing investment commitments, and gaining acceptance for regulatory frameworks demonstrates India’s growing influence in global technology governance.


The emphasis on “Manav AI” and inclusive development distinguishes India’s approach from purely market-driven or security-focused AI strategies, potentially offering a “third way” that balances technological advancement with human welfare. The challenge now lies in translating summit achievements into concrete implementation that delivers benefits across Indian society whilst maintaining international cooperation and competitive technological capabilities.


The discussion reveals both the opportunities and complexities of AI governance in a multipolar world, where countries must balance sovereignty with cooperation, innovation with regulation, and technological advancement with social equity. India’s approach, as demonstrated through this summit and articulated in Vaishnaw’s detailed responses to media questions, suggests a model that prioritises human-centric development whilst building technological sovereignty and international partnerships.


Session transcript

Complete transcript of the session
Ashwini Vaishnaw

from the world. We had practically every major AI player in the world participating in large numbers. We had so many startups getting the opportunity to showcase their work. Overall, the quality of discussion was phenomenal. If you look at the ministerial dialogue, the leaders’ plenary, the main inauguration function, the summit, the quality of participation, the quality of dialogue was phenomenal. Honorable Prime Minister Narendra Modi’s vision of Manav AI, which is AI of the humans, by the humans, for the humans — I think that was very well accepted by practically every major AI player in the world. The ministerial, which we had, the bilateral, which… I and my colleague Sri Jitinji had. Everywhere, practically every minister resonated with this, and everybody felt happy that we have brought the discussion about responsible and ethical AI to the forefront by involving two and a half lakh students in this entire journey.

We had a Guinness World Record for that involvement of the students. We also have a lot of investment pledges. I was just asking Abhishekji and Krishnanji; I think the number is growing each day, so it has already crossed 250 billion dollars for the infra-related investments and about 20 billion dollars for the VC deep tech investments which have been committed by investors. This is a very important sign for us. The numbers are important, but what is important is that the world has confidence in India’s role in the new AI age. That’s very, very important for all of us, because as we have seen, there is always a need to bring out the talent that we have, bring out the energy that we have, in front of the world, so that the world recognizes that.

I would also like to share with you that the action summit, the previous one had about 60 signatories in the final declaration. We have already crossed 70. There are many ministers who are here and they are discussing with us. So I think by the time we close the summit tomorrow, we have, as you know, we have extended it by one more day. We believe that it will cross 80. All the major countries have already signed. If you feel that somebody has not signed, you need not speculate on that. All the important AI, people who matter in AI, they have all signed. We will be giving you the formal number tomorrow once the summit closes. That is the way it should be done.

That’s the right way of doing things. We also had many interesting episodes. I met a very young innovator today morning, and in the previous couple of days also, some very young people have done so much work in AI, which was very, very encouraging. Because if the youth sees that hope in this new world, the youth has that positivity about this technology. We also found very strong endorsement of our policy of working on all the five layers, and of our focus on having a sovereign bouquet of models. The models which were released — I tell you, in every bilateral that I had with the industry leaders, they were really surprised at the quality of output with such few resources. Compared with the kind of resources which some of the frontier labs have at their disposal, with such frugal resources our engineers and researchers have produced such good models, which is what gives huge, huge endorsement to our efforts.

I would also like to thank all the team members, all the stakeholders, right from the media to the organizers, from ITPO, and special thanks to the MEA and the Ministry of Home Affairs and Delhi Police for the effort they have tirelessly put in to make this a grand success. Thank you, everybody who participated in this. And also thanks to the youth who endorsed this, who took this so positively, that whatever little effort Congress made to try to disrupt the summit really came to nothing. The youth very clearly said that this is their exhibition. This is the summit for the youth who want to make the best use of it. They don’t believe in the negative politics that Congress was trying to play.

We had some bad episodes here, people coming into the exhibition, and we took immediate action against anybody who tried to demean the… demean the good work that is being done by our startups and by engineers, by our people who are working in the AI field. That said, we are a very open-minded government. We believe in taking your feedback. We believe in working with you. We believe in the goal of Viksit Bharat, and that’s why we would like to tirelessly work with you for this goal, the vision which our Prime Minister has given for our entire country, and we have to do it together. This has to be done by all the 140 crore of our citizens who believe in this common goal of Viksit Bharat, and these are steps in that direction.

Friends, tomorrow we will also be laying the foundation for our next… semiconductor plant here in Uttar Pradesh. I invite all of you to join that ceremony also. And on the 28th we will start commercial production from the Micron facility, which will be one of the largest facilities that Micron has — practically more than 10 cricket fields kind of facility, it’s very large — and that is going to be inaugurated on the 28th. So all these are very methodical, step-by-step moves in the direction of creating that foundation which our Prime Minister is laying for the young generation, for Viksit Bharat. For all of you who are watching this on TV or social media: our Prime Minister Shri Narendra Modi Ji is laying the foundation for the country, which will be a developed nation by 2047.

I’ll take questions, and like in the past, we’ll follow the first row.

Speaker 1

Thank you, sir. First of all, please identify yourself and your organization’s name before asking the question. And as sir has said, start from the left. Yes, please.

Audience

Hi, sir. I am Nishant Ketu from ANI. My question is, how do you see that India’s role in… useful tools for day-to-day Python development and Python work? So we have developed a program which is like absolutely beginner… Even if you have zero knowledge in Python, you can join this program. And in 30 days, you will be becoming a pro of Python. That also using AI. You will be not becoming a Python developer. You will be becoming a 10x Python developer. And the best part, like we have democratized this program. So this 30-day program is just for Rs. 199. Can you believe that? Prime Minister began this on the 16th when this program began. And today where we are.

What is the observation Prime Minister has given to you or indication? He has given to you. Sorry? Observation or indication that he has given to you about AI Impact Summit.

Speaker 1

Next. Please.

Audience

Hi, this is Deepak Ajwani from Economic Times Digital Team. I have one simple ask. Have there been certain guidelines, guardrails that have been put together by all the countries that got represented yesterday on the stage? on effective, ethical, and responsible use of AI. Is there a paper that you can bring about maybe tomorrow where all of you have agreed that this is at least the first set of blueprint which can be iterative later? Thank you. Hi, sir. Shauvik from Mint. Sir, two questions. One is on the participation from big tech companies. Have there been conversations? The global tech companies. Have there been conversations with global tech companies in terms of the role that they will play in India as far as public services are concerned?

Because each of them spoke about AI and its role in public services. And secondly, the models that were launched under the AI mission, they’ve also been backed. Is there a takeaway from the summit in terms of where they go from here? Thank you. So, Oyeek from Money Control. So, just wanted to ask you, yesterday you had, you had the frontier AI commitments. So, the declaration will also come tomorrow. So the frontier AI commitment is voluntary in nature. The Delhi Declaration, I’m assuming, is non-binding. So how do we ensure that this does not remain on paper? How do we ensure implementation?

Ashwini Vaishnaw

Can you repeat your question?

Audience

So I’m saying the frontier AI commitment is voluntary in nature, and the Delhi Declaration, whenever it comes, is non-binding. So how do we ensure that this does not remain on paper? Like the declarations, commitments made in the…

Speaker 1

Anybody else on front row? Anyone? Okay, please.

Audience

Sir, Ashish from Business Standard. All of the three previous summits had a focus area when it came to the declarations. If you could share just one line on what would be our focus area when the declaration is signed.

Speaker 1

Please. You are close? Yeah.

Audience

Hi, sir. Shubhan from the Economic Times. I understand that the declaration will be coming tomorrow, and as you mentioned, some 80-odd countries will maybe… The list may be as high as… Now it’s 70, maybe up to 80. I just wanted to understand, since some of the previous summits have seen significant difference of opinion, what were some of the areas where it was relatively easier to build consensus? And if possible, what were some of the areas where it took a little bit of time?

Speaker 1

Next. Last.

Audience

Hi, sir. We look AI-ready globally, but my question would be for the last person standing in India: how far and how long will it take to reach that one last person in India? How long will it take for AI to reach there? Very good question. Hi, sir. This is Lalit from Best Media Info. My question is, we have been seeing that traditional media sectors like TV, radio, and print have been fighting for AdEx, for advertising revenue, while digital is scaling up. Is there any way that AI or any policy can actually help bring balance in this revenue share of advertisement? My second question would be, there has been a long-pending TRP guidelines overhaul that was formulated.

Normally, you know, it was meant to bring a multi-agency system into the picture and the removal of landing pages. We just want to know at what stage those guidelines are, and can we expect the guidelines coming anytime soon? Sir, I am Prashant from AsiaNet News. There were very good sessions in this summit. How do you wish to take these down to the grassroots level, so that these sessions can help the lives of the common man?

Ashwini Vaishnaw

So, there are questions about where do we go from here? What will be the implementation? I’ll take all these questions one by one. I think, friends, the journey so far has been very meaningful, very methodical, starting from building the base and working through all the layers of the IC, and creating that foundational level of work. and now getting the entire world to come here, deliberate, interact with our industry. Now we’ll take the next level of our AI mission where we will be focusing and taking to a totally new level of models, a new level of common compute, new level of safety. We have so many collaborations agreed in the last few days, which is where I would like to address that point about paper versus real action.

Yes, there is lots and lots of real action, real MOUs, real understanding, which has happened in the past few days, where many of these things which concern us as well as the entire world will be worked on in a very collaborative manner. That is the… That is the real action which will come out of it. We will very soon start working on AI Mission 2.0, which will be definitely bigger than AI Mission 1.0. Many of the goals we had set for ourselves in Mission 1.0 are on the verge of getting completed, and many of them have actually been exceeded. We wanted about 10,000 GPUs. We have 38,000 already, and another 20,000 are very soon going to be launched.

We have foundational models. We were looking at two foundational models; we have a bouquet of 12 models, very multimodal and, I mean, very well rated. We wanted to have an AI safety institute; we now have 12 institutes working on this in a network mode. So all these goals that we had set for ourselves are getting implemented very rapidly. So now we have to set bigger goals, and achieve them as a part of AI Mission 2.0. Our Honourable Prime Minister has always led from the front. The vision of Manav AI that he gave yesterday is something which everybody resonated with, everybody accepted — in the ministerial dialogue, in the bilaterals, everywhere. People thought that for the first time they have heard a vision which is so compelling, and it just cuts across every civilization, every country. This is meaningful for everybody, every generation, every sector, every country, because ultimately it is humanity which matters the most, and that is why this vision resonated with everybody. Big tech participated very much in this, and the same with the participation of the startups and young innovators; it was very good participation. There is huge consensus on the declaration; we just want to maximize the numbers, and we in India are not going to spare that effort.

It’s so natural to do it. And given the size of this summit which has happened, it’s natural to set a number so that the record is always here. So that’s why we are trying to maximize that in a very… In fact, Abhishek… Do a little more work. He was thinking he would take a day off, but no, he’s not going to get a day off. So, do a little more work. A very important question which came is about diffusion, about the last person. How do we see the benefit reaching them? If you go to… rich countries, you will find that 5G is very patchy. It’s not the way it is in our country. We will put the same effort into this as well.

And this will require a lot of hard work, and we are prepared to put in that hard work, that effort. Our Prime Minister keeps inspiring us that we should not stop till the benefit reaches the last person in society. That is always our goal. And we in the BJP have always had this basic tenet of Antyodaya. We believe in inclusive growth. If you look at the Honourable Prime Minister’s programs, each and every program, whether it is Jan Dhan, whether it is Swachh Bharat, whether it is the construction of toilets, whether it is Har Ghar Nal Se Jal, each and every one has been created and executed to bring the benefits to the last person. We believe in inclusive growth as a basic political philosophy, and that is why here also the same political philosophy will be reflected.

I have absolutely no doubt about it, because this is a family in which inclusive growth is one of the most important tenets of our thought process. There were questions about guardrails. You might have seen that, for the first time, all the big AI players came on the same stage and agreed. Voluntary is one way of putting it, but of course we have discussed with them, and all of them have come to this consensus. Taking those first steps was very, very important, and I do not want to exaggerate this, but if you ask any major policy leader in the world, and I had so many meetings today, each and every one is surprised at how we could pull the entire AI industry together, coming forward and stating this openly.

It is a major, major achievement, and this kind of achievement shows how India can lead the thought process. We also had Pax Silica today, which is very important for us from the semiconductor industry perspective, from a resilient supply chain and resilient value chain perspective. And the fact that today, whether in Europe or in Australia or in the US or in Southeast Asia, we are everywhere seen as a trusted country, that itself speaks a lot about how our Prime Minister has conducted foreign policy, how he has developed that trust across every sector, every geography, every part of the world. On Youth Congress, I have already responded.

Speaker 1

Next room, just a second. Second room, please. We will come one by one. Yes, please.

Ashwini Vaishnaw

MIBC-related questions I will answer later today. We will talk about the AI mission next time.

Audience

Namaskar, sir. Thank you. These are my two questions.

Speaker 1

Anybody else?

Audience

Sejal Sharma from Hindustan Times.

Speaker 1

Just a second. Yes, please. Who is asking? Second row? Yeah, please.

Audience

Congratulations on the declaration, sir. I just wanted to know, could you give us the names of some of the countries that have signed the declaration already? Just a few. Good evening, sir. I’m an independent journalist. First, I want to know what the outcomes are from each of the seven working groups that were formed before the summit. Second, before the summit began, the Indian government focused on how India will lead the Global South. How has that materialized during the summit? Has it materialized in the form of bilateral conversations, any MOUs, any pacts being signed? And number three, today the SGI amendments are supposed to go into effect. We had all the big AI companies and big tech companies of the world here.

Has there been any discussion on that? Because the companies have been fairly critical, both off the record and on the record, about the compliance deadline, the three-year takedown window, and some of the provenance-related specifics of the SGI amendments. Sir, Manas from the Times of India. Sir, has the objective of a technological framework been achieved? How many countries are on board, and what is the reaction of big tech? And given the representation of the big tech companies, what is the government doing to ensure that we do not end up as merely the data and talent supplier?

Speaker 1

Last one in the second room; I think she has one more. Thank you.

Audience

Sir, Momita from PTI. I think everybody is curious to know the contours of the New Delhi Declaration. The focus and thrust for India has rightly been impact: the use cases and how it benefits the public. If you could just give us some colour on what the New Delhi Declaration’s contours look like. Which are the areas where consensus has already been reached, where 70 countries are coming together and supporting those causes? And how would it benefit Indians?

Speaker 1

Third room. Anybody in the third room? Okay. Please.

Audience

Momita from Outlook Business. Thank you. So I wanted to understand: recently, the French President urged India to be a part of the social media ban for those under the age of 15. Has there been some sort of consensus reached with other countries on this?

Speaker 1

Anybody in third row? Okay. Please.

Audience

Hello, sir. Himanshu Desai from Rajasthan Patrika. Sir, so I wanted to ask, like, what role will…

Ashwini Vaishnaw

Coming from Patrika, you should ask in Hindi. Give me a chance to answer in Hindi too.

Audience

Yes, of course. Sir, I wanted to ask… I have been reading Patrika since childhood. Sir, what I wanted to ask is: today we also saw the briefing by Dr. Mohan Yadav, the Chief Minister of Madhya Pradesh. So what will the overall plan for state governments be, and how will the central government work together with the state governments? Like, if we specially…

Speaker 4

State governments… Thank you. Thank you.

Audience

Hi, sir. Yaku Tali from DLU Hindi. Sir, my question to you is: what is the government doing about data protection? Because we are seeing that OpenAI, ChatGPT and Microsoft are getting access to all the data. Yesterday a notification was also sent saying that you can now share your contacts, and then it reaches out to your contacts. So don’t you think they are taking all Indians’ data?

Speaker 1

Hello, sir. Yes, right side.

Ashwini Vaishnaw

Yes, among them. Anybody else on this side? Third row? We are working with industry on that. In particular, for the course curriculum to be created for colleges and schools, industry inputs are coming in continuously, and as soon as it is finalized we will share it with you. We did the semiconductor curriculum together with industry, and the telecom one together with industry too, so we will do this one together with industry as well, so that the knowledge is relevant, practical and useful for industry. State governments will participate very closely in this, because ultimately the medium for reaching every person can only be the state governments. Sarvam has held its own against the rest on almost every benchmark, and in particular it is better on several benchmarks than OpenAI, than DeepSeek, and than Gemini’s Pro model; if you want, you can look at what they have released, on all the globally accepted parameters.

The new SGI regulations have been accepted, and everybody has said that this is a necessity for the country; many countries in the world are already talking about bringing in regulations in this direction. In fact, many countries have congratulated India for taking the first step, and in the coming time many more countries will adopt this. The main purpose is transparency: is it real content or synthetic content? That transparency is necessary so that you can decide for yourself whether to trust it or not. The second thing, which is also very important, is the principle that what is unlawful in society under the law and the Constitution is also unlawful in the online world.

What is illegal in the physical world is also illegal in the online world. It is a very natural constitutional mandate, so I did not find anyone who opposed it; if you find someone, do reach out to me. The techno-legal framework is growing very fast. I have already given a statement on children’s protection. The data protection framework is very strong. In fact, I do not want to take names, but in the meetings over these three days, three countries have already said that they want to make their data protection framework equal to India’s data protection framework, that your template is very good and a similar law should be made elsewhere too. The myth-busting part will go along with the training itself, so that for any point, its benefit is understood. Now let us move on. Fourth row, left side. Anybody in the fourth row? Please.

Audience

Namaskar, I am Sandeep from Prabhat Khabar, Jharkhand. With all the buzz around AI, it is children who most need training in today’s era. So will some kind of module or course be started in schools, so that children of seven, eight, ten years of age are given AI training? Is there any such scheme, any planning?

Speaker 1

Next. Go ahead. Next.

Audience

Thank you, sir. Arundeep from The Hindu. So just one question. You’ve had the opportunity to interact with a lot of leaders in AI and world leaders on a range of subjects. Does the government of India, after this event, believe that AGI is coming in the next two years? What is the government’s position on that, clearly? And if so, are we prepared for that as a country? And I lied, I have another question. Second question is, the next summit is going to be held in Switzerland. But given the response to this edition, is this something that we might do again in the coming future?

Speaker 1

Okay. Ajay.

Audience

Sir, the question is: the democratization India achieved with UPI, will we attempt the same in AI? And how has the whole world received this democratization, this open-source model? That is what we would like to understand. Thank you, sir.

Speaker 1

Fourth row, anybody else? Yes, please.

Audience

Hi, good evening, sir. Ashmit from CNBC TV18. Firstly, congratulations on the largest-ever AI summit. I had two questions, sir. Amongst the companies that were here, there were also the likes of NVIDIA and AMD. One concern: India is going for a data centre build-out, as was evident from the large commitments we have seen, and the cost of compute, the cost of chips, is something that constantly kept coming up in conversations. Has that been discussed? Are there any material assurances or gains for India under the Pax Silica arrangement? That’s one. Second, you spoke earlier about diffusion. I just want a little clarity as part of the Mission 2.0 that you made reference to.

For a lot of these AI-for-social-purpose applications, the ROI may not be immediately available to the developer. In such a case, is the government willing to step in under Mission 2.0 with some form of support or viability gap funding? Okay, can I ask? I’m Surabhi from the Economic Times, sir. Two questions. One, I wanted to understand from you: from when we launched the first version of the India AI Mission to now, a lot has changed in the AI ecosystem. So what are going to be the main focus areas of the next phase of the AI Mission? Secondly, I know you want to talk about the declaration tomorrow and not today, but I wanted to understand: you have had meetings with the biggest names of AI as far as AI leaders are concerned.

What are some of the things, discussion points that have come up? What have been some of the asks that they have made to you and you have made to them as far as their contributions to India are concerned?

Speaker 1

Mr. Roshan, no matter how many questions you ask, they will be answered together. Fifth row, anybody? Less than two minutes left.

Audience

Good evening, Mr. Minister. This is Arunodai Mukherjee from the BBC. I just wanted to draw your attention to the U.S. delegation, which was here earlier today. They have very strongly rejected calls for global governance of AI. I wanted your response to that. And doesn’t that go against what this entire summit was all about: charting a unified path towards global governance? How would you respond to that?

Speaker 1

Yes. Thanks. Amrit Pal.

Audience

Minister, this is Amrit Pal from DD India. The IMF chief today said that AI could lift global growth by a percentage point and help India achieve… How is the government preparing to deal with that? My question to you is: in the face of rising deepfakes and sophisticated AI-driven misleading information, how does the government ensure accountability without touching the ease of doing business for startups?

Speaker 1

Back side. Brahma Prakash from Zee News. Thank you. Next. Yes, please.

Audience

So my question is related to the declaration. I understand that you want to talk about it tomorrow, but could you throw some light on whether there is some sort of consensus on demarcating high-risk AI, or will that be left to national governments to decide and demarcate? Thank you.

Speaker 1

Yes, please, on the left side. Please pass on.

Audience

Sir, my question is in regard to the Global South. Since this was the first summit to be held in a Global South country, we saw significant representation; there was an Africa AI village. So my question is: will Global South priorities, in terms of how AI should be developed, be reflected in the joint statement? And what, according to you, are the major takeaways for the broader Global South? And Prime Minister Modi also, you know, championed the inclusion of the African Union during the G20 summit. Thank you.

Speaker 1

Anyone else? Yes, please. Back side, one person is left.

Audience

Hi, sir. Jatin Grover from Mint. A couple of questions. I wanted to understand: were there any discussions with the participating nations, maybe to create a G20-like group, to help in creating some sort of binding agreements with the nations on the AI declarations? That’s one. And till a few minutes back, at the ATL conference, you talked about having a legal framework to address cybercrime arising from AI. Can you please elaborate more on that? What kind of legal framework is the government looking at? Thank you.

Speaker 1

Anybody else wants to ask a question? Otherwise we are closing. Okay, okay.

Audience

Hi, sir, from the Economic Times. This is about the 12 foundation models that the India AI Mission is backing. We have launched three of them; do you have visibility on when the remaining nine will be launched? And have you finalized the terms of the agreements with these companies on how much the Government of India will be getting in terms of equity, etc.?

Speaker 1

Last question, at the back side.

Audience

Good evening, sir. I am Shreyas Bharadwaj from IIM Indore and IIT Indore. I am an independent journalist but also a student of master’s classes in data science and management. Thank you for letting me speak. My question has two aspects. One: running such a large undertaking must have brought many challenges, so what has been the government’s biggest learning from the AI Impact Summit 2026, as a learner, as a lifelong learner? Number two: what is the biggest lesson the government has taken in tech from this entire summit? That’s my question. Thank you, sir.

Ashwini Vaishnaw

Okay, thank you very much. These questions were no less than the UPSC’s. First I will take Pax Silica; that is very important, because we are trying to create, to develop, the complete ecosystem of the semiconductor industry in our country. To get that ecosystem, it is very important that all the major players, the major countries where the ecosystem currently resides, also support and encourage our journey. That is why it is very, very important that we had Pax Silica signed today. From all the discussions we had, it very clearly emerged that the world looks at India as a trusted partner for the semiconductor supply chain, which means the way the semiconductor industry will grow in our country in the coming years, it will emerge as a major sector.

It is a very important sector, and that was very clearly evident from the discussions. The same thing will apply to… Do you know that in 2026, the highest-paid people in industry are not MBAs or fancy degree holders; they are agentic… I recall two meetings in which people are looking at reducing power consumption by at least 50% and reducing cost significantly. Some people even said a fraction of the cost of current chips. That kind of innovation is happening, and India will be a big beneficiary of it, because we are starting our design and semiconductor journey at a point where we can use all the benefits we know about AI and optimize our chip design for the new age.

We are not bound by the legacy of the past. We can actually make a new beginning, which is what we have challenged our startups with in Semicon 2.0, where we want a series of deep tech startups designing chips. I have spoken about the next steps. I have spoken about education, democratization, diffusion, ROI. Yes, I believe that ROI will come from the applications, mostly the enterprise use cases, which are visible here in large numbers. I read one story in the digital version of one of the big channels where this point was brought out very clearly: while people mostly focus on consumer-facing applications, the large number of enterprise solution providers participating in this exhibition is very important, both from the jobs perspective, from the IT industry’s health perspective, and from the direction India will take as a major player in the AI world going forward.

Yes, we have a comprehensive plan. As we have maintained right from day one, every sector will benefit from this. On cyber security, so many sessions have happened. We just inaugurated one research institute between Zscaler and Airtel, and many more such initiatives are going to happen in the coming time frame. When the text of the declaration comes out, you will be able to see its contours. The Global South, of course, participated in large numbers and is very interested in collaborating with us; that level of trust is there. As for when the next models will be launched, we will keep sharing with you as we progress. We had committed one; we have done three.

So it is a journey that we will keep sharing with you. Learnings, many. One very surprising learning was that when so many good things are happening, one small thing can still be highlighted so much; that was a personal learning for me. It was also a learning for me that some people in politics, some of them in the opposition, do not even understand what today’s youth want, and they try to create things which are really sad in a way and funny in another, and who can explain it to them, I do not know. There are many learnings, and we will use them to improve all future editions, because this was held at a very large scale.

As I said, five lakh plus visitors have already come. We were just doing the estimate; I think the actual number is about six lakh, but we are being very conservative, and only what is measured is what we would like to share with you. That kind of participation is there. In the end, I would like to thank the MEA, because they have been a very important partner for us; your role has been stellar. I would also like to thank Delhi Police, all the security personnel who were present throughout, and all the friends of the media: you played a very constructive role. A big round of applause for the media. Thank you, friends. Randhir.

Randhir Jaiswal

Thank you, sir. It has been a pleasure for us in the Ministry of External Affairs to work along with MeitY as Team India to put our best foot forward for the world. This event has been a success, may I say a grand success. We heard the world leaders who were here: 20 world leaders attended this AI summit. In addition, we had 45 delegations represented at ministerial level from across the world, and 100 countries represented in all. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Ashwini Vaishnaw
22 arguments, 122 words per minute, 3272 words, 1600 seconds
Argument 1
India hosted a phenomenally successful AI summit with major global AI players and startups participating
EXPLANATION
Ashwini Vaishnaw emphasized that India successfully organized a major AI summit that attracted significant global participation from both established AI companies and emerging startups. He highlighted the high quality of discussions and participation across various summit events.
EVIDENCE
Practically every major AI player in the world participated in large numbers, quality of discussion was phenomenal across ministerial dialogue, leaders plenary, main inauguration function, and summit events
MAJOR DISCUSSION POINT
AI Summit Success and Global Participation
AGREED WITH
Randhir Jaiswal
Argument 2
The summit achieved a Guinness World Record for involving 2.5 lakh students in the AI journey
EXPLANATION
The summit set a world record by engaging 250,000 students in AI-related activities, demonstrating India’s commitment to involving youth in the AI transformation. This massive student participation was seen as bringing responsible and ethical AI discussions to the forefront.
EVIDENCE
We had a Guinness World Record for that involvement of the students by involving two and a half lakh students in this entire journey
MAJOR DISCUSSION POINT
AI Summit Success and Global Participation
Argument 3
The declaration signatories increased from 60 in previous summit to over 70, expected to reach 80
EXPLANATION
Vaishnaw reported significant growth in international support for the summit’s declaration, with participation increasing from 60 countries in the previous action summit to over 70, with expectations to reach 80 by the summit’s conclusion. He emphasized that all major countries and important AI stakeholders have already signed.
EVIDENCE
The action summit, the previous one had about 60 signatories in the final declaration. We have already crossed 70 and believe it will cross 80. All the major countries have already signed.
MAJOR DISCUSSION POINT
AI Summit Success and Global Participation
Argument 4
PM Modi’s vision of ‘Manav AI’ (AI of humans, by humans, for humans) was well-accepted by major AI players globally
EXPLANATION
Vaishnaw highlighted that Prime Minister Modi’s vision of ‘Manav AI’ received widespread acceptance from global AI industry leaders. This human-centric approach to AI development was well-received in bilateral meetings and ministerial dialogues.
EVIDENCE
Honorable Prime Minister Narendra Modi’s vision of Manav AI, which is AI of the humans, by the humans, for the humans, was very well accepted by practically every major AI player in the world. In the ministerial and bilateral meetings, practically every minister resonated with this
MAJOR DISCUSSION POINT
Prime Minister Modi’s ‘Manav AI’ Vision
Argument 5
The vision resonated across every civilization, country, generation, and sector as it prioritizes humanity
EXPLANATION
Vaishnaw explained that the ‘Manav AI’ vision had universal appeal because it places humanity at the center of AI development. He emphasized that this approach was meaningful for all stakeholders regardless of their background or sector.
EVIDENCE
First time they have heard a vision which is so compelling and it just cuts across every civilization every country this is meaningful for everybody every generation, every sector every country because ultimately it’s the humanity which matters the most
MAJOR DISCUSSION POINT
Prime Minister Modi’s ‘Manav AI’ Vision
Argument 6
This approach brings responsible and ethical AI discussion to the forefront
EXPLANATION
By involving students and promoting the ‘Manav AI’ vision, India positioned responsible and ethical AI development as a central theme of global AI discourse. This approach was seen as a significant contribution to the international AI governance conversation.
EVIDENCE
Everybody felt happy that we have brought the discussion about responsible and ethical AI to the forefront by involving two and a half lakh students in this entire journey
MAJOR DISCUSSION POINT
Prime Minister Modi’s ‘Manav AI’ Vision
Argument 7
Investment pledges exceeded $250 billion for infrastructure-related investments and $20 billion for VC deep tech investments
EXPLANATION
Vaishnaw reported substantial investment commitments made during the summit, with over $250 billion pledged for infrastructure development and $20 billion for venture capital investments in deep technology. He noted these numbers were continuing to grow daily.
EVIDENCE
It’s already crossed 250 billion dollars for the infra-related investments and about 20 billion dollars for the VC deep tech investments which have been committed by investors. The numbers are growing each day
MAJOR DISCUSSION POINT
Investment Commitments and Economic Impact
Argument 8
The world has confidence in India’s role in the new AI age, which is crucial for showcasing India’s talent and energy
EXPLANATION
Vaishnaw emphasized that the investment commitments and global participation demonstrated international confidence in India’s capabilities in the AI sector. He viewed this as validation of India’s talent and potential in the emerging AI economy.
EVIDENCE
What is important is the world has confidence on India’s role in the new AI age. There is always a need to bring out the talent that we have, bring out the energy that we have in front of the world so that the world recognizes that
MAJOR DISCUSSION POINT
Investment Commitments and Economic Impact
Argument 9
India is seen as a trusted partner for semiconductor supply chain development globally
EXPLANATION
Through the Pax Silica agreement and discussions during the summit, Vaishnaw highlighted that India has emerged as a trusted partner for global semiconductor supply chain development. This positioning is crucial for India’s semiconductor industry growth.
EVIDENCE
From all the discussions that we had, it very clearly emerged that the world looks at India as clearly a trusted partner for semiconductor supply chain, which means the way semiconductor industry will grow in our country in the coming years
MAJOR DISCUSSION POINT
Investment Commitments and Economic Impact
Argument 10
AI Mission 1.0 exceeded targets: achieved 38,000 GPUs instead of planned 10,000, developed 12 foundational models instead of 2
EXPLANATION
Vaishnaw reported that India’s first AI Mission significantly exceeded its original targets across multiple metrics. The mission delivered nearly four times the planned GPU capacity and six times the number of foundational models initially envisioned.
EVIDENCE
We wanted about 10,000 GPUs. We have 38,000 already and another 20,000 very soon going to be launched. We were looking at two foundational models. We have a bouquet of 12 models and very multimodal, reasonably, very well rated
MAJOR DISCUSSION POINT
India’s AI Mission Progress and Future Plans
Argument 11
Plans for AI Mission 2.0 will set bigger goals with focus on enhanced models, compute capacity, and safety measures
EXPLANATION
Building on the success of the first mission, Vaishnaw announced plans for an expanded AI Mission 2.0 that will establish more ambitious targets. The new mission will focus on advancing AI models, increasing compute infrastructure, and strengthening safety protocols.
EVIDENCE
We will be very soon start working on the AI mission 2.0, which will be definitely bigger than what it was in the AI mission 1.0. We wanted to have an AI safety institute. We have now 12 institutes working on this in a network mode
MAJOR DISCUSSION POINT
India’s AI Mission Progress and Future Plans
Argument 12
India’s sovereign AI models produced high-quality output with limited resources, surprising industry leaders
EXPLANATION
Vaishnaw emphasized that India’s domestically developed AI models achieved impressive results despite having significantly fewer resources compared to major international AI labs. This efficiency impressed global industry leaders during bilateral meetings.
EVIDENCE
In every bilateral that I had with the industry leaders, they are really surprised at the quality of output with such few resources, with such frugal resources our engineers and researchers have produced such good models
MAJOR DISCUSSION POINT
India’s AI Mission Progress and Future Plans
Argument 13
Real MOUs and collaborations were established during the summit, ensuring concrete action beyond paper commitments
EXPLANATION
Vaishnaw addressed concerns about implementation by emphasizing that the summit resulted in actual memorandums of understanding and collaborative agreements rather than just symbolic declarations. He highlighted that these represent genuine commitments for future cooperation.
EVIDENCE
There is lots and lots of real action, real MOUs, real understanding, which has happened in the past few days, where many of these things which concern us as well as the entire world will be working in a very collaborative manner
MAJOR DISCUSSION POINT
Implementation and Regulatory Framework
Argument 14
New SGI regulations for synthetic content transparency have been accepted globally as necessary
EXPLANATION
Vaishnaw reported that India’s new regulations requiring transparency in synthetic content generation have received global acceptance and support. He emphasized that these regulations aim to help users distinguish between real and AI-generated content.
EVIDENCE
The new regulations of SGI have been accepted and everybody has told that this is a necessity of the country and many countries in the world are already talking about bringing regulations in this direction. Many countries have congratulated India
MAJOR DISCUSSION POINT
Implementation and Regulatory Framework
Argument 15
India’s data protection framework is being adopted as a template by other countries
EXPLANATION
Vaishnaw revealed that multiple countries have expressed interest in adopting India’s data protection framework as a model for their own legislation. This recognition demonstrates the strength and comprehensiveness of India’s approach to data governance.
EVIDENCE
Three countries have said that they want to make their data protection framework equal to India’s data protection framework. Already, in today’s and tomorrow’s meetings, 3 countries have said that your template is very good
MAJOR DISCUSSION POINT
Implementation and Regulatory Framework
Argument 16
Benefits must reach the last person in society, following the principle of ‘Antyodaya’ (inclusive growth)
EXPLANATION
Vaishnaw emphasized India’s commitment to ensuring AI benefits reach all segments of society, particularly the most disadvantaged. He referenced the principle of ‘Antyodaya’ which focuses on uplifting the last person in society as a core tenet of the government’s approach.
EVIDENCE
Our Prime Minister keeps inspiring us that for us, we should not stop till the benefit reaches the last person in the society. We believe in inclusive growth as a basic political philosophy. Each and every program has been created and executed to bring the benefits
MAJOR DISCUSSION POINT
Inclusive Growth and Democratization
AGREED WITH
Speaker 1
Argument 17
State governments will play crucial roles in diffusing AI benefits to grassroots level
EXPLANATION
Vaishnaw highlighted that state governments will be essential partners in ensuring AI benefits reach the grassroots level across India. He emphasized the need for close collaboration between central and state governments for effective implementation.
EVIDENCE
State governments will participate very closely in this because ultimately reaching the people can only happen through state governments. We will work very closely with state governments
MAJOR DISCUSSION POINT
Inclusive Growth and Democratization
Argument 18
Focus on democratizing AI education and making it accessible to all sections of society
EXPLANATION
Vaishnaw emphasized the government’s commitment to making AI education and training accessible to all segments of society. This includes developing programs that can accommodate people with varying levels of technical background and ensuring widespread access to AI learning opportunities.
EVIDENCE
We are working with industry on course curriculum for colleges and schools. Industry inputs are continuously coming and we will share the final version with you. We have democratized programs, making them accessible at very low costs
MAJOR DISCUSSION POINT
Inclusive Growth and Democratization
Argument 19
Media played a constructive role in covering the event and facilitating meaningful dialogue
EXPLANATION
Vaishnaw acknowledged and praised the media’s positive contribution to the summit’s success. He recognized their role in facilitating constructive coverage and meaningful dialogue around the AI summit and its outcomes.
EVIDENCE
All the friends of media, you played a very constructive role. A big round of applause for the media. Thank you, friends.
MAJOR DISCUSSION POINT
Media Coverage and Event Organization
AGREED WITH
Speaker 1
Argument 20
Pax Silica agreement signed to strengthen semiconductor ecosystem and supply chain resilience
EXPLANATION
Vaishnaw announced the signing of the Pax Silica agreement during the summit, which aims to strengthen the semiconductor ecosystem and enhance supply chain resilience. This agreement represents a significant step in developing India's semiconductor capabilities.
EVIDENCE
We also had Pax Silica today, which is very important for us from the semiconductor industry perspective, from a resilient supply chain and resilient value chain perspective
MAJOR DISCUSSION POINT
Semiconductor Industry Development
Argument 21
Foundation laying for new semiconductor plant and commercial production from Micron facility scheduled
EXPLANATION
Vaishnaw announced upcoming milestones in India’s semiconductor development, including the foundation laying for a new semiconductor plant and the start of commercial production from a major Micron facility. He described the Micron facility as one of the largest globally.
EVIDENCE
Tomorrow we will be laying the foundation for our next semiconductor plant here in Uttar Pradesh, and on the 28th we'll start the commercial production from the Micron facility, and that will be one of the largest facilities that Micron has, practically more than 10 cricket fields
MAJOR DISCUSSION POINT
Semiconductor Industry Development
Argument 22
India positioned to benefit from AI-optimized chip design without legacy constraints
EXPLANATION
Vaishnaw highlighted that India’s entry into semiconductor design and manufacturing at this stage allows the country to leverage AI optimization from the beginning, without being constrained by legacy systems. This positioning could provide significant competitive advantages.
EVIDENCE
India will be a big beneficiary of that innovation because we are starting our design and semiconductor journey at a point where we can use all the benefits that we know about AI and optimize our design of chips according to the new age. We are not bound by the legacy of the past
MAJOR DISCUSSION POINT
Semiconductor Industry Development
Randhir Jaiswal
1 argument · 135 words per minute · 85 words · 37 seconds
Argument 1
20 world leaders attended with 45 delegations at ministerial level and 100 countries represented
EXPLANATION
Randhir Jaiswal provided specific numbers highlighting the high-level international participation in the AI summit. He emphasized the scale of global representation, with world leaders, ministerial delegations, and broad country participation demonstrating the summit’s international significance.
EVIDENCE
We had 20 world leaders who attended this AI summit. In addition, we had 45 delegations represented at ministerial level from across the world. We also had 100 countries represented
MAJOR DISCUSSION POINT
AI Summit Success and Global Participation
AGREED WITH
Ashwini Vaishnaw
Audience
2 arguments · 154 words per minute · 2430 words · 946 seconds
Argument 1
Questions raised about specific aspects of the summit including participation, guidelines, and future implementation
EXPLANATION
Various audience members, representing different media organizations, raised detailed questions about multiple aspects of the AI summit. These included inquiries about global tech company participation, regulatory frameworks, implementation mechanisms, and the summit’s impact on various sectors.
EVIDENCE
Questions from ANI, Economic Times Digital Team, Mint, Money Control, Business Standard, Hindustan Times, PTI, BBC, and other media organizations covering topics like frontier AI commitments, Delhi Declaration, global tech company roles, guardrails, and consensus building
MAJOR DISCUSSION POINT
Media Coverage and Event Organization
Argument 2
Inquiries about the role of global tech companies in India’s public services and AI model development
EXPLANATION
Media representatives specifically questioned the government about conversations with global technology companies regarding their role in India’s public services and the development of AI models under India’s AI mission. These questions focused on practical implementation and collaboration aspects.
EVIDENCE
Questions about conversations with global tech companies in terms of their role in public services, models launched under AI mission, and where they go from here
MAJOR DISCUSSION POINT
Media Coverage and Event Organization
Speaker 1
2 arguments · 91 words per minute · 195 words · 127 seconds
Argument 1
Facilitated orderly question and answer session during the press conference
EXPLANATION
Speaker 1 served as a moderator during the press conference, managing the flow of questions from media representatives. They ensured proper identification of journalists and their organizations before questions were asked, and maintained order by directing questions row by row.
EVIDENCE
First of all, please identify yourself and your organization’s name before asking the question. And as sir has said, start from the left. Next. Please. Second row? Yeah, please. Third row. Anybody in the third row?
MAJOR DISCUSSION POINT
Media Coverage and Event Organization
AGREED WITH
Ashwini Vaishnaw
Argument 2
Ensured comprehensive media participation across multiple rows and sections
EXPLANATION
Speaker 1 systematically managed media participation by organizing questions from different seating sections and rows. They ensured that journalists from various positions in the venue had opportunities to ask questions, promoting inclusive media engagement.
EVIDENCE
Fourth row, anybody else? Fifth row, anybody? Backside. Brahma Prakash. Anyone else? Yes, please. Back side, one person is left. Last question at the back side
MAJOR DISCUSSION POINT
Media Coverage and Event Organization
AGREED WITH
Ashwini Vaishnaw
Speaker 4
1 argument · 17 words per minute · 10 words · 34 seconds
Argument 1
Provided brief interjection during the press conference
EXPLANATION
Speaker 4 made a short intervention during the press conference, though the specific content was limited to brief acknowledgments. Their participation appears to have been minimal but part of the overall event management.
EVIDENCE
State governments… Bawal chak. Bawal chak. Thank you. Thank you.
MAJOR DISCUSSION POINT
Media Coverage and Event Organization
Agreements
Agreement Points
AI Summit achieved unprecedented global participation and success
Speakers: Ashwini Vaishnaw, Randhir Jaiswal
India hosted a phenomenally successful AI summit with major global AI players and startups participating; 20 world leaders attended with 45 delegations at ministerial level and 100 countries represented.
Both speakers emphasized the exceptional scale and success of India’s AI summit, with Vaishnaw highlighting the quality of participation from major AI players and startups, while Jaiswal provided specific numbers showing high-level international representation with 20 world leaders, 45 ministerial delegations, and 100 countries participating.
Media played constructive role in summit coverage
Speakers: Ashwini Vaishnaw, Speaker 1
Media played a constructive role in covering the event and facilitating meaningful dialogue; facilitated an orderly question and answer session during the press conference.
Both speakers acknowledged the positive contribution of media, with Vaishnaw explicitly praising their constructive role in coverage and dialogue, while Speaker 1 demonstrated this through systematic facilitation of media participation and ensuring comprehensive coverage opportunities.
Importance of inclusive participation and systematic organization
Speakers: Ashwini Vaishnaw, Speaker 1
Benefits must reach the last person in society, following the principle of ‘Antyodaya’ (inclusive growth); ensured comprehensive media participation across multiple rows and sections.
Both speakers demonstrated commitment to inclusive participation – Vaishnaw through his emphasis on reaching the last person in society with AI benefits, and Speaker 1 through systematic organization ensuring all media representatives had equal opportunities to participate in the press conference.
Similar Viewpoints
Both speakers shared pride in India’s successful hosting of a major international AI summit, emphasizing the high level of global participation and the summit’s significance in positioning India as a leader in AI governance and international cooperation.
Speakers: Ashwini Vaishnaw, Randhir Jaiswal
India hosted a phenomenally successful AI summit with major global AI players and startups participating; the declaration signatories increased from 60 in the previous summit to over 70, expected to reach 80; 20 world leaders attended with 45 delegations at ministerial level and 100 countries represented.
Both the minister and media representatives showed concern for practical implementation and accessibility of AI benefits, with Vaishnaw emphasizing democratization of AI education and the audience asking detailed questions about how benefits would reach different sectors and populations.
Speakers: Ashwini Vaishnaw, Audience
Focus on democratizing AI education and making it accessible to all sections of society; questions raised about specific aspects of the summit including participation, guidelines, and future implementation.
Unexpected Consensus
Global acceptance of India’s regulatory framework
Speakers: Ashwini Vaishnaw
New SGI regulations for synthetic content transparency have been accepted globally as necessary; India’s data protection framework is being adopted as a template by other countries.
It was unexpected that India’s regulatory approaches, particularly in synthetic content transparency and data protection, would gain such widespread international acceptance and be adopted as templates by other countries. This represents a significant shift where India is leading global regulatory standards rather than following them.
Universal resonance with human-centric AI vision
Speakers: Ashwini Vaishnaw
PM Modi’s vision of ‘Manav AI’ (AI of humans, by humans, for humans) was well-accepted by major AI players globally; the vision resonated across every civilization, country, generation, and sector as it prioritizes humanity.
The unexpected consensus around PM Modi’s ‘Manav AI’ vision across diverse global stakeholders, including major tech companies and different civilizations, was surprising given the typically competitive and fragmented nature of international AI governance discussions.
Overall Assessment

The transcript reveals strong consensus around India’s successful hosting of the AI summit, the importance of inclusive growth and democratization of AI benefits, the need for responsible AI governance, and the value of international cooperation in AI development. There was also agreement on the constructive role of media and the importance of systematic organization.

High level of consensus among speakers, particularly between government officials, with media representatives asking constructive questions that aligned with government priorities. The consensus suggests India has successfully positioned itself as a trusted leader in global AI governance, with implications for future international AI cooperation frameworks and India’s role in shaping global digital governance standards.

Differences
Different Viewpoints
Unexpected Differences
Opposition party disruption attempts during AI summit
Speakers: Ashwini Vaishnaw
Thanks to the youth who endorsed this, who took this so positively that whatever little effort that Congress made for trying to disrupt the summit was really, really, I mean, the youth very clearly said that this is their exhibition
While not a direct disagreement between speakers in this transcript, Vaishnaw unexpectedly brought up political opposition from the Congress party attempting to disrupt the summit, which was not anticipated given the technical nature of the AI summit discussion
Overall Assessment

This transcript shows no significant disagreements between speakers as it follows a press conference format with a government minister presenting achievements and answering media questions

Minimal disagreement level – The discussion was largely consensual with the minister presenting India’s AI summit success and other speakers either supporting or facilitating the presentation. The only notable tension mentioned was external political opposition rather than disagreement among the present speakers

Takeaways
Key takeaways
India successfully hosted a landmark AI summit with unprecedented global participation – 20 world leaders, 45 ministerial delegations, and 100 countries represented
PM Modi’s ‘Manav AI’ vision (AI of humans, by humans, for humans) gained widespread international acceptance and positioned India as a thought leader in responsible AI
Significant investment commitments secured: over $250 billion for infrastructure and $20 billion for VC deep tech investments, demonstrating global confidence in India’s AI capabilities
India’s AI Mission 1.0 exceeded all targets – achieved 38,000 GPUs vs the planned 10,000, developed 12 foundational models vs the planned 2, and established 12 AI safety institutes
India positioned as a trusted partner in the global semiconductor supply chain through the Pax Silica agreement
Declaration signatories increased from 60 (previous summit) to over 70, with an expectation to reach 80, showing growing consensus on AI governance
India’s sovereign AI models demonstrated high-quality output with limited resources, surprising global industry leaders
New SGI regulations for synthetic content transparency gained global acceptance as a necessary framework
India’s data protection framework is being adopted as a template by other countries
Strong emphasis on inclusive growth, ensuring AI benefits reach the last person in society through state government partnerships
Resolutions and action items
Launch AI Mission 2.0 with bigger goals focusing on enhanced models, compute capacity, and safety measures
Lay the foundation for a new semiconductor plant in Uttar Pradesh and start commercial production at the Micron facility on the 28th
Finalize and release the New Delhi Declaration with an expected 80+ country signatories
Implement the real MOUs and collaborations established during the summit for concrete action beyond paper commitments
Develop AI curriculum for schools and colleges in collaboration with industry
Work with state governments to ensure grassroots-level diffusion of AI benefits
Continue development and launch of the remaining 9 foundational models from the committed 12
Establish a comprehensive legal framework to address AI-related cyber crimes
Implement democratized AI education programs making the technology accessible to all sections of society
Unresolved issues
Specific timeline for the launch of the remaining 9 foundational models not clearly defined
Details of equity agreements between the government and AI model development companies not finalized
Concrete mechanisms for ensuring binding implementation of voluntary AI commitments and non-binding declarations
Specific framework for social media regulations for users under 15 years (mentioned by the French President) not addressed
Detailed contours and focus areas of the New Delhi Declaration deferred to the next day’s announcement
Specific measures to prevent India from becoming merely a data and talent supplier to global tech companies
Timeline and specific mechanisms for AI benefits to reach the grassroots level not clearly outlined
Preparedness for a potential AGI (Artificial General Intelligence) arrival in the next two years not definitively addressed
Suggested compromises
Voluntary nature of frontier AI commitments accepted as a starting point while working toward stronger consensus
Maximizing declaration signatories through continued diplomatic efforts rather than rushing to closure
Balancing open innovation with necessary regulatory frameworks for synthetic content transparency
Taking a collaborative approach with global partners while maintaining India’s sovereign AI model development
Working with industry inputs on curriculum development while maintaining educational accessibility goals
Accepting that some countries may not sign the declaration while focusing on securing commitments from major AI stakeholders
Thought Provoking Comments
Honorable Prime Minister Narendra Modi’s vision of Manav AI, which is AI of the humans, by the humans, for the humans. I think that was very well accepted by practically every major AI player in the world.
This comment introduces a humanistic philosophical framework for AI development that goes beyond technical specifications to focus on human-centered values. It reframes AI development from a purely technological pursuit to one grounded in human welfare and agency.
This vision became a recurring theme throughout the discussion, with Vaishnaw repeatedly referencing how this concept resonated with international participants. It established the ideological foundation for India’s approach to AI governance and influenced subsequent discussions about responsible AI development.
Speaker: Ashwini Vaishnaw
We also found very strong endorsement of our policy of working on all the five layers and our focus on having a sovereign bouquet of models… they are really surprised at the quality of output with such few resources
This comment challenges the conventional wisdom that AI excellence requires massive resources, highlighting India’s frugal innovation approach. It suggests that resource constraints can drive more efficient and innovative solutions.
This shifted the conversation from resource availability to resource optimization, demonstrating that developing nations can compete in AI through strategic approaches rather than just capital investment. It influenced discussions about democratization and accessibility of AI technology.
Speaker: Ashwini Vaishnaw
The frontier AI commitment is voluntary in nature, and the Delhi Declaration, I’m assuming, is non-binding. So how do we ensure that this does not remain on paper? How do we ensure implementation?
This question cuts to the heart of international cooperation challenges, questioning the effectiveness of voluntary commitments and non-binding agreements in addressing global AI governance issues.
This question forced a deeper discussion about the practical mechanisms for ensuring accountability in international AI cooperation. It shifted the conversation from celebrating agreements to examining their enforceability and real-world impact.
Speaker: Audience member from Money Control
For the last person standing in India, how far and how long it will take to reach to that one last person in India? How long will it take for AI to reach there?
This question highlights the critical issue of digital divide and inclusive development, challenging the assumption that technological advancement automatically benefits all segments of society.
This question prompted Vaishnaw to discuss India’s philosophy of ‘Antyodaya’ (serving the last person) and inclusive growth, shifting the conversation from technological achievements to social impact and equity considerations.
Speaker: Audience member
Does the government of India, after this event, believe that AGI is coming in the next two years? What is the government’s position on that, clearly? And if so, are we prepared for that as a country?
This question addresses one of the most significant and uncertain aspects of AI development – the timeline and implications of Artificial General Intelligence, forcing discussion of preparedness for transformative technological change.
While not directly answered in detail, this question introduced the urgency of preparing for potentially disruptive AI developments and highlighted the need for proactive policy planning rather than reactive responses.
Speaker: Arundeep from The Hindu
The U.S. delegation… have very strongly rejected calls for global governance in AI. I wanted your response to that. And doesn’t that go against what this entire summit was all about?
This comment exposes a fundamental tension in international AI cooperation – the conflict between national sovereignty and global governance needs, highlighting the challenges in achieving unified global AI policies.
This question revealed the complexity of international AI diplomacy and the limitations of consensus-building, adding a note of realism to discussions about global cooperation and forcing acknowledgment of geopolitical constraints.
Speaker: Arunodai Mukherjee from BBC
Overall Assessment

These key comments shaped the discussion by introducing multiple layers of complexity to what could have been a purely celebratory event. They moved the conversation beyond technical achievements and diplomatic pleasantries to address fundamental questions about AI’s social impact, governance challenges, and implementation realities. The comments created a more nuanced dialogue that balanced optimism about AI’s potential with realistic assessments of challenges in equity, enforcement, and international cooperation. The discussion evolved from showcasing India’s AI capabilities to examining broader questions of responsible development, inclusive access, and effective global governance – ultimately creating a more substantive and policy-relevant conversation.

Follow-up Questions
How do we ensure that voluntary AI commitments and non-binding declarations don’t remain just on paper and achieve actual implementation?
This addresses the critical gap between policy declarations and real-world enforcement of AI governance measures
Speaker: Audience member from Money Control
What were the specific areas where it was easier to build consensus versus areas that took more time in the declaration negotiations?
Understanding consensus-building challenges could inform future international AI governance efforts
Speaker: Audience member from Economic Times
What are the specific outcomes from each of the seven working groups that were formed before the summit?
The detailed results of these working groups could provide concrete deliverables and action items from the summit
Speaker: Independent journalist
Has there been discussion about SGI amendments compliance deadlines and the three-year takedown window with big tech companies?
This addresses potential regulatory conflicts between India’s new AI regulations and international tech companies’ concerns
Speaker: Independent journalist
What is the government’s position on whether AGI (Artificial General Intelligence) is coming in the next two years, and is India prepared for it?
This addresses India’s strategic preparedness for potentially transformative AI developments
Speaker: Audience member from The Hindu
Will there be some form of government support or viability gap funding for AI applications with social purposes that may not have immediate ROI?
This addresses the funding gap for socially beneficial AI applications that may not be commercially viable
Speaker: Audience member from CNBC TV18
What are the main focus areas of the next phase of the AI mission, and what specific asks have been made between AI leaders and the government?
This seeks clarity on the strategic direction and bilateral commitments emerging from the summit
Speaker: Audience member from Economic Times
How does the government respond to the U.S. delegation’s rejection of calls for global AI governance?
This addresses a fundamental disagreement on international AI governance approaches that could affect future cooperation
Speaker: Audience member from BBC
Is there consensus on demarking high-risk AI, or will that be left to national governments to decide?
This addresses a key technical and regulatory question about AI risk classification standards
Speaker: Audience member
When will the remaining 9 foundation models be launched, and what are the finalized terms of agreements with companies regarding government equity?
This seeks specific timelines and financial details of India’s AI model development program
Speaker: Audience member from Economic Times
What kind of legal framework is the government looking at to address cybercrime arising from AI?
This addresses the need for updated legal structures to handle AI-enabled criminal activities
Speaker: Audience member from Mint

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Smart Regulation Rightsizing Governance for the AI Revolution

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on designing governance frameworks for an AI-driven world, with particular emphasis on ensuring equitable access to AI resources for smaller and developing nations. The panel, moderated by Sabina Chofu from TechUK, brought together experts from Chatham House, Mozilla, NASCOM, and Cohere to explore international cooperation challenges and opportunities.


Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that global consensus on AI governance is unlikely given US-China tensions and weakened multilateral institutions. However, she emphasized that partial alignment on specific issues through coalition-building remains possible and pragmatic. The discussion highlighted significant barriers facing emerging economies, including limited access to compute resources, data silos, infrastructure gaps in power and connectivity, and skills shortages.


Rafik Rikorian from Mozilla advocated for open-source solutions as a path forward, drawing parallels to the Linux model where countries could contribute to shared infrastructure while maintaining sovereignty through local fine-tuning. He proposed alternative architectures like federated learning and data trusts that would enable international collaboration without requiring countries to surrender their data.


The panelists identified several promising areas for cooperation, including technical standards through frameworks like NIST and ISO, shared risk mitigation practices, and interoperability of resources. Examples discussed included Southeast Asian multilingual models, regional compute consortiums, and public-private data sharing initiatives. Halak Shirastava emphasized the importance of capacity building through shared evidence and procurement policies that open markets to global players.


The conversation concluded with optimism about increasing participation and convergence in AI governance standards over the next 12 months, despite acknowledging the significant challenges ahead.


Keypoints

Major Discussion Points:

Global AI Governance Challenges: The panel discussed the realistic limitations of achieving global consensus on AI governance in the current geopolitical environment, with Bella Wilkinson emphasizing that while complete alignment is unlikely, partial alignment on priority issues through coalition-building is possible and more pragmatic than traditional multilateral approaches.


AI Divide and Access Barriers: Rajesh Nambia highlighted the emerging “AI divide” that will be significantly larger than the previous digital divide, focusing on critical barriers for developing nations including limited access to compute resources, data quality and organization issues, infrastructure gaps (power, connectivity), and skills shortages.


Open Source as a Solution Framework: Rafik Rikorian advocated for open source models as a key mechanism for international cooperation, drawing parallels to Linux’s success and proposing that shared infrastructure with local fine-tuning could provide digital sovereignty while enabling global collaboration.


Standards and Technical Cooperation: The discussion emphasized the importance of technical standards (NIST, ISO frameworks), shared risk mitigation practices, and interoperability of resources as practical areas where international alignment is achievable, particularly benefiting smaller companies and emerging economies.


Capacity Building and Implementation: The panel addressed translating international cooperation into actual capabilities for emerging economies, emphasizing the need for shared evidence, procurement policies, sectoral governance approaches, and talent development in both technical and regulatory domains.


Overall Purpose:

The discussion aimed to explore practical approaches to AI governance in a multipolar world, focusing on how to create equitable access to AI resources and capabilities for smaller and developing nations through international cooperation, shared infrastructure, and coalition-building rather than traditional multilateral frameworks.


Overall Tone:

The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but progressively became more optimistic and solution-oriented. The moderator explicitly noted this shift, with panelists building on each other’s ideas to present concrete examples of successful cooperation models, open source solutions, and practical implementation strategies. The tone evolved from acknowledging significant barriers to emphasizing actionable opportunities and expressing genuine excitement about progress in the coming months.


Speakers

Speakers from the provided list:


Sabina Chofu – International Policy and Strategy Lead at TechUK (sister association of NASCOM in the UK)


Bella Wilkinson – Research Fellow on the Digital Society Program at Chatham House


Rajesh Nambia – President of NASCOM (National Association of Software and Service Companies in India)


Rafik Rikorian – Chief Technology Officer for Mozilla


Halak Shirastava – Global AI and Public Policy and Regulatory Affairs at Cohere (Canadian AI developer)


Audience – Unidentified audience member who asked a question during the discussion


Additional speakers:


Navreena Singh – From Credo AI (mentioned as unable to attend due to a meeting with the president)


Full session report: Comprehensive analysis and detailed insights

This panel discussion at an international AI summit examined practical approaches to AI governance and international cooperation, with particular focus on addressing barriers facing developing nations in AI adoption. The conversation, moderated by Sabina Chofu from TechUK, brought together experts from policy research, technology development, industry associations, and private sector AI companies. The discussion took place on the final day of the summit, with speakers maintaining a notably optimistic and solution-focused tone despite acknowledging significant challenges.


Reframing Global AI Governance: From Multilateral Idealism to Coalition Building

Bella Wilkinson from Chatham House opened the discussion by challenging conventional approaches to global AI governance. She argued that comprehensive multilateral cooperation on AI is fundamentally unrealistic given current geopolitical realities, including the accelerating US-China AI race and what she described as the unprecedented degradation of international institutions since World War II. The intense uncertainty surrounding frontier AI capabilities further complicates traditional diplomatic approaches.


However, rather than adopting a pessimistic stance, Wilkinson proposed a pragmatic alternative: coalition building around specific priority issues that could later be scaled through multilateral formats. This approach would focus on sovereignty and strategic autonomy messaging, allowing resource-constrained countries to adopt common approaches to data governance and pool resources while maintaining alignment on specific issues. The key insight was that collective benefits must massively outweigh what countries could achieve individually to make such coalitions viable.


This reframing proved influential throughout the discussion, with other speakers building on the coalition-building framework and focusing on practical cooperation mechanisms rather than idealistic global agreements.


Understanding the AI Divide: Beyond Digital Access

Rajesh Nambia from NASSCOM provided a comprehensive analysis of what he termed the “AI divide” – a gap that he argued would be significantly larger than the previous digital divide. His analysis distinguished between mere access to technology and genuine agency in shaping it, emphasizing that the AI divide fundamentally concerns countries’ ability to maintain sovereignty and self-determination in an AI-driven world.


Nambia outlined multiple interconnected barriers facing developing nations. Compute access remains severely limited and expensive, with meaningful AI development requiring GPU clusters that are cost-prohibitive even when adjusted for purchasing power parity. Data organization presents another critical challenge, with developing countries often having siloed data across government departments, leading to poor representation in AI training datasets.


Infrastructure gaps in power and connectivity create additional burdens, while skills shortages exist not just in AI development but crucially in AI governance itself. Nambia emphasized the need for talent development in both technical capabilities and regulatory understanding, particularly for government officials who must oversee AI systems without necessarily understanding their potential applications and harms.


Importantly, Nambia advocated for an innovation-first approach to governance, arguing that while regulation is necessary, countries seeking meaningful participation in the AI ecosystem must prioritize developing capabilities over implementing restrictive regulations that could stifle the very innovation they need to avoid being left behind.


Open Source Models and Collaborative Infrastructure

Rafik Rikorian from Mozilla provided concrete examples of how international cooperation could work through open source approaches. Drawing parallels to Linux, where virtually every computer globally runs on shared code while allowing diverse implementations, Rikorian proposed that AI could follow similar collaborative models. This would enable countries to contribute to common infrastructure while maintaining sovereignty through local adaptation and fine-tuning.


Rikorian discussed federated learning as an example, referencing Google’s handwriting recognition training across Android devices, where training occurs locally while only model weights are shared centrally. This approach could enable international collaboration on healthcare or climate research without requiring countries to surrender sensitive data across borders.
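To make the mechanism concrete, here is a minimal sketch of federated averaging, the idea behind the federated learning approach described above: each participant trains on data that never leaves it, and only the resulting model weights are aggregated centrally. The one-step least-squares “training” and all names here are illustrative assumptions for this summary, not the API of any specific production system such as the one referenced by Rikorian.

```python
import numpy as np

def local_train(X, y):
    """Fit a tiny linear model on data that never leaves the client."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w  # only the weights are shared, never the raw (X, y)

def federated_average(client_datasets):
    """Central coordinator: average locally trained weights."""
    weights = [local_train(X, y) for X, y in client_datasets]
    return np.mean(weights, axis=0)

# Simulate five clients whose private data reflects the same signal.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

global_w = federated_average(clients)
print(np.round(global_w, 2))  # close to the shared underlying signal [2., -1.]
```

The design point this illustrates is the one made in the panel: collaboration happens at the level of model parameters, so countries or institutions can jointly improve a shared model without surrendering sensitive data across borders.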


He also described data trust models, citing examples of Hawaiian communities creating data collectives for genomic information used in pharmaceutical research, and radio stations in the Pacific creating similar collaborative structures. Mozilla’s Data Collaborative represents an attempt to create more ethical approaches to data sourcing that ensure attribution and compensation for data providers.


Rikorian mentioned research suggesting significant potential savings from switching to open source models, though he acknowledged the economist’s name was unclear from his notes, indicating this was preliminary information rather than definitive analysis.


Industry Perspectives on Standards and Interoperability

Halak Shirastava from Cohere brought a private sector perspective emphasizing the practical importance of technical standards and interoperability. She argued that frameworks like NIST and ISO are particularly valuable for startups and smaller companies because of their flexibility and evolutionary nature, contrasting favorably with rigid country-specific regulations that could exclude smaller players from the market.


Shirastava identified three key areas for international alignment: technical standards providing flexible compliance frameworks, shared practices around risk mitigation enabling companies to learn from each other’s experiences, and interoperability of shared resources supporting the entire AI ecosystem from large technology companies to emerging startups.


Her approach to capacity building went beyond traditional training programs to emphasize shared evidence, performance benchmarks, and cross-border procurement policy networks. She argued that emerging economies need substantive support rather than superficial engagement to participate meaningfully in AI development, including access to real performance data and implementation experiences rather than just workshops and presentations.


Practical Cooperation Models and Examples

The discussion identified several promising examples of international cooperation that could serve as templates for broader collaboration. Regional compute consortiums, such as India’s AI mission cluster shared between government, academia, and industry, demonstrate how countries can pool resources while maintaining local control. Cloud credit programs negotiated with hyperscalers provide emerging economies with access to necessary computational resources.


Bella Wilkinson mentioned language preservation and development initiatives in Southeast Asia as examples of how countries can collaborate on shared challenges while maintaining cultural sovereignty. These projects demonstrate how multilingual AI development can balance global collaboration with local adaptation needs.


Data sharing initiatives between government, academia, and industry within countries provide models for broader international cooperation, showing how different sectors can collaborate on common datasets and infrastructure while maintaining appropriate governance and oversight.


Sectoral Approaches and Implementation Challenges

The discussion revealed consensus around the importance of sector-specific governance approaches. Nambia emphasized that meaningful AI governance must recognize that potential harms in healthcare differ fundamentally from those in financial services or education. This sectoral approach requires deep understanding of specific applications and use cases rather than broad horizontal regulations that may not address real-world implementation challenges.


The conversation also highlighted the dual challenge of developing both technical AI capabilities and governance expertise simultaneously. Countries need regulatory talent that understands both the capabilities and limitations of AI systems, presenting particular challenges for nations with limited existing AI expertise.


Audience Engagement and Transparency Questions

During the question period, an audience member raised concerns about transparency and accountability in AI systems, referencing recent discussions about document releases and public access to information. The panel acknowledged these concerns while noting that transparency requirements must be balanced with practical implementation needs and competitive considerations.


Moderator Sabina Chofu noted throughout the discussion that speakers were maintaining a notably positive and solution-focused tone, which she attributed to the collaborative spirit of the summit’s final day and the practical focus on actionable cooperation mechanisms rather than abstract policy debates.


Future Directions and Optimistic Outlook

Despite acknowledging significant challenges, the discussion concluded on an optimistic note about increasing participation in AI development and governance. Shirastava noted growing excitement about AI development among both companies and countries, suggesting that community participation will continue expanding rather than contracting.


The economic arguments for collaborative approaches, including potential cost savings from open source adoption and shared infrastructure, suggest that practical considerations may drive adoption of more cooperative models. As the economics of AI development become clearer, countries may find that collaborative approaches offer better value than attempting to develop capabilities independently.


The panel successfully reframed AI governance from idealistic multilateral aspirations toward pragmatic coalition-building focused on specific technical and resource-sharing mechanisms. The convergence of speakers from different sectors around similar solutions suggests these approaches have broad stakeholder support and could represent viable paths forward for more inclusive AI governance.


However, significant implementation challenges remain, particularly around scaling coalition-building approaches beyond major economic powers, developing sufficient governance talent in countries with limited technical expertise, and balancing sovereignty concerns with the need for international cooperation. The discussion provided a foundation for addressing these challenges through practical, incremental cooperation rather than comprehensive global agreements.


Session transcript: Complete transcript of the session
Sabina Chofu

about this morning is, right, designing governance for an AI-driven world. So what we’ll try to do with a pretty excellent panel, as I’m sure you’ll agree, is talk a bit about shared compute and data initiatives that hopefully give all nations access to AI resources. We’ll look a bit at how to level the playing field for smaller and developing nations. And we’ll talk about collaboration in key sectors like healthcare and education and climate resilience. I’ve got a perfect panel to do that with. I’m going to introduce them all first, and then we’ll dive straight into the conversation. So unfortunately, Navrina Singh from Credo AI couldn’t be with us this morning. She’s got a meeting with the president, so she’s excused.

But we do have… Why don’t I start with, just next to me here, Bella Wilkinson, who’s a research fellow on the Digital Society Program at Chatham House. Next to her is Rafik Rikorian, and I hope I’ve pronounced that vaguely okay, who is the Chief Technology Officer for Mozilla. Next to him, we’ve got Rajesh Nambia, who is the President of NASSCOM, our sister association here in India. And last but not least, we’ve got Halak Shirastava, who’s Global AI and Public Policy and Regulatory Affairs at Cohere. And for those of you who don’t know me, I’m Sabina Chofu, I’m International Policy and Strategy Lead at TechUK. So we are the sister association of NASSCOM back in the UK.

So without further ado, we will start with setting a bit of a global context, and who better to do that than Bella. So from a kind of geopolitical perspective, how realistic, I guess, is alignment on AI governance across countries with, it’s fair to say, very different strategic interests right now? And where do you see maybe multilateral institutions, and I know multilateralism is not a very popular theme these days, but where do you see multilateral institutions or maybe other international players playing a role in this space? So over to you.

Bella Wilkinson

Thank you, Sabina. Thanks to my fellow speakers. It’s great to be here today, really keeping the energy up on the final day of the summit. We can all do it. Let me answer your question directly and then perhaps elaborate a little bit more in detail. Global consensus on how to govern AI is a no-go. It is not going to happen in this geopolitical environment. However, partial alignment on priority issue areas is possible, and it’s pragmatic to throw our weight behind these smaller gatherings that we can then scale using the multilateral format. Now, let’s take a second. Let’s take a second to sketch out the state of play. We have some great experts in the room, on the panel, so I won’t spend too long doing this.

We have been absolutely covered in really optimistic summit rhetoric, walking into Bharat Mandapam, going to side events over the course of this week. But despite the optimism, outside of these walls in the background, the US-China AI race continues to accelerate to the umpteenth degree. The capabilities of advanced and the most frontier AI systems and models, the little we know about their capabilities, mind, with huge gaps in transparency, continue to advance. And global scientists only recently have issued warnings about the state of the science and the intense uncertainty surrounding these capabilities and the impact they might be having on our communities and societies. Well, it’s a good thing we have strong international institutions and shared values. We don’t. You know, it’s a really difficult time for global cooperation outside of AI. We’re seeing, I would argue, an unprecedented degradation since the Second World War of the international organizations, the shared values, the rule of law that we have all held so dearly. So suffice to say, it’s a difficult time for global governance; it’s a difficult time for the global governance of AI. Now, institutions in the past have very much been brokers, mediators, and scalers of consensus on tricky governance issues, and some of the governance problems we’re facing today are pretty old, right? I mean, I’ve encountered them in previous roles at Chatham House and other areas of tech, and I’m sure the experts on our panel have come across them. And the core governance puzzle that we need to figure out is this: taking into account the state of geopolitics, the uncertainty around the state of the science, the market dynamics mediated by these leading labs and the intensely, intensely competitive US and Chinese AI race dynamics, how on earth do we bring rivals and competitors around the same table?

How do we bring states with a nominal or a minimal alignment of interests and incentives into the same room? Now, you started by asking me about multilateralism and institutions, but maybe let’s reframe this and talk about coalitions. In other areas of governance, what we’ve seen is intense coalition building in crisis or unstable settings around a trusted mechanism, a trusted approach, perhaps in the absence of shared values and principles. And what I’m really interested in, in the context of AI, is where coalition building can develop trust around a credible governance approach, adopt a state champion, get support from associations, from builders, from leading labs themselves, and then scale it using the multilateral format. And over the past few days, I’ve been really excited by some of these splintering-to-scale dynamics that I’ve seen, maybe in conversations on verification, on-chip hardware, risk mitigation strategies, even anonymized collection of usage data, which came out of the commitments yesterday.

Now, what’s the messaging that can drive this coalition building in the absence of trusted institutions, in the absence of shared values? I’ll get into this later in my remarks, but I think it has to be sovereignty and strategic autonomy. Resource-constrained countries who might decide to adopt a common data governance approach, who might decide to pool resources like compute, have to also consider a degree of governance alignment, again, at this low-hanging fruit, in order to not only withstand the dynamics of the AI race, but to ensure that the collective benefits of cooperation and governance alignment massively outweigh anything they could do individually. So I think I’ll leave it there. Slightly pessimistic take. Let’s see if there’s some more optimism on the panel.

Sabina Chofu

Thank you so much, Bella. I don’t think it was that pessimistic. I think you made it sound very pragmatic in terms of, look, the world is not what we want it to be, and there isn’t the level of multilateral cooperation that we maybe used to have. But you have talked about coalition building, and it’s probably the best we can hope for in the world as it is, as opposed to the world as we’d like it right now. And Rajesh, can I turn to you next? For emerging economies, obviously access to compute, data and infrastructure is critical, but what do you see as some of the most pressing barriers, but also maybe opportunities, for AI adoption in India and beyond?

Over to you.

Rajesh Nambia

First of all, thank you for having me on the panel. Pleased to be with all of you, and a few of you showed up here as well, so thank you for coming. We wish this was the Modi inauguration last evening, which drew a little bit more than this crowd, but nevertheless, we’ll make do with this. But you know, I believe we used to talk about the digital divide for a long period of time, and while that had its own puts and takes when you compare a smaller economy and a smaller country with a larger one and so on, I think the AI divide is going to be much, much bigger than the digital divide which we saw. The biggest difference is that the digital divide was at least about, you know, access and so on, whereas this is all about agency, and it can completely put you on the back foot. So it is such an important topic when you talk about the broader haves and have-nots and what really goes on with the larger and smaller economies and so on. And I truly believe that accessibility, when you look at the broader scale, will come across multiple things, starting with compute, one of the largest pieces of what we are talking about here. As you mentioned, there is the race between the US and China and so on and so forth, but if you leave out those two countries, then of course we have a big drop in terms of where the real access is going to be. And I believe totally that, you know, the continued limited access to the broader compute facility is going to be unduly putting some of these smaller countries, especially the developing ones, at a little bit of a disadvantage.

So, I think there’s a lot that can be done around it in terms of saying, you know, what is that, you know, countries can potentially do in terms of pooling and so on. But I think there is certainly an issue when it comes to compute. And, you know, not just in terms of accessibility, but also in terms of expense and so on, because at the end of the day, all of these are, even if you use the purchasing power parity, and then sort of look at what it costs for people to sort of get into the kind of level of GPUs, potentially, or GPU clusters one has to produce to even have a meaningful language model and so on.

I think that’s going to be a very different ballgame. And the second element of this whole broader issue that we’re talking about is the data: the organization of data, the availability of data, the quality of data, and so on. The more you get into the developing world, you will find that the data itself is very siloed in many ways. There are, you know, different state silos, different department silos, and so on, and it gets to a point where the data, which is such an important and integral part of everything to do with AI, the data which gets fed into the broader models and eventually the AI systems, will not necessarily have the right representation of that population, which is a huge concern. Of course, India is slightly luckier in many ways, in terms of us playing that game a little bit, punching a little bit above our weight in some sense. But when you go down the list of countries which do not have access to all of these, I think you’re going to find it even harder to solve the data issue, and data availability, data quality, all of that becomes a bigger issue.

And when we talked about the infrastructure gap, the compute gap, it’s a little bit more than just the pure compute itself, GPUs and so on; it’s also about connectivity and power. These are issues which we somehow take for granted in other segments, but I think you will find that power is going to be a huge foundation for all of that. As you know, there are multiple layers in building any of the AI systems, and one of the bottom-most layers is going to be power: what really happens to the power, and if it has to be clean power, does it put an additional tax on the developing world for making sure that power comes out clean? Connectivity is a huge issue too. Even though it’s kind of broadly solved in some sense with all the satellite options and so on, the kind of connectivity you need to run a truly inclusive AI system is going to be very different from what people have thought otherwise.

And then of course we can go on and on in terms of the other layers: the skills issue, the availability of skills, and ensuring that you have the right skills not just to leverage AI but also to build AI. I mean, there are two different types of capabilities that you need to produce in any country. So these are the issues, and the opportunity itself would be to look at this and say: are there other ways of collaborating, other ways of partnering, and so on? Because, especially when you go down the list, we have close to 200 countries or so in the world, and when you leave the top 5 or 10 and then keep going down the list, it becomes harder. I mean, I don’t think that everybody is going to be producing a full-blown large language model for themselves; at that point in time, the question will be: can you really partner, can you really leverage some of the common systems that can be built across these countries, and so on.

Sabina Chofu

Thank you. I mean, you’ve done a brilliant job of laying out all three problems we’ve got and then saying you’ve got a long list afterwards in terms of cooperation. But I love the touch of optimism there at the end. It’s like, you know, if you lift a few countries out of the room, you still have a hundred and whatever, 85, that need to figure it out. So I liked a lot of that framing. And thanks for touching on

Rafik Rikorian

I mean, unsurprisingly, being someone from Mozilla, I’ll probably go with the open source angle as one of the opportunities to actually align the talent, align the capabilities, and actually do shared infrastructure. Maybe I’ll draw two analogies to think about, and then we can go deeper into those as they apply to AI. For all practical purposes, every computer on the planet runs Linux. There are a few iPhones here and there on top of it. But the Linux model, I think, is a good one for all of us to think about: every country, every nation in the world, almost every company in the world, contributes to a single code base which has been deployed across billions of computing devices across the planet.

And there is lots of derivative work that happens from it. So a company like Google can take that and make it into Android; a vending machine company can deploy Linux onto a Raspberry Pi and run it inside their vending machines. So I think there’s an analogy here of being able to use shared software infrastructure as a collaboration mechanism: we can all pool resources together but still have sovereignty on top of it. We can all be contributing to this common core but then fine-tune our way to our own particular implementations. And I think if we take that and marry it with a web analogy: in the early 90s, with the original web, you needed to ask for permission in order to deploy a website.

And by permission I mean, effectively, you had to go buy yourself a Solaris box, or you had to buy yourself a Windows NT server, and you were trying to configure an ActiveX scenario. And the beauty of what Mozilla and Firefox did, and we’re not the only ones who did it, is a forced openness throughout the stack that enabled anyone, without permission, to build whatever they wanted. And I think we need to find a similar moment. So in that world, we went from the Windows NT stack and all of IIS to the LAMP stack. And the LAMP stack has these gorgeous analogies: just like anyone can build on Linux.

When Facebook needed PHP to move faster, they did massive improvements on PHP, which then trickled down to all of us. So people can contribute in different ways across it. That’s not the world we’re currently living in with AI. We’re living in a world where a few frontier model companies are effectively doing governance for all of us in some way, shape, or form. And I agree with my colleague that that’s an untenable situation. I do live in San Francisco, but you don’t want four people in San Francisco making governance decisions for the entire world; that doesn’t make a lot of sense. So I do think if we can find the LAMP stack equivalent model for AI, and this is actually what I’ve turned all of Mozilla towards: how do we define open standards, how do we define open interfaces, so that the vibrancy of the open source community can come together and actually build solutions that work for every single person, every single community, every single government on the planet.

You can sort of build upon it, you can contribute to the common base, and then take it in a way that makes it more aligned with your country’s values or your company’s values or your individual values, and you can fine-tune your solution out of that. So I think there is an analogy here around how open source could actually provide digital sovereignty across all the different levels: give us agency as a person, give opportunities for flexibility at a corporation level, and then give countries the ability to own their version of the stack. That could actually be quite beautiful if we can figure out how to do it in an appropriate way.

Sabina Chofu

I tried to get a dose of optimism, and you have given me a dose of optimism. But I’m absolutely shocked you talked about open source. Thanks so much, Rafik. And I did appreciate that you brought up standards, because I’m going to talk to Halak next, and we’re going to go a bit into collaboration and standards here. So obviously, with the myriad of AI governance frameworks, I’m going to turn to you on the question of where you see potential for alignment on standards, maybe some interoperability, maybe some risk management frameworks. So keep us on the hopeful path, please.

Halak Shirastava

I am here to provide the hopeful perspective. Let me start out by saying that I lead global public policy at Cohere. Cohere is a Canadian AI developer; we build models, and we have an agentic AI solution called North. So in my role, I look across the global regulatory framework. That means if our startup wants to, you know, do business in a certain country, I try to understand the regulatory landscape of that country, and then I advise our company on whether it’s favorable or not. When we’re talking about governance and frameworks that exist today, my perspective is that it’s not there yet, but I have a more promising view of it. I think that on certain principles we are converging to where we need to go, and there are strong opportunities.

Technical standards is one of them. You know, there are frameworks like the NIST and ISO frameworks. For startups, these are key. The reason they’re key is because they’re flexible and they’re evolving. If we just go country by country, what that’s going to do is price out smaller companies. But if we have an international framework that is evolving and flexible, one that also includes industry coalitions, which a lot of the model developers are a part of, but other stakeholders can be a part of as well, I think it really helps. The second thing I would say is around shared practices around risk mitigation. So I think there’s strong opportunity there as we come together and share documents or, you know, evaluations around misuse or model capabilities or the impact of models.

I think, you know, like I said, we have a way to go, but we are moving closer to that. And then the third thing I would say is interoperability of shared resources. This is key, key, key. We have a big ecosystem. So, yes, there is big tech involved, but there are smaller players, and every single day there are new startups wanting to emerge and wanting to have a go-to-market strategy. And the only way this is possible is if all of industry, big and small, the whole ecosystem, starts sharing documents and documentation around, you know, red teaming or evals or multilingual benchmarks and things like that, to come to some sort of consensus.

Sabina Chofu

Thanks so much. I’m really enjoying this positive vibe we’re going with. And, you know, that combination, I think, links really nicely back to what Bella was saying around coalitions that build on themes, right? It’s like: where do we think we have common ground, and what do we think we can build on? So I really, really enjoyed that contribution. Rajesh, can I turn to you next? Because I did wonder what all this means in kind of smaller and developing economies. And maybe, if you have any examples of shared standards, pooled resources, any of the stuff that Halak was talking about, public-private models, or anything that you’ve seen that looks promising, that looks like it could deliver.

Thank you.

Rajesh Nambia

You know, as we said, the moment you look at shared models, there are multiple reasons why we want to do this. One, of course, as we’ve talked about, is the cost involved in doing some of that. I think that itself is becoming cost prohibitive, and hence there may not even be an option for many of the countries but to have this shared model. We also find regional compute consortiums that, you know, folks can potentially create, and you often see examples where, for example, a standard data set and stuff like that is being shared, not just within a country: it could be government, academia, and industry sharing the same sort of data sets, making sure that they’re able to leverage that in some sense.

Compute is clearly something which continues to be, you know, a shared resource in many cases. Even in India, for example, our own AI mission has created this cluster which can be broadly leveraged by industry, academia, and the government, in terms of ensuring that they’re able to get access to the right set of GPUs, GPU farms, and ensuring that they’re able to use that and take it forward. So, public-private sharing of data, certainly the compute consortiums, and then cloud credits. That’s something which sovereigns have been able to work out with the hyperscalers, especially in terms of getting a lot of cloud credits for the GPUs, because it’s not just about building a frontier model; even to leverage a frontier model, build some reasoning models on top of it, and ensure that you’re able to build an application which is meaningful, it’s not that every time you need a powerful GPU, but there are occasions where you definitely would, and hence using some of those cloud credits will become a big need. And then of course, when you switch to regulations and so on: how do you make sure that even having a policy is something which is shared? You don’t want to reinvent the wheel every single time. So do you have a method by which you could look at what is out there in the world, leverage it, and try to reuse it? Because what you don’t want is a hundred versions of the same thing with a few nuances here and there. So that’s something which I think companies will try to create a model for as well.

Sabina Chofu

Thank you so much and I’m gonna kind of turn over to

Audience

Yes. Yes. Looking forward to a truth-, transparency-, and accountability-driven world. It takes 30 years for the Epstein files to come out in a place like America, the developed world. Is that the speed of the system till it collapses and till we start a new world? Are we resigned to that fate?

Sabina Chofu

Yeah, so I can’t really see the link between the Epstein files and the topic at hand, but point taken on the speed at which truth comes out of the system. Thank you. So, just to build on what Rajesh was saying there on capability: maybe we move into a bit of cross-border cooperation, and Bella, if I can turn to you to build on those points. Because what we are seeing across the developing world in particular is that it’s often the institutional capacity that’s the issue; even with all the engagement and all the investments, you still run into limits.

And I saw you were taking notes furiously, so I’m sure you have reflections on what has been said so far. But also, what are some of the resources

Bella Wilkinson

dependencies, figure out what they want to invest in and what dependencies they’re willing to accept; wanting to build strong institutions, again, that can mainline AI directly into public service delivery and, as you said, enable cross-border cooperation; they might take a step back and figure out which foreign capabilities or foreign services they’re willing to accept at some levels of the stack, and where they’d like to invest in indigenous solutions. And I mentioned open source earlier because this has come up time and time again, and I’m sure it’s going to be absolutely no surprise to our audience here today. An example which has really stuck with me, and Rafi, I’d be really interested in your thoughts on this, is the Southeast Asian Languages In One Network model, the multilingual SEA-LION LLM.

And this is something we’ve seen, again, in a really interesting collaboration with AI Safety Asia: open models with local adaptation, really balancing, again, inputs from open source models potentially provided by foreign providers with adaptation to a local context. And so I think, leaving the summit, what I’m really going to be interested in is this connection between drawing on inputs from the open source community, fine-tuning and locally adapting their contributions, and then perhaps doing so not only in the service of strong, robust institutions at the national level who are AI-ready, but also at this kind of collective, cross-border level. I hope that makes sense.

Sabina Chofu

It does, and I’m going to let Rafi feed into that as well, because you’ve segued really nicely into his part. Feel free to react to what Bella has said, but if you can, also touch upon what you’ve seen as best practice in international and cross-border collaboration, maybe in healthcare, climate resilience, education. Anywhere you’ve seen good stories to tell, please do share.

Rafik Rikorian

I mean, I do think a lot about the local fine-tuning, and I think that’s actually a really powerful concept: we can all contribute to a core and then locally fine-tune for our values and our needs. And I think this has shown up in a bunch of different ways. I’m interested personally in all these alternative, I don’t even want to call them alternative, but other architectures that enable this to be possible, because in some ways we’re kind of being fed a regime that says it’s not possible, but I think architecturally it actually is, in a bunch of different ways. So I love the indigenous data model, looking at what different indigenous peoples have done around data collectives for their local areas. There’s a group of people, for example, in Hawaii that is doing this for their genomic data, because genomic data is really useful for pharmaceutical models, and so they’ve been looking for ways so that they can both monetize their data and also trace its provenance as it goes through these pharmaceutical models.

So there are some professors out of UCSD starting to build what these data trusts could actually look like for Hawaiian people, and I think that model could be replicated in lots of different parts of the world. Mozilla is actually attempting to do a bunch of this. We’re creating something that we call the Mozilla Data Collaborative, or Collective, sorry. And what the Collective is meant to be is a marketplace of ethically sourced but provenance-traced data sets, so that you can bring your data. It will actually help you scrub it, clean it, et cetera, and also make sure you have the appropriate licenses on it, so that people can come find the data sets they want to train their models on, but with attribution given, compensation given, et cetera.

So we’re literally in conversations with almost every radio station on the planet to try to get their recordings and their transcripts onto the marketplace, not for Mozilla to make money. In fact, we actually want the radio stations to have a monetization path for all the data that they’re sitting on, rather than simply have it scraped by big model providers trying to soak it into their systems. Instead, require that it be licensed, require that compensation be given. So I think there are models there. And on the computational side, I think there are also a lot of interesting things showing up around federated learning opportunities. For those of you who don’t know what federated learning is, think of how Google did this very famously when they trained their handwriting model across everyone’s Android phones.

So your handwriting is very personal and private. Your handwriting stays on your device, and Google was able to train a handwriting recognition model without requiring access to your data, because part of the training happened on your phone, and then the model weights were shipped back up for centralized aggregation. And I think something like that could actually be an interesting model for international collaboration: I can bring my data to the game, my healthcare data, my values data, my language data, but not have to release it to a different company, or sorry, a different country. Instead, I do part of the training on my compute, on my infrastructure, and only ship model weights back up, and we can then create bigger models across borders and across geographies that take into account different healthcare scenarios, different value systems, et cetera. So I think there are these interesting alternative architectures that we can actually start leaning into, these data trust models, these federated learning models, that could be massive enablers for cooperation and allow us to build these foundational things that we can then fine-tune and bring to our local context.
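[Editor's note: the federated learning pattern described above, local training on private data, with only model weights shared for central averaging, can be illustrated with a minimal federated averaging (FedAvg) sketch. This is a toy example under invented assumptions, not any production system; the clients, data, and function names are all hypothetical.]

```python
# Minimal federated averaging sketch: each "client" trains a linear model
# on its own private data; only the weights leave the client, and the
# server averages them weighted by dataset size. Illustrative only.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=50):
    """One client's local step: gradient descent on mean squared error.
    The raw data (X, y) never leaves this function."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only the updated weights are shared

def federated_average(client_weights, client_sizes):
    """Server step: average client weights, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two clients with private datasets drawn from the same distribution.
clients = []
for n in (100, 300):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(5):  # a few communication rounds
    updates = [local_train(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

# global_w should now be close to true_w, even though no client's
# raw data was ever pooled centrally.
```

Real deployments (such as the Android handwriting case mentioned above) add secure aggregation and differential privacy on top of this basic weight-sharing loop.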

Sabina Chofu

Thanks so much, Rafi. That fine-tuning seems to be definitely a thing in this conversation, how you build for different cultures and countries. And Halak, maybe I can come to you next, because we keep talking about international cooperation and coordination. But I’m wondering, how do you translate that, you know, chit-chat into actual skills, capacity, capability for emerging economies? I mean, we are at a very international AI Impact Summit. So how do we go from talking about governance and all this international policy to actually delivering for emerging economies?

Halak Shirastava

It’s a good question. Let me start out by saying capacity building isn’t just, you know, running workshops or talking to regulators about what should be done. Capacity building for emerging economies especially is critical because emerging economies have unequal access to data, information, and technology, right? So what are we trying to solve for here in terms of capacity building? The first thing I would say is shared evidence. We need players to feed into this capacity-building system with documents, results, performance benchmarks, to lift up other players. That, I think, would be number one.

The second thing I think is key and sometimes overlooked is the value of procurement policies. And I agree with Isabella: what if we had an industry coalition, a cross-border network, solving for procurement policies or procurement rules? What this does is bring in global players, so now you’re opening up your country to different markets. The next thing I would say is this: there are developers, who develop the technology, and then there are deployers, who buy the technology and use it, for example, a public sector agency.

Economist Frank Nagel recently reported that approximately 24 billion U.S. dollars are being wasted by not switching to open source models right now. So the economics are starting to make a lot of sense. I think once all these stars align, it becomes almost obvious what an answer could look like for local governments around open source AI models, et cetera. So I’m really excited for that in the next 12 to 18 months.

Sabina Chofu

Thank you. Rajesh?

Rajesh Nambia

No, I agree with both of what’s been said so far, but I also want to give a sense of this: when you look at AI governance, people tend to lead with regulation first. I believe that countries, and especially the countries we talked about from an inclusion point of view, have got to lead with an innovation-first mindset, because while regulation is required and certainly needed, innovation is probably needed more in some sense. And also, on AI governance: while there could be horizontal governance which will apply to every AI system, I think the more meaningful governance comes when you get into sectoral governance, meaning when you look at AI systems for healthcare, you’ll find that the understanding of a harm in the healthcare segment is very different from financial services, and so on. So how do you get into those sectoral areas? Then you can have a meaningful governance structure. And last but not least, you need the right talent, people who actually understand all of this, in the public sector and among those who are supposed to be governing it. This is not the talent for broader AI model building, but talent in the governance space, in governments and among the people actually regulating. If they don’t understand the real harm, it’s going to be a bigger issue, and especially when it comes to the list of countries we talked about: the deeper down the list you go, the more you will find this talent issue in terms of understanding.

Sabina Chofu

Thank you. And, you know, as someone who lives in Brussels, I’ll make sure to take that message back. Halak.

Halak Shirastava

Okay, so what am I most excited about in the next 12 months? In the last few days, you’ve seen companies really, really excited about AI, but what you’ve also seen is countries very excited about AI. So what does this mean for governance? It means that the community and the participation are only going to increase; I don’t see it going backwards. And so, as technology evolves, more players are going to have a voice in the system and in the standards bodies, whether ITU or ISO. And I think because of this convergence, we as a society are just going to increase our literacy, of not only AI but technology generally, and bring it into whatever we’re in, whether the private sector or the public sector.

And because of that, yeah, I think a lot of progress will be made in the next 12 months, and you’ll see it as it converges.

Sabina Chofu

Thank you so much. Thanks to all the panel. Thanks for being here, and enjoy the rest of your day. Thank you. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Bella Wilkinson
3 arguments · 155 words per minute · 979 words · 377 seconds
Argument 1
Global consensus on AI governance is unrealistic in current geopolitical environment, but partial alignment on priority issues is possible through coalition building
EXPLANATION
Wilkinson argues that while complete global consensus on AI governance won’t happen due to current geopolitical tensions, countries can still achieve meaningful cooperation by focusing on specific issue areas where alignment is possible. She advocates for coalition building around trusted mechanisms that can then be scaled using multilateral formats.
EVIDENCE
She cites the ongoing US-China AI race, degradation of international organizations since WWII, and examples of coalition building around verification, chip hardware, risk mitigation strategies, and anonymized usage data collection from recent commitments.
MAJOR DISCUSSION POINT
Coalition building as alternative to multilateral consensus
AGREED WITH
Sabina Chofu
Argument 2
Local fine-tuning of global models allows countries to adapt AI systems to their specific values and contexts
EXPLANATION
Wilkinson supports the concept that countries can take open source AI models and adapt them locally to reflect their specific cultural values and contexts. This approach balances leveraging global AI capabilities with maintaining local sovereignty and relevance.
EVIDENCE
She mentions the Southeast Asian Languages In One Network model (the multilingual SEA-LION LLM) as an example of multilingual AI developed through collaboration with AI Safety Asia using open models with local adaptation.
MAJOR DISCUSSION POINT
Balancing global AI capabilities with local adaptation
AGREED WITH
Rafik Rikorian
Argument 3
Coalition building around sovereignty and strategic autonomy messaging can drive cooperation where collective benefits outweigh individual capabilities
EXPLANATION
Wilkinson suggests that resource-constrained countries should focus on sovereignty and strategic autonomy as key messaging to build coalitions. Countries that adopt common approaches to data governance and pool resources like compute need governance alignment to ensure collective benefits exceed what they could achieve individually.
EVIDENCE
She notes that resource-constrained countries considering common data governance approaches and pooled compute resources must also consider governance alignment to withstand AI race dynamics.
MAJOR DISCUSSION POINT
Strategic messaging for international AI cooperation
Rajesh Nambia
7 arguments · 195 words per minute · 1953 words · 598 seconds
Argument 1
The AI divide will be much bigger than the digital divide, creating significant disadvantages for smaller and developing economies
EXPLANATION
Nambia argues that the gap between AI haves and have-nots will be more severe than previous digital divides because AI is fundamentally about agency rather than just access. This creates a situation where smaller economies could be put at a significant disadvantage compared to larger ones.
EVIDENCE
He explains that unlike the digital divide which was about access, the AI divide involves agency and can put countries at a ‘different back foot,’ with major drops in access when looking beyond the US and China.
MAJOR DISCUSSION POINT
Scale and impact of AI inequality globally
Argument 2
Limited access to compute resources, expensive GPU clusters, and power infrastructure create major barriers for developing nations
EXPLANATION
Nambia identifies compute access as one of the largest barriers for developing countries, noting that even with purchasing power parity adjustments, the cost of GPUs and GPU clusters needed for meaningful AI development remains prohibitively expensive. Additionally, power infrastructure requirements add another layer of complexity.
EVIDENCE
He mentions the continued limited access to compute facilities, the expense of GPU clusters even when adjusted for purchasing power parity, and the need for clean power as a foundational requirement for AI systems.
MAJOR DISCUSSION POINT
Infrastructure barriers to AI development
Argument 3
Data availability, quality, and organization issues in developing countries lead to poor representation in AI systems
EXPLANATION
Nambia explains that in developing countries, data is often siloed across different state departments and organizations, making it difficult to create comprehensive datasets. This results in AI systems that don’t properly represent the populations they’re meant to serve.
EVIDENCE
He describes how data in developing countries is ‘very siloed’ across different state and department silos, leading to AI systems that lack proper representation of the population.
MAJOR DISCUSSION POINT
Data quality and representation challenges
Argument 4
Connectivity and clean power requirements add additional burdens on developing economies
EXPLANATION
Nambia argues that beyond compute resources, developing countries face fundamental infrastructure challenges including reliable connectivity and clean power supply. These foundational requirements are often taken for granted but represent significant barriers for inclusive AI systems.
EVIDENCE
He notes that power is ‘going to be a huge foundation’ for AI systems, and questions whether requiring clean power puts additional tax on developing countries. He also mentions connectivity issues despite satellite options.
MAJOR DISCUSSION POINT
Foundational infrastructure requirements for AI
Argument 5
Regional compute consortiums, shared datasets between government-academia-industry, and cloud credits from hyperscalers show promising collaboration approaches
EXPLANATION
Nambia identifies several practical models for cooperation including regional consortiums for compute sharing, collaborative data sharing between different sectors within countries, and arrangements with cloud providers for GPU access credits. These approaches help address cost and access barriers.
EVIDENCE
He cites India’s AI mission creating GPU clusters shared by industry, academia, and government, and mentions sovereigns working with hyperscalers to get cloud credits for GPU access needed for building applications on frontier models.
MAJOR DISCUSSION POINT
Practical cooperation models for AI development
AGREED WITH
Rafik Rikorian, Halak Shirastava
Argument 6
Countries should lead with innovation-first mindset rather than regulation-first, and focus on sectoral governance for meaningful AI oversight
EXPLANATION
Nambia argues that countries, especially those focused on inclusion, should prioritize innovation over regulation initially. He also advocates for sector-specific governance approaches rather than horizontal governance, as understanding of harm varies significantly across sectors like healthcare versus financial services.
EVIDENCE
He explains that while regulation is needed, innovation is ‘probably needed more’ for inclusive countries, and notes that understanding of harm in healthcare is ‘very different from financial services.’
MAJOR DISCUSSION POINT
Balancing innovation and regulation in AI governance
AGREED WITH
Halak Shirastava
DISAGREED WITH
Halak Shirastava
Argument 7
Talent development in governance and regulatory understanding is critical, especially for countries with limited AI expertise
EXPLANATION
Nambia emphasizes that beyond technical AI talent, there’s a crucial need for people who understand AI governance in both public and private sectors. This includes government officials and regulators who can properly understand potential harms and implement effective oversight.
EVIDENCE
He distinguishes between talent for ‘broader AI model building’ versus talent ‘in the governance space in governments’ who understand ‘real harm,’ noting this becomes a bigger issue as you go down the list of countries.
MAJOR DISCUSSION POINT
Governance capacity building needs
Rafik Rikorian
3 arguments · 189 words per minute · 1391 words · 439 seconds
Argument 1
Current AI governance is effectively being done by a few frontier model companies, which is an untenable situation for global decision-making
EXPLANATION
Rikorian argues that having a small number of frontier AI companies in San Francisco making governance decisions for the entire world is problematic and unsustainable. He advocates for more distributed and democratic approaches to AI governance that don’t concentrate power in the hands of a few private entities.
EVIDENCE
He states ‘you don’t want four people in San Francisco making governance decisions for the entire world that doesn’t make a lot of sense’ and describes the current situation where ‘a few frontier model companies are effectively doing governance for all of us.’
MAJOR DISCUSSION POINT
Concentration of AI governance power
Argument 2
Open source models like Linux provide a template for shared AI infrastructure where countries can contribute to common code base while maintaining sovereignty
EXPLANATION
Rikorian draws parallels between the Linux operating system model and potential AI development, suggesting that countries could contribute to shared AI infrastructure while maintaining the ability to customize and control their own implementations. This approach would allow for both collaboration and sovereignty.
EVIDENCE
He explains that ‘every computer on the planet runs Linux’ and describes how companies like Google can take Linux and make Android, while others can deploy it in vending machines, showing how shared infrastructure can support diverse applications.
MAJOR DISCUSSION POINT
Open source as model for AI cooperation
AGREED WITH
Bella Wilkinson
Argument 3
Data trusts and federated learning models enable international collaboration while keeping sensitive data local and ensuring proper attribution and compensation
EXPLANATION
Rikorian proposes alternative architectures like data trusts and federated learning that allow countries to collaborate on AI development without sharing sensitive data across borders. These models enable training on distributed data while maintaining privacy and ensuring data providers receive proper attribution and compensation.
EVIDENCE
He describes Mozilla’s Data Collective marketplace for ethically sourced data, conversations with radio stations for licensed content, and Google’s federated learning approach for handwriting recognition that trained models without accessing personal data directly.
MAJOR DISCUSSION POINT
Privacy-preserving international AI collaboration
AGREED WITH
Rajesh Nambia, Halak Shirastava
Halak Shirastava
4 arguments · 69 words per minute · 931 words · 798 seconds
Argument 1
Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies
EXPLANATION
Shirastava argues that international technical standards provide a more accessible path for smaller AI companies compared to navigating multiple national regulatory frameworks. These standards are flexible and evolving, making them more suitable for the dynamic AI landscape and more affordable for startups to comply with.
EVIDENCE
She mentions NIST and ISO frameworks as examples and explains that country-by-country approaches ‘price out smaller companies’ while international frameworks that are ‘flexible and evolving’ help startups navigate the regulatory landscape.
MAJOR DISCUSSION POINT
International standards vs. national regulations for AI companies
AGREED WITH
Rajesh Nambia
DISAGREED WITH
Rajesh Nambia
Argument 2
Shared practices around risk mitigation and interoperability of resources are essential for the entire AI ecosystem
EXPLANATION
Shirastava emphasizes the importance of the entire AI ecosystem, including both large and small players, sharing documentation and practices around risk assessment, model evaluations, and multilingual benchmarks. This collaboration is essential for building consensus and enabling smaller players to participate effectively.
EVIDENCE
She mentions sharing ‘documents or evaluations around misuse or model capabilities’ and ‘red teaming or evals or multilingual benchmarks’ as examples of practices that need to be shared across the ecosystem.
MAJOR DISCUSSION POINT
Ecosystem-wide collaboration on AI safety practices
AGREED WITH
Rajesh Nambia, Rafik Rikorian
Argument 3
Capacity building requires shared evidence, performance benchmarks, and cross-border procurement policy networks rather than just workshops
EXPLANATION
Shirastava argues that effective capacity building for emerging economies goes beyond traditional training approaches and requires substantive sharing of evidence, results, and performance data. She also emphasizes the value of coordinated procurement policies that can open markets to global players.
EVIDENCE
She explains that capacity building ‘isn’t just running workshops’ but requires ‘shared evidence,’ ‘documents, results, performance, benchmarks’ and mentions the value of ‘cross-border network’ for ‘procurement policies or procurement rules.’
MAJOR DISCUSSION POINT
Substantive approaches to AI capacity building
Argument 4
Increasing participation from both companies and countries will drive convergence in standards and improve overall AI literacy across sectors
EXPLANATION
Shirastava expresses optimism that growing engagement from both private companies and governments in AI will lead to better standards convergence and increased AI literacy across society. This increased participation will strengthen the governance framework and improve understanding across different sectors.
EVIDENCE
She observes that ‘in the last few days, you’ve seen companies really, really excited about AI, but what you’ve also seen is countries very excited about AI’ and predicts this will increase participation in standards bodies like ITU and ISO.
MAJOR DISCUSSION POINT
Growing multi-stakeholder engagement in AI governance
Sabina Chofu
3 arguments · 142 words per minute · 1209 words · 508 seconds
Argument 1
Coalition building represents a pragmatic approach to AI governance in the current geopolitical environment, offering a realistic alternative to traditional multilateral cooperation
EXPLANATION
Chofu acknowledges that while the world lacks the multilateral cooperation of the past, coalition building around specific themes and common ground offers a practical path forward. She emphasizes building on areas where there is consensus rather than trying to achieve universal agreement.
EVIDENCE
She references Bella’s discussion of coalitions and notes the importance of identifying ‘where do we think we have common ground and what we think we can build on’
MAJOR DISCUSSION POINT
Pragmatic approaches to international AI cooperation
AGREED WITH
Bella Wilkinson
Argument 2
Emerging economies face critical barriers in accessing compute, data, and infrastructure necessary for AI adoption, requiring targeted solutions for inclusion
EXPLANATION
Chofu highlights that for emerging economies, access to fundamental AI resources like compute power, quality data, and basic infrastructure represents significant challenges. She frames this as both a barrier and an opportunity that needs to be addressed through international cooperation.
EVIDENCE
She specifically asks about ‘barriers most pressing, but also maybe opportunities for AI adoption in India and beyond’ and references the need to ‘up-level the playing field for smaller and developing nations’
MAJOR DISCUSSION POINT
AI accessibility challenges for developing nations
Argument 3
Translating international AI governance discussions into actual skills, capacity, and capability building for emerging economies requires moving beyond rhetoric to practical implementation
EXPLANATION
Chofu emphasizes the need to bridge the gap between high-level international policy discussions and concrete capacity building that delivers tangible benefits to emerging economies. She questions how to move from governance conversations to actual skill development and capability enhancement.
EVIDENCE
She asks ‘how do you translate that…chit-chat into actual skills, capacity, capability for emerging economies’ and references the need to bring ‘international policy actually delivering for emerging economies’
MAJOR DISCUSSION POINT
Implementation gap in AI governance initiatives
Audience
1 argument · 122 words per minute · 53 words · 26 seconds
Argument 1
Current systems lack sufficient transparency and accountability, with truth-revealing processes taking decades, raising concerns about system sustainability and the need for reform
EXPLANATION
The audience member expresses frustration with the slow pace of transparency in governance systems, using the example of classified files taking 30 years to be released. They question whether societies are resigned to such slow speeds of accountability and whether this will lead to system collapse and renewal.
EVIDENCE
They reference the Epstein files taking ‘30 years to come out in a place like America, the developed world’ as an example of slow transparency processes
MAJOR DISCUSSION POINT
Speed of transparency and accountability in governance systems
Agreements
Agreement Points
Open source models and local fine-tuning enable countries to maintain sovereignty while leveraging global AI capabilities
Speakers: Bella Wilkinson, Rafik Rikorian
Local fine-tuning of global models allows countries to adapt AI systems to their specific values and contexts
Open source models like Linux provide a template for shared AI infrastructure where countries can contribute to a common code base while maintaining sovereignty
Both speakers agree that open source approaches combined with local adaptation allow countries to benefit from global AI development while maintaining control over their specific implementations and values
International technical standards are preferable to fragmented national regulations for AI governance
Speakers: Halak Shirastava, Rajesh Nambia
Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies
Countries should lead with innovation-first mindset rather than regulation-first, and focus on sectoral governance for meaningful AI oversight
Both speakers favor flexible international standards over rigid national regulations, emphasizing that innovation should lead regulation and that sectoral approaches are more meaningful than horizontal governance
Shared resources and collaboration models are essential for addressing AI access barriers in developing countries
Speakers: Rajesh Nambia, Rafik Rikorian, Halak Shirastava
Regional compute consortiums, shared datasets between government-academia-industry, and cloud credits from hyperscalers show promising collaboration approaches
Data trusts and federated learning models enable international collaboration while keeping sensitive data local and ensuring proper attribution and compensation
Shared practices around risk mitigation and interoperability of resources are essential for the entire AI ecosystem
All three speakers agree that various forms of resource sharing – from compute consortiums to data trusts to shared practices – are necessary to enable broader participation in AI development
Coalition building around specific issues is more realistic than global consensus on AI governance
Speakers: Bella Wilkinson, Sabina Chofu
Global consensus on AI governance is unrealistic in the current geopolitical environment, but partial alignment on priority issues is possible through coalition building.
Coalition building represents a pragmatic approach to AI governance in the current geopolitical environment, offering a realistic alternative to traditional multilateral cooperation.
Both speakers acknowledge that while comprehensive global AI governance is unrealistic, focused coalitions around specific issues offer a pragmatic path forward
Similar Viewpoints
Both speakers recognize that developing economies face significant structural disadvantages in AI access and that meaningful capacity building requires substantive resource sharing rather than superficial training approaches
Speakers: Rajesh Nambia, Halak Shirastava
The AI divide will be much bigger than the digital divide, creating significant disadvantages for smaller and developing economies.
Capacity building requires shared evidence, performance benchmarks, and cross-border procurement policy networks rather than just workshops.
Both speakers are concerned about concentration of power in AI governance and advocate for more distributed, inclusive approaches that enable broader participation from smaller players
Speakers: Rafik Rikorian, Halak Shirastava
Current AI governance is effectively being done by a few frontier model companies, which is an untenable situation for global decision-making.
Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies.
All three speakers support models that combine global collaboration with local adaptation, allowing countries and organizations to benefit from shared resources while maintaining control over their specific implementations
Speakers: Bella Wilkinson, Rafik Rikorian, Halak Shirastava
Local fine-tuning of global models allows countries to adapt AI systems to their specific values and contexts.
Open source models like Linux provide a template for shared AI infrastructure where countries can contribute to a common code base while maintaining sovereignty.
Shared practices around risk mitigation and interoperability of resources are essential for the entire AI ecosystem.
Unexpected Consensus
Innovation should precede regulation in AI governance
Speakers: Rajesh Nambia, Halak Shirastava
Countries should lead with an innovation-first mindset rather than regulation-first, and focus on sectoral governance for meaningful AI oversight.
Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies.
This consensus is unexpected because it goes against the common policy approach of establishing regulatory frameworks before allowing innovation to proceed. Both speakers from different backgrounds agree that in AI, innovation should lead and regulation should follow with flexible, sector-specific approaches
Open source approaches can address both sovereignty and inclusion concerns simultaneously
Speakers: Bella Wilkinson, Rafik Rikorian
Local fine-tuning of global models allows countries to adapt AI systems to their specific values and contexts.
Open source models like Linux provide a template for shared AI infrastructure where countries can contribute to a common code base while maintaining sovereignty.
This consensus is unexpected because sovereignty and inclusion are often seen as competing priorities – countries wanting to maintain control versus wanting to participate in global systems. Both speakers show how open source models can satisfy both needs simultaneously
Overall Assessment

The speakers demonstrated strong consensus around pragmatic, collaborative approaches to AI governance that balance global cooperation with local sovereignty. Key areas of agreement include the value of open source models with local adaptation, the preference for flexible international standards over rigid national regulations, the necessity of resource sharing mechanisms for developing countries, and the realistic focus on coalition building rather than seeking global consensus.

High level of consensus on practical approaches, with speakers from different sectors (policy research, technology, industry association, private sector) converging on similar solutions. This suggests these approaches have broad stakeholder support and could be viable paths forward for international AI cooperation. The consensus implies that successful AI governance will likely emerge from bottom-up coalition building around specific technical and resource-sharing mechanisms rather than top-down multilateral agreements.

Differences
Different Viewpoints
Approach to AI governance – regulation vs innovation priority
Speakers: Rajesh Nambia, Halak Shirastava
Countries should lead with an innovation-first mindset rather than regulation-first, and focus on sectoral governance for meaningful AI oversight.
Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies.
Nambia advocates for innovation-first approaches with sectoral governance, while Shirastava emphasizes the importance of international technical standards and frameworks for regulatory compliance
Scale of governance approach – sectoral vs horizontal
Speakers: Rajesh Nambia, Halak Shirastava
Countries should lead with an innovation-first mindset rather than regulation-first, and focus on sectoral governance for meaningful AI oversight.
Technical standards frameworks like NIST and ISO are key for startups because they’re flexible and evolving, unlike country-by-country regulations that price out smaller companies.
Nambia argues for sector-specific governance approaches noting that harm understanding varies across sectors, while Shirastava focuses on horizontal international standards that work across sectors
Unexpected Differences
Role of current AI companies in governance
Speakers: Rafik Rikorian, Halak Shirastava
Current AI governance is effectively being done by a few frontier model companies, which is an untenable situation for global decision-making.
Increasing participation from both companies and countries will drive convergence in standards and improve overall AI literacy across sectors.
Unexpected because both work in the AI industry but have opposing views – Rikorian sees current company involvement in governance as problematic concentration of power, while Shirastava views increasing company participation as positive for standards convergence
Overall Assessment

The discussion showed relatively low levels of fundamental disagreement, with most speakers aligned on core challenges and the need for international cooperation. Main disagreements centered on implementation approaches rather than goals.

Low to moderate disagreement level. The speakers generally agreed on the problems (AI divides, need for cooperation, capacity building) but differed on solutions and priorities. This suggests good potential for finding common ground, as the disagreements are more tactical than strategic. The consensus on challenges combined with diverse solution approaches could actually strengthen policy development by providing multiple pathways forward.

Partial Agreements
All speakers agree on the need for international cooperation and shared resources in AI development, but disagree on the mechanisms – Wilkinson emphasizes coalition building around sovereignty messaging, Rikorian advocates for open source models as the primary vehicle, while Shirastava focuses on technical standards and industry collaboration
Speakers: Bella Wilkinson, Rafik Rikorian, Halak Shirastava
Coalition building around sovereignty and strategic autonomy messaging can drive cooperation where collective benefits outweigh individual capabilities.
Open source models like Linux provide a template for shared AI infrastructure where countries can contribute to a common code base while maintaining sovereignty.
Shared practices around risk mitigation and interoperability of resources are essential for the entire AI ecosystem.
Both agree on the importance of local adaptation and regional cooperation, but Wilkinson emphasizes fine-tuning global models for local contexts while Nambia focuses more on practical resource-sharing mechanisms like compute consortiums and cloud credits
Speakers: Bella Wilkinson, Rajesh Nambia
Local fine-tuning of global models allows countries to adapt AI systems to their specific values and contexts.
Regional compute consortiums, shared datasets among government, academia, and industry, and cloud credits from hyperscalers show promising collaboration approaches.
Both recognize the critical importance of capacity building, but Nambia emphasizes the need for governance talent in government sectors while Shirastava focuses on substantive evidence sharing and procurement policy coordination
Speakers: Rajesh Nambia, Halak Shirastava
Talent development in governance and regulatory understanding is critical, especially for countries with limited AI expertise.
Capacity building requires shared evidence, performance benchmarks, and cross-border procurement policy networks rather than just workshops.
Takeaways
Key takeaways
Global consensus on AI governance is unrealistic in the current geopolitical environment, but coalition building around specific priority areas offers a pragmatic path forward.
The AI divide will be significantly larger than the digital divide, creating substantial disadvantages for smaller and developing economies due to barriers in compute access, data quality, infrastructure, and skills.
Open source models and collaborative frameworks (similar to Linux) can provide shared AI infrastructure while allowing countries to maintain sovereignty through local fine-tuning and adaptation.
Technical standards frameworks like NIST and ISO are more practical for international cooperation than country-by-country regulations, especially for startups and smaller players.
Successful cooperation models include regional compute consortiums, shared datasets between sectors, federated learning, data trusts, and cloud credit programs.
Capacity building requires shared evidence and benchmarks rather than just workshops, and countries should prioritize innovation-first approaches over regulation-first mindsets.
Sectoral governance (healthcare, finance, etc.) is more meaningful than horizontal governance across all AI systems due to different harm profiles.
Talent development in AI governance and regulatory understanding is critical, particularly for developing nations with limited expertise.
Resolutions and action items
Mozilla is creating a Data Collaborative marketplace for ethically sourced, provenance-traced datasets to enable fair compensation and attribution.
Mozilla is pursuing conversations with radio stations globally to license their recordings and transcripts rather than allowing free scraping.
Industry players should contribute shared evidence, performance benchmarks, and documentation to lift up other players in the ecosystem.
Development of cross-border procurement policy networks to open markets to global players.
Focus on building technical standards through international frameworks rather than fragmented national regulations.
Unresolved issues
How to effectively scale coalition-building approaches to include the majority of the world’s ~200 countries beyond the top 5-10 economies.
Specific mechanisms for ensuring equitable access to compute resources and addressing the growing AI divide.
How to balance sovereignty concerns with the need for international cooperation and shared resources.
Methods for ensuring adequate representation of developing world populations in AI training data.
Practical implementation of federated learning and data trust models at scale.
How to develop sufficient AI governance talent in countries with limited technical expertise.
Addressing the fundamental tension between innovation-first and regulation-first approaches across different national contexts.
Suggested compromises
Coalition building around specific technical areas (verification, chip hardware, risk mitigation) rather than seeking comprehensive global consensus.
Shared infrastructure models where countries contribute to common AI foundations but maintain sovereignty through local fine-tuning.
Flexible technical standards frameworks that evolve with industry input rather than rigid national regulations.
Federated learning approaches that allow international collaboration while keeping sensitive data within national borders.
Data trust models that enable monetization and attribution for data providers while allowing broader access for AI development.
Sectoral governance approaches that recognize different risk profiles across industries rather than one-size-fits-all regulations.
Public-private partnerships for sharing compute resources, datasets, and cloud credits to reduce barriers for developing economies.
Thought Provoking Comments
Global consensus on how to govern AI is a no-go. It is not going to happen in this geopolitical environment. However, partial alignment on priority issue areas is possible, and it’s pragmatic to throw our weight behind these smaller gatherings that we can then scale using the multilateral format.
This comment is deeply insightful because it cuts through the optimistic summit rhetoric to present a stark geopolitical reality. Wilkinson acknowledges the fundamental constraints of current international relations while offering a pragmatic alternative – coalition building around specific issues rather than comprehensive global agreements. This reframes the entire governance discussion from idealistic to realistic.
This comment set the pragmatic tone for the entire discussion. It shifted the conversation away from broad multilateral aspirations toward practical coalition-building strategies. Subsequent speakers built on this framework, with Rajesh discussing regional consortiums and shared resources, and others focusing on specific technical standards rather than comprehensive governance frameworks.
Speaker: Bella Wilkinson
I think the AI divide is going to be much, much bigger than the digital divide which we saw, because the biggest difference is that at least in the digital divide, the access and so on whereas this is all about agency and then it can completely put you at a different back foot.
This observation is profound because it distinguishes between mere access (digital divide) and fundamental agency (AI divide). Nambia identifies that AI isn’t just about having technology – it’s about having the power to shape and control it. This insight elevates the discussion beyond technical infrastructure to questions of sovereignty and self-determination.
This comment deepened the conversation by introducing the concept of ‘agency’ as distinct from ‘access.’ It influenced subsequent discussions about sovereignty, with other speakers picking up on themes of local fine-tuning, indigenous data models, and the importance of countries maintaining control over their AI development rather than just consuming foreign AI services.
Speaker: Rajesh Nambia
For all practical purposes, every computer on the planet runs Linux… I think there’s an analogy here of being able to use shared infrastructure, shared software infrastructure as a collaboration mechanism that we can all pool resources together but still have sovereignty on top of it.
This analogy is brilliant because it provides a concrete, successful model for international technological cooperation that maintains sovereignty. The Linux example demonstrates how countries can contribute to and benefit from shared infrastructure while retaining control over their implementations. It offers a tangible pathway forward rather than abstract cooperation concepts.
This comment introduced a paradigm shift from viewing AI cooperation as zero-sum to seeing it as potentially collaborative. It sparked discussions about open-source models, federated learning, and local fine-tuning throughout the rest of the conversation. Other speakers began referencing specific examples like the Southeast Asian Languages model and data collectives, building on this foundational concept.
Speaker: Rafik Rikorian
We’re living in this world where there are a few frontier model companies that are effectively doing governance for all of us in some way, shape, or form… you don’t want four people in San Francisco making governance decisions for the entire world.
This comment crystallizes a critical democratic deficit in AI governance that often goes unstated. By highlighting how a small number of private companies are making decisions that affect billions globally, Rikorian exposes the fundamental legitimacy crisis in current AI governance structures.
This observation reinforced the urgency around finding alternative governance models and gave moral weight to the technical solutions being discussed. It connected the technical discussions about open source and federated learning to broader questions of democratic governance and global equity, elevating the stakes of the conversation.
Speaker: Rafik Rikorian
I believe that countries and especially the countries which we talked about in terms of more from an inclusion point of view, you’ve got to lead with innovation first mindset because I think regulation is required and certainly needed, but I think innovation is probably needed more in some sense.
This insight challenges the conventional wisdom that governance should lead with regulation. Nambia argues that for developing countries, fostering innovation should take priority over regulatory frameworks. This perspective recognizes that over-regulation could stifle the very capabilities these countries need to develop to participate meaningfully in the AI ecosystem.
This comment introduced a nuanced perspective on the relationship between innovation and regulation, particularly for developing economies. It influenced the discussion toward more flexible, adaptive governance approaches and reinforced earlier points about the need for countries to build indigenous capabilities rather than just consuming foreign AI technologies.
Speaker: Rajesh Nambia
Overall Assessment

These key comments fundamentally shaped the discussion by establishing a realistic, pragmatic framework for AI governance that moved beyond idealistic multilateral aspirations. Wilkinson’s opening reality check set the tone for practical coalition-building, while Nambia’s distinction between access and agency deepened the analysis of what’s truly at stake for developing nations. Rikorian’s Linux analogy provided a concrete model for collaborative sovereignty, shifting the conversation from theoretical to actionable. Together, these insights created a coherent narrative arc: from acknowledging geopolitical constraints, to understanding the stakes for developing nations, to identifying viable pathways forward through open-source collaboration and innovation-first approaches. The discussion evolved from pessimistic realism to cautious optimism, with each speaker building on these foundational insights to explore specific mechanisms for inclusive AI governance.

Follow-up Questions
How can we bring rivals and competitors around the same table in AI governance given current geopolitical tensions?
This addresses the core governance puzzle of facilitating cooperation between states with minimal alignment of interests, particularly in the context of US-China AI competition
Speaker: Bella Wilkinson
How do we define open standards and open interfaces for AI to enable global collaboration?
This is crucial for creating a ‘LAMP stack equivalent’ for AI that would allow countries to maintain sovereignty while contributing to shared infrastructure
Speaker: Rafik Rikorian
What would effective data trust models look like for different regions and communities?
Building on examples like Hawaiian genomic data collectives, this explores how communities can maintain control over their data while participating in AI development
Speaker: Rafik Rikorian
How can federated learning be implemented for international AI collaboration in sensitive sectors like healthcare?
This would allow countries to contribute data and compute resources without releasing sensitive information across borders
Speaker: Rafik Rikorian
What specific procurement policies could enable cross-border AI cooperation for emerging economies?
This addresses how policy frameworks can open markets and enable global players to participate in emerging economy AI development
Speaker: Halak Shirastava
How can sectoral AI governance be developed for different industries like healthcare and financial services?
This recognizes that meaningful governance requires understanding sector-specific harms and applications rather than horizontal approaches
Speaker: Rajesh Nambia
What training and capacity building programs are needed for government officials to understand AI governance?
This addresses the talent gap in public sector understanding of AI systems and their potential harms, particularly in developing countries
Speaker: Rajesh Nambia
How can the $24 billion in potential savings from switching to open source AI models be realized?
This economic argument for open source adoption needs practical implementation strategies to achieve the projected cost savings
Speaker: Rafik Rikorian
What mechanisms can ensure equitable access to compute resources and cloud credits for developing nations?
This addresses the fundamental infrastructure barriers that create the ‘AI divide’ between developed and developing countries
Speaker: Rajesh Nambia
How can multilingual AI models like the Southeast Asian Languages Under One Network be scaled and replicated?
This explores how successful regional AI collaborations can serve as templates for other geographic and linguistic communities
Speaker: Bella Wilkinson

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Transforming Health Systems with AI From Lab to Last Mile


Session at a glance: summary, keypoints, and speakers overview

Summary

This discussion focused on the responsible development and implementation of AI in healthcare, featuring a demonstration of an end-to-end AI healthcare solution and a panel of global health experts and funders. Vikalp Sahni from EkaCare demonstrated how AI can address three key healthcare challenges: the fragmentation of information, the difficulty of collecting patient history, and doctors’ administrative burden. The demonstration showed a 65-year-old diabetic patient named Neeti using AI to summarize her health records, communicate symptoms in her local language, book appointments, and receive care through an AI-enhanced electronic medical records system that could detect drug allergies and generate prescriptions.


The panel discussion brought together regulators, funders, and health experts to address the balance between accelerating innovation and ensuring safety. Dr. Richard Rukwata, a medical regulator from Zimbabwe, discussed how AI could help streamline regulatory processes while maintaining accountability. The panelists emphasized that technology represents only 10% of successful AI implementation, with the remaining 90% involving people and ecosystems. They stressed the critical importance of keeping humans in the loop and conducting rigorous real-world evaluations of AI systems.


A major announcement was made regarding a collaborative funding initiative between major health foundations including Wellcome Trust, Gates Foundation, and Novo Nordisk Foundation. This partnership aims to generate real-world evidence on AI integration in healthcare systems, particularly in low- and middle-income countries. The initiative will focus on rigorous evaluations of AI systems integrated into clinical decision-making, examining costs, effectiveness, and unexpected challenges. The discussion concluded with hopes for next year’s summit to feature honest conversations about what works and what doesn’t in AI healthcare implementation, while maintaining the essential human element in medical care.


Keypoints

Major Discussion Points:

AI-powered healthcare solutions demonstration: Vikalp Sahni presented EkaCare’s end-to-end AI system that addresses healthcare fragmentation, from patient symptom collection through multilingual AI assistants to automated medical record generation and prescription management with safety alerts.


Regulatory challenges in the AI era: Discussion of how regulators must balance accelerating innovation with maintaining safety standards, including the potential for AI tools to help regulators themselves process applications more efficiently while maintaining accountability.


Funding collaborative for real-world AI evidence: Announcement of a major joint funding initiative by global health foundations (Wellcome Trust, Gates Foundation, Novo Nordisk Foundation) to generate rigorous real-world evidence on AI integration in healthcare systems, particularly in low- and middle-income countries.


Human-in-the-loop approaches and safety considerations: Emphasis on the critical importance of maintaining human oversight in AI healthcare applications, including multi-agent architectures, continuous medical team involvement, and transparent decision-making processes, especially in high-anxiety situations like maternal care.


Data privacy and ethical implementation: Discussion of technical and policy approaches to protecting sensitive health data while enabling AI innovation, including federated learning, encryption, and adherence to regulatory frameworks like HIPAA and India’s DPDP Act.
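The federated-learning idea referenced in this discussion point can be sketched in a few lines: each site computes a model update on its own data, and only model parameters (never patient records) are shared and averaged. This is a toy illustration under simplified assumptions (plain averaging of weight vectors, no secure aggregation or differential privacy); it is not any speaker's actual system.

```python
# Toy sketch of federated averaging: sites train locally, the server only
# ever sees model weights. All values below are illustrative.

def local_update(weights: list[float], local_gradient: list[float], lr: float = 0.1) -> list[float]:
    """One gradient step computed entirely on a site's private data."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(site_weights: list[list[float]]) -> list[float]:
    """Server averages weight vectors; raw data never leaves the sites."""
    n = len(site_weights)
    return [sum(ws) / n for ws in zip(*site_weights)]

global_model = [0.0, 0.0]
# Two hospitals compute updates locally (the gradients stand in for training).
site_a = local_update(global_model, [1.0, -2.0])
site_b = local_update(global_model, [3.0, 2.0])
new_global = federated_average([site_a, site_b])
print(new_global)  # approximately [-0.2, 0.0]
```

Real deployments layer encryption and aggregation protocols on top of this loop, which is why the panel paired federated learning with encryption and regulatory frameworks such as HIPAA and India's DPDP Act.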


Overall Purpose:

The discussion aimed to explore responsible AI development and implementation in healthcare, bringing together technology developers, regulators, and major global health funders to address how AI can improve healthcare delivery while maintaining safety, privacy, and human-centered care. The session focused on moving beyond hype to practical, evidence-based approaches for integrating AI into health systems.


Overall Tone:

The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s potential in healthcare but was tempered by acknowledgment of serious challenges and risks. The tone became increasingly focused on practical solutions and partnerships, with speakers emphasizing the need for rigorous evaluation, human oversight, and responsible development. The session concluded on a hopeful note about future collaboration while maintaining realistic expectations about the complexities involved in healthcare AI implementation.


Speakers

Speakers from the provided list:


Vikalp Sahni: Works at EkaCare, involved in building end-to-end healthcare solutions using AI technology


Sindura Ganapathi: Conference moderator/host; has a veterinary background, works with regulatory agencies, and was involved in G20 meetings on the Indian side


Charlotte Watts: Executive Director of Solutions at Wellcome Trust, extensive career in healthcare, HIV, gender-based violence, epidemiology, mathematics, former UK government official who participated in G20 meetings


Trevor Mundel: Dr. Trevor Mundel (medical degree and Ph.D. in mathematics), Rhodes Scholar, extensive experience in the pharmaceutical industry and global health, works as a funder of innovation


Richard Rukwata: Dr. Richard Rukwata, Director General of Medicines Control Authority of Zimbabwe, Chief Regulator, involved in regulatory harmonization work in Africa


Monika Sharma: Dr. Monika Sharma, Lead for No One Artists India Foundation, background in the biomedical field and science innovation, extensive experience in funding programs including the Newton Fund, IRTG, and India’s BioPharma Mission Program


Participant: Multiple unidentified audience members who asked questions during the session


Additional speakers:


None identified beyond the provided speaker list.


Full session report: comprehensive analysis and detailed insights

This comprehensive discussion explored the responsible development and implementation of artificial intelligence in healthcare, featuring both a practical demonstration of AI healthcare solutions and a strategic panel discussion among global health experts, regulators, and major funding organisations. The session, moderated by Sindura Ganapathi, addressed critical questions about balancing innovation acceleration with safety requirements whilst maintaining human-centred approaches to healthcare delivery.


The session began with personal reflections from panelists about their feelings regarding AI development. Sindura referenced the Anthropic CEO blog and shared her own mixed emotions about AI advancement, while Monika Sharma from No One Artists India Foundation shared an anecdote about her 6.5-year-old child asking whether AI robots would be good or bad, highlighting the human concerns surrounding AI development.


AI Healthcare Solution Demonstration

Vikalp Sahni from EkaCare presented a compelling end-to-end AI healthcare solution designed to address three fundamental challenges in modern healthcare delivery. The demonstration centred on a case study of Neeti, a 65-year-old diabetic patient with a February 14th appointment, illustrating how AI can transform the entire healthcare journey from initial symptom assessment to final prescription delivery.


The first challenge addressed was the fragmentation of healthcare information and delivery systems. Traditional healthcare often involves disconnected processes from appointment booking to vital sign collection, creating inefficiencies and potential gaps in care. EkaCare’s solution provides seamless integration across all touchpoints, enabling patients to navigate the healthcare system more effectively.


The second challenge focused on the difficulty patients face in communicating their medical history. Rather than requiring patients to fumble through physical files and recall complex medical information, the AI system lets them photograph medical records, which are then digitally processed and summarised. The system incorporates ABHA (Ayushman Bharat Health Account), the digital health identity provided by the Indian government, enabling comprehensive patient health record management.


The third challenge addressed the critical issue of doctors spending excessive time on administrative tasks rather than patient interaction. The demonstration showed how AI-powered medical scribes (EkaScribe) can capture doctor-patient conversations, convert them into structured medical notes, and automatically populate electronic medical records, freeing physicians to focus on patient care and counselling.


The technical demonstration revealed capabilities allowing patients to communicate symptoms in their local language whilst receiving contextually appropriate prompts to guide the conversation. The system demonstrated advanced safety features, including drug allergy detection that prevented potentially harmful prescriptions, automatically alerting the physician when prescribed medications conflicted with patient medical history. In the demonstration, the system changed a prescription from amoxicillin to clindamycin based on the patient’s allergy profile.


Sahni emphasised the importance of multiple AI agents working together rather than single-agent systems, particularly for complex healthcare workflows. This approach involves multiple AI agents working collaboratively, including grounding agents whose role is ensuring other agents remain within appropriate boundaries. This technical architecture, combined with continuous human oversight through a dedicated medical team of 10 members (which is growing), represents a sophisticated approach to maintaining safety whilst leveraging AI capabilities.
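One minimal way to picture the grounding-agent pattern described above is a pipeline in which a task agent proposes an action, a separate grounding agent checks it against an allow-list, and anything out of bounds is escalated to the human medical team. The rule-based agents below are hypothetical stand-ins for what would in practice be model-backed agents; none of the class or function names come from EkaCare's system.

```python
# Hypothetical sketch of the grounding-agent pattern: a task agent proposes,
# a grounding agent approves or rejects, and rejections go to a human queue.
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str       # e.g. "book_appointment", "self_care_advice"
    payload: dict = field(default_factory=dict)

class TriageAgent:
    """Task agent: proposes the next step for a patient interaction."""
    def propose(self, symptoms):
        if "chest pain" in symptoms:
            # Suggesting a drug is outside a triage agent's remit.
            return Action("prescribe", {"drug": "aspirin"})
        if "foot wound" in symptoms:
            return Action("book_appointment", {"specialty": "diabetology"})
        return Action("self_care_advice", {"note": "monitor symptoms"})

class GroundingAgent:
    """Keeps other agents inside an explicit allow-list of safe actions."""
    ALLOWED = {"book_appointment", "self_care_advice"}

    def approve(self, action):
        return action.kind in self.ALLOWED

def run_pipeline(symptoms, human_queue):
    """Propose an action; return it if grounded, else escalate to humans."""
    action = TriageAgent().propose(symptoms)
    if GroundingAgent().approve(action):
        return action
    human_queue.append(action)  # out-of-bounds: the medical team reviews it
    return None

queue = []
print(run_pipeline(["fever", "foot wound"], queue).kind)  # book_appointment
run_pipeline(["chest pain"], queue)                       # escalated
print(len(queue))                                         # 1
```

The design point is that the grounding agent never generates actions itself; it only constrains, which keeps the safety boundary auditable even when the proposing agents are opaque models.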


Regulatory Perspectives and Challenges

Dr Richard Rukwata, Director General of Zimbabwe’s Medicines Control Authority, provided crucial insights into the regulatory challenges facing AI implementation in healthcare. He articulated the fundamental tension regulators face: intense pressure from industry to accelerate approval processes whilst maintaining ultimate responsibility when things go wrong. This dual pressure creates a particularly challenging environment in the rapidly evolving AI landscape.


Rukwata highlighted how AI could potentially serve as a solution to regulatory bottlenecks rather than merely creating new challenges. His organisation is currently working with a Gates Foundation grant to develop AI applications for screening marketing authorisation applications. The vision involves creating neutral AI tools that serve both regulators and industry, helping both parties reach common positions more efficiently without favouring either side.


The regulatory perspective revealed an interesting paradox: whilst AI creates new complexities requiring oversight, it simultaneously offers tools to make regulatory processes more efficient and consistent. Rukwata noted that AI systems don’t have emotional biases or preferences, potentially making them valuable for creating more objective evaluation processes.


However, the discussion also acknowledged that regulatory jobs may be among the last to be replaced by AI, as society requires human accountability when things go wrong. Rukwata jokingly referenced the “Moonshot podcast” in noting that regulatory positions might offer job security in an AI-dominated future, underscoring the fundamental need for human responsibility in AI governance.


Global Health Funding Collaboration

A major announcement during the session revealed a groundbreaking collaborative funding initiative between three organisations: Wellcome Trust, the Gates Foundation, and the Novartis India Foundation. This partnership represents a significant shift towards coordinated approaches in AI healthcare funding, addressing the fragmentation that has historically characterised global health innovation support.


Charlotte Watts from Wellcome Trust explained that the initiative specifically targets the critical evidence gap between promising AI efficacy studies and rigorous real-world evaluations. Whilst numerous studies demonstrate AI’s potential in controlled environments, there’s a significant shortage of randomised controlled trials assessing AI interventions when actually implemented in healthcare systems.


The funding call focuses particularly on low- and middle-income countries, recognising that these settings often face the greatest healthcare challenges whilst having the least resources for implementing and evaluating new technologies. The initiative will support rigorous evaluations examining not just clinical outcomes but also system integration challenges, cost-effectiveness, and unexpected implementation barriers.


Trevor Mundel from Gates Foundation emphasised that global health has been constrained by over-reliance on modelling and simulation due to lack of primary data. This collaborative funding approach aims to generate the real-world evidence necessary to move beyond theoretical models to practical implementation guidance.


Monica Sharma from the Novartis India Foundation highlighted how the coordinated approach reduces fragmentation and creates shared standards, eliminating the burden on researchers and developers who previously faced different criteria, timelines, and expectations from multiple funders. This alignment represents a commitment to shared standards and recognition that real-world evaluation is foundational rather than optional.


Human-Centred AI Development and Implementation Challenges

A recurring theme throughout the discussion was the critical importance of maintaining human involvement in AI healthcare systems. Trevor Mundel made a particularly insightful observation that whilst people frequently acknowledge that technology represents only 10% of AI applications, with the remaining 90% involving people and ecosystems, discussions invariably return to focusing on technology.


The discussion explored various models for human involvement, from technical architectures with medical team oversight to research standards requiring ethical clearance and anonymity protections. The human-centred approach extends beyond technical implementation to address emotional and psychological aspects of healthcare delivery. A participant raised important questions about building AI agents that are not only intelligent but reassuring in high-anxiety environments like maternal and infant care, highlighting the need for AI systems that can provide emotional support.


Vikalp Sahni identified key technical challenges including building systems that work across multiple languages and generating verifiable data for large-scale model training. He also raised the important question of who evaluates AI capabilities being built in healthcare, highlighting the need for standardised evaluation frameworks.


A particularly insightful contribution came from a participant working on geospatial AI models for tuberculosis case finding, who distinguished between clinical decision support and operational decision support. This participant highlighted that whilst patients entering healthcare systems generally receive care, there are “silent patients” in communities who remain undetected and underserved, expanding the discussion beyond optimising existing healthcare delivery to addressing fundamental equity and access issues.


Data Privacy and Balancing Speed with Safety

The discussion addressed critical concerns about data privacy and ethical implementation of AI in healthcare. Vikalp Sahni outlined EkaCare’s approach to data privacy, emphasising adherence to established frameworks including HIPAA for healthcare data and India’s Data Protection and Privacy Act, noting that customers increasingly require continuous certification from privacy authorities.


The technical discussion explored emerging approaches like federated learning, which allows local data to remain private whilst contributing to model improvement. Trevor Mundel noted that whilst this approach shows promise, regulatory frameworks haven’t fully addressed whether such data contribution constitutes disclosure under current policies.
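As a rough illustration of the idea, federated learning keeps raw records on-site and shares only model parameters, which a coordinator aggregates. The FedAvg-style toy below fits a one-parameter linear model per site and averages the weights by sample count; it is a conceptual sketch under simplified assumptions, not a clinical-grade implementation, and the function names are invented for this example.

```python
# Toy federated-averaging sketch: each site trains on its own private data
# and shares only its fitted parameter, never the underlying records.

def local_fit(xs, ys):
    """Least-squares slope through the origin on one site's private data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def federated_average(sites):
    """Combine per-site weights, weighted by sample count, without pooling data."""
    total = sum(len(xs) for xs, _ in sites)
    return sum(local_fit(xs, ys) * len(xs) for xs, ys in sites) / total

# Two hospitals whose raw data never leaves the premises:
site_a = ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # locally w = 2.0
site_b = ([1.0, 2.0], [3.0, 6.0])            # locally w = 3.0
print(federated_average([site_a, site_b]))   # 2.4
```

In practice the same loop runs over neural-network weights, often with secure aggregation, and, as noted above, whether even these parameter updates constitute disclosure is still an open regulatory question.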


A central tension throughout the discussion involved balancing the urgency of addressing healthcare challenges with the need for careful, safe AI implementation. Trevor Mundel articulated this paradox, suggesting that “completely focusing on fast might be slow” because premature deployment could create setbacks that ultimately delay beneficial applications. He drew parallels to self-driving vehicle development, where single fatal accidents can derail entire programmes despite potentially superior safety records.


Interactive Elements and Future Directions

The session included interactive elements, with Sindura mentioning QR codes for audience engagement and sharing her background as a veterinarian, suggesting potential applications in pet care. The discussion concluded with participants sharing aspirations for next year’s AI Summit in Geneva.


Trevor Mundel expressed hope for seeing the next iteration of patient-facing AI agents that would be completely transparent, allowing users to understand decision-making processes whilst maintaining confidence in critical areas like drug contraindications. Charlotte Watts, who mentioned previous interactions with Sindura during G20 meetings, hoped to see funded partners presenting operational results rather than funders discussing plans, representing a shift from theoretical discussions to practical implementation experiences.


Richard Rukwata called for increased collaboration between industry and regulators, recognising that both parties ultimately want safe, effective healthcare solutions. Monica Sharma concluded with a powerful reminder that regardless of AI advancement, human doctors should retain final decision-making authority, emphasising the importance of maintaining the human element in healthcare.


The session concluded with Sindura presenting a souvenir from India, maintaining the collaborative and international spirit of the discussion.


Conclusion

This comprehensive discussion represented a mature approach to AI in healthcare that acknowledges both transformative potential and significant challenges. The session successfully moved beyond typical technology demonstrations towards substantive examination of implementation challenges, regulatory requirements, and human-centred design principles.


The collaborative funding announcement represents a significant step towards coordinated, evidence-based approaches to AI healthcare development. The emphasis on human oversight, real-world evidence generation, and careful implementation reflects a field that’s learning from other technology sectors whilst recognising the unique sensitivities of healthcare applications. The discussion demonstrated that successful AI implementation requires not just technological advancement but sophisticated understanding of regulatory frameworks, funding mechanisms, human psychology, and system integration challenges, all whilst preserving human agency and maintaining safety standards.


Session transcriptComplete transcript of the session
Vikalp Sahni

All of us here would have visited doctors at some point in time or have been sick. Anyone who has never visited a doctor, please raise your hand. So, practically everyone. So let's imagine: how was your experience when you visited a doctor? How did you express your symptoms? How did the doctor interact with you, and how did the interaction happen with the medical systems where EMRs come in? What we are trying to show today, and what we've built at EkaCare, is an end-to-end solution that solves three key challenges that we face today. One is the fragmentation of information and care delivery, be it right from taking an appointment to taking vitals. The second is how easily and comfortably you can tell about your history, rather than fumbling through lots of files, and how easily it can be collected, collated and summarised.

And last but not least, we would want doctors to spend time with us, not with machines writing prescriptions, but rather talking to us, counseling us, connecting with us. So the solution that we have built solves for all these three challenges. Obviously, thanks to the advancement in AI, we have been able to do a lot of this due to the capabilities that we have built in-house. So I'm going to narrate a story. This story is of Neeti. She's a 65-year-old female, has diabetes, and she wants to now see how she can do the whole end-to-end care delivery. To start off, Neeti is quite digital savvy. She has actually created her ABHA address.

ABHA is the digital identity that the Indian government provides. This digital identity allowed her to collect a lot of her medical records into the app, which is her PHR, or patient health record, app. She has also taken many photographs so that the AI can read through these photographs and collect her medical history in a digital format so that it can be summarized. Now what happens is Neeti wants to talk to an AI, which is a med assist, or an assistant for Neeti. She goes ahead, she just picks up a prompt, says summarize my health. What is happening now is all of Neeti's health is getting summarized. Neeti would know these are the kinds of things that have come up from the medical records.

Also, there is a prompt that Neeti would get, which is very, very relevant to the kind of things that Neeti is supposed to talk about. But today Neeti came for a very different purpose. And now, in a local language, she's talking to the bot. In English, she's expressing that she has a fever and there is a wound in her foot. What the AI would start doing now is try to understand more about this specific condition. Where is the wound? Is it swollen? Is there any kind of smell coming from it? And all of this is happening in the local language that Neeti understands. More importantly, it is not restricting Neeti to only typing or talking.

There are these prompts coming in that will ease the interaction for a 65-year-old female. After collecting more information, such as the mobile number, the AI would identify that this is an important case and this needs a doctor's intervention. But which doctor's intervention? At which clinic? On which day? All of this information will now get collected. This will be displayed. So in this case, Neeti is being told that there is availability with these two doctors on the 14th of February. But she can always say, okay, I want to do it on a different day. She picks the doctor, and as soon as she picks the doctor, the appointment gets created. Neeti can actually do all of this by typing or by acting on the prompts as well.

So this is how all the information that Neeti wanted to share with the doctor gets collected, gets summarized, and now the appointment is created. The next part of the story is when Neeti visits the doctor's clinic. This is the doctor's view, where a doctor is looking at a classical EMR screen; but how this EMR screen is fitted with these AI utilities that can help a doctor get a better outcome is what we want to demonstrate. If you see, the current EMR and the current prescription for Neeti are completely empty. There is nothing there. The doctor is looking at the past history of Neeti as well as the current ailments and current issues that have been listed.

The AI also ensured that it not only figures out the important information for the patient, but here a doctor is also able to understand and get to know more about Neeti: that there is uncontrolled diabetes. So this is the kind of person that he's dealing with. But more importantly, it would be very hard for a doctor to start filling in all of this information. During the consultation, the doctor just starts the audio-based EkaScribe, which is now recording the interaction between doctor and patient. These interactions get converted into medical notes, and these medical notes are verifiable medical notes that doctors would see. Again, this entire thing has come out just from the interaction between the doctor and the patient.

The doctor has to just do "copy to EMR pad". As soon as the copy to EMR pad happens, this entire information gets filled in: whatever has been discussed, all the medication that the doctor wanted to prescribe. But here we see that during the consultation, the doctor prescribed amoxicillin, while the patient's medical history said that she is allergic to amoxicillin. The capable AI-based EMR is now alerting that the patient is allergic to amoxicillin. Without having to dig deeper, the doctor can very easily go ahead now and change this medication to provide for a better outcome as well as to reduce medical errors.

So it's changed from amoxicillin to clindamycin. As it changed, the prompt also changed. If you look at the information, it's all filled in. The PDF view for the patient will have the entire medications, everything, created in the local language. There is a translation of all the remarks, advice, everything, in the language that the patient understands. And at the click of a button, this information goes and sits in the patient's PHR app, creating another node in her medical record that can be used for further consultations and any other ailments. So that is the power of AI and the utilities that we are seeing today: the care process going from fragmented to consolidated, understanding the patient's entire medical history, and making sure that the doctor's time is saved while he sees more patients and more medical data is captured.

Today, all of that is possible. But yes, there are challenges. How to build these things at scale for multiple languages; how to generate the data so that your models are verifiable at that large scale; who is evaluating these capabilities that are being built. All of these are challenges that we as developers face. And I'm looking forward to building more and working more in this domain.

Sindura Ganapathi

I'll ask you to take a seat. When you said, is there anyone who has not visited a doctor, instinctively I was asking, does a veterinary doctor count? Because I'm a veterinarian by background. And that's only a half joke, actually. In the pet care industry, there is real value and business to be made. So just a thought. And on a more serious note, you could change the name of the lady and adjust the age, et cetera, and that could be my mother. I deal with this personally as a caregiver: she has all these conditions, and we deal with so many papers. Every interface you mentioned is a leaf out of my personal life. So thank you for thinking about building a solution here.

I will invite my panelists one by one; please join us on the stage. First, Dr. Richard Rukwata. He is the chief regulator, the director general of the Medicines Control Authority of Zimbabwe. I have very high regard for regulators because I have been working on our regulatory agency and its streamlining, and I can see how difficult a job that is. And the fact that you have seen this through to ML3 recognition, that's a wonderful accomplishment. Congratulations. Not an easy job. And also, you are involved in the regulatory harmonization work of Africa, and there are a lot of interesting thoughts you will hopefully be able to share on that. Next, I would like to invite Professor Charlotte Watts.

The last time we saw each other was at the G20. Hopefully it brings back memories. Yes, happy ones. I'd like to keep it that way. She has had an extensive career across healthcare, HIV, gender-based violence, epidemiology, and mathematics, and deep experience working in the UK government, which was the capacity in which she came for the G20 meetings that I was involved in from the India side. So it's a pleasure to have you back, Charlotte. Now she's working at Wellcome Trust as Executive Director of Solutions, and I would love to hear more about how you are thinking about these things. And next I would like to invite Dr. Monica Sharma. I happened to meet her just now, and she is the lead for the Novartis India Foundation.

And welcome. Her background is also in the biomedical and science innovation fields, but she also has extensive experience putting together funding programs, whether it is the Newton Fund, the IRTG (Germany's International Research Training Groups), or India's BioPharma Mission program. So all of these, I'm sure, will come in very handy in your current role, and we would love to hear your thoughts on the topic today. And last but not least, my dear friend and mentor, Dr. Trevor Mundel. I should say Dr. Dr. Trevor Mundel. He has an unusual background; people who work with him smile when I say unusual. He did a medical degree and then figured he wanted a Ph.D.

in mathematics. He is a Rhodes Scholar and has extensive experience in the pharmaceutical industry, from early research to development, and more than a decade of experience in global health. With that, we will get started. To begin with, I think, hopefully, you all have mics. For me personally, coming here after having read the blog that went out very famously by the CEO of Anthropic, I came in with a very bleak feeling, to be very honest. It's kind of depressing: what are we creating? But I have to say, the last two or three days have been energizing, seeing all the chaos in terms of interactions, people talking to each other, hustle, just hustle, and people excited about the products they are building. It brought back memories of the vegetable market where I grew up, where there is so much life, right? People are trying to sell something, people are trying to buy something, people are talking. And the reason I talk about that as a happy thing is that it's nice to see so many human beings. That's what came to my mind against the backdrop of that blog. So I'd just love to hear from you: what was your feeling, as human beings, over the last two or three days? Is there anything that you want to particularly share? You have been here.

You saw all of this. What did that make you feel? Because I think going forward, this feeling of human beings, I think, will have a currency of its own. Anybody want to volunteer and say something? An open-ended question.

Charlotte Watts

Yes, I'm happy to jump in. So I just got here yesterday, so I actually missed the early start of the week, which I heard was fantastic because you had the youth here, as well as, you know, older people who've been in global health, or the global sort of sphere, or in the AI world for longer. So that mix, and the drive of that kind of energy, I think, is what I was hearing people tell me about the start of the week. But I've just been here, yeah, sort of last night and today. And for me, what I feel quite reassured about... I wonder if it's because, you know, the change

is so profound, and I suppose I was sort of wary because there's so much hype, um, and then clearly the risks are being articulated. But what I feel reassured about, in going to a number of sessions, is that actually we're starting to have the more meaningful conversations about what this really means, getting beyond either the hyper-sell or the hyper-fear to actually: how do we navigate this space, and how do we navigate this as a global community? Because this is not something that's one country's problem to fix. So actually I'm feeling that, you know, this is a really important conference, and we're starting to get into the nitty-gritty of how on earth we move forward in the best way.

Sindura Ganapathi

Anybody else want to share? Trevor and then Monica.

Trevor Mundel

Well, Sindura, you know, what I've heard frequently in this meeting, and I hear it quite often in the AI application space, is that technology is just 10% of the exercise in applications of AI. The rest is really around people and ecosystems. And as soon as people say that, they then go back to talking about technology. So I am interested in how we do more than just pay lip service to this notion that we really need to think about the ecosystem and the people involved, probably more than the technology itself. And defining the actual role for humans in the loop is going to be, I think, you know, as important as any of the technological advances.

Monika Sharma

So, Sindura, I don't have an experience from the summit as such, because I've just arrived here. But I want to share a very relevant experience from this morning. While I was coming here, my six-and-a-half-year-old saw AI on my, you know, computer, and he said, where are you going? I said, yeah, I have a meeting to attend. He said, AI? And I was like, oh, he's able to see it. I said, so do you know what this is? He said, yeah, it's artificial intelligence. I said, what else do you know about it? He said, yeah, soon there are going to be robots, robots doing everything for us. And I was like, no, but you would still need me. And I thought, oh my god, that's not a good start to a conversation; everybody is influenced by this. So thank you so much for bringing the human back to this summit. That's what I thought I'd add: a conversation from my household this morning. Thank you.

Sindura Ganapathi

Yeah, no. Charlotte, I hope you're right that there is a lot of hype there; now I'm praying for hype after reading it. How many of you have read the blog I referred to, by Dario Amodei, Anthropic's CEO? Okay. Okay. A few hands. I am not even sure whether I want to urge you to go read it, because it really makes you think. And there were some people who are in the field who said, I am choosing not to read it because I don't want to know. So it's a good thing to hear this, about the human in the loop and the way we responsibly develop, because that's the theme we want to explore, especially in the context of health.

That, I think, is a good segue. Dr. Richard, I want to start with you. The job of a regulator, I said, is hard. I have experienced it firsthand, having now worked very closely with our regulatory system, et cetera, where you have two extreme pressures on a regulator. One: it needs to move fast. Everybody wants there to be less regulation, you want to speed up innovation, every day gets counted, and you are held to that metric. That's one extreme. The other extreme is: boy, if anything goes wrong, who is the first person? Who approved this? Who allowed it to come out? So these are two extreme things, and usually, in a slower cycle, you are able to have some time.

So, how are you thinking about it, in the age of AI but also in general, in reconciling these two extremes of demands put on you?

Richard Rukwata

Yes. Thank you for that insight. I have to think on my feet here, but you're quite right. It's a matter of industry wanting more results from the regulator for their investment, while also wanting the regulator to retain responsibility when things go wrong. I remember watching a very interesting podcast, I think it was called Moonshot, and in this episode they were saying, well, if all the jobs are taken by AI, regulatory jobs will be the last to remain, because people should always have somebody to blame, right? We can't say, oh, you know, nobody is to blame, the AI did it. No, that would never work. So, worst case scenario, I'll be the last person there so that they can hang me when something goes wrong.

At least I have that job security to think about. But really, with respect to what is happening as far as industry's expectations are concerned, we see a lot of potential in AI. We're currently working with a grant from the Gates Foundation on an application for screening applications for marketing authorizations. I think those in our industry, the pharma industry, know that this is the biggest source of angst amongst industrialists: that regulators take too long, and we are seen as an impediment to progress, actually. So we also blame industry; we're saying, well, you know, you submit incomplete applications and then blame it on us. So we're hoping that with technology we'll have applications in the near future that can work for both sides of the fence, right?

Neutral applications that don't necessarily speak to one side, but enable all of us to at least reach a common position very quickly. This is the beautiful thing about computers, right? They don't feel any type of way about you. They don't necessarily like you; they don't dislike you. So we're hoping that as we work more towards the development of these tools, we'll be able to see more traction from industry, so that we become a more efficient part of the supply chain from development to market, and not be seen as the barrier to entry in this field.

Thank you.

Sindura Ganapathi

That's very helpful. There is both a challenge for a regulator, as AI speeds up the cycle of innovation and brings new complexities, and also, in AI itself, a very good tool, whether in summarizing a complex application or in building models that allow a few people to have the same capability as a well-developed pharma company, so everyone is on the same page. So, lots of interesting possibilities here, which in India we're also thinking about along those lines. All three of you share one commonality, which is funding innovation. And as a funder of innovation, you are also, in a not too dissimilar way, trying to balance promoting innovation while upholding safety and minimizing risk, et cetera. So I would like to hear from each of you, because each of you are different kinds of funders, how you are thinking about balancing these two in your funding programs, and scouting innovation and speeding that up. You can go in any order; you can thumb wrestle.

Charlotte Watts

Trevor's pointing to me, but I went first last time, so now I can go. I mean, we fund a range of innovations with the ambition of improving and saving lives.

Increasingly, we are funding innovation

Trevor Mundel

You know, on the acceleration front, we look at it in that every month we don't have the next-generation malaria vaccine, you know, and certainly every year, we're seeing hundreds of thousands of deaths in young children. Every year we don't have the enhanced personal coaching in education, we see a generation that is losing opportunities. So we feel a tremendous pressure, I know, from the funder side, in terms of how do we speed the availability, the access, to a tool which looks like it might be a solution to some of those vexing problems. But I think that it really behoves us here to think about how completely focusing on fast might be slow.

And we have to have this moment of reflection, because what could derail the good application of AI? You know, you think about it in the health area, which is so sensitive: the relatively few errors that could occur, like on the regulatory front, the, you know, unfortunate outcome for a patient, which can be attributed to a system which was probably misused by, you know, the people who were using it, maybe, but nevertheless will be attributed to AI. And that leads to a tremendous deceleration and things not moving ahead. We take the lesson of the self-driving vehicles, you know, where they may be incredibly good drivers, better than the average human at driving, but one fatal accident puts that whole enterprise at risk.

So I think from the funder’s perspective, we need to have a situation where maybe taking a little bit of a reflective and a slower approach might be fast.

Sindura Ganapathi

Monika?

Monika Sharma

So I represent the Novartis India Foundation, and we support both health and people and planet. So at this point, sitting with global funders like yourselves, I think it sends a strong message how important AI is at the moment with respect to health. So while we as funders are trying to address different parts of the ecosystem addressing health, having AI bring evidence to this really matters. So I really feel that having a joint approach towards it is strengthening the whole ecosystem of AI.

Sindura Ganapathi

The QR code lets you look at the call, and all the details, I believe, are there; I have not tried it myself. I would very quickly like to hear from any of you, or all of you: what are you hoping for from this? And after this, you know, I find panels very boring, by the way, both as a person sitting there and as a person up here trying to give gyan in two minutes. So I would love to make it more interactive. Get your questions ready, there is still time, and right after this I hope to see you interacting, sharing your thoughts and questions.

I'll be coming to you. So, who wants to share their hope for this call?

Charlotte Watts

Yeah, so we're really excited about this announcement today. You'll see it's the big health research and innovation foundations coming together to jointly support a major initiative. Essentially, what we want to do here is ask: how do we generate real-world evidence on what it really means, and whether we are really seeing real-world health impacts, once we start to integrate AI into different health systems? We have lots of exciting opportunities showing the efficacy of particular applications, but what this call really wants to support is rigorous evaluation of AI systems where they are integrated into clinical decision-making. Our focus is on low- and middle-income countries, and we are interested in asking a range of questions.

What does it mean for the health system? Are these new initiatives actually operable? Can they be integrated into what is often quite a big health-system bureaucracy, and what are the costs associated with that? Are these interventions actually cost-effective? In the end, ministries of health have to make decisions based on affordability, so how do we learn more about the costs of this transition? And what are the things we didn't expect? If we look at the evidence base, we've got a lot of exciting evidence of interventions that show promise, but only a relative handful of rigorous randomized controlled trials actually assessing interventions when they're implemented.

So there's a massive gap there. And we're also now starting to see, in different contexts, anecdotal evidence of AI being integrated but butting up against the system, so the opportunity isn't being realized; it's proving easier said than done. Basically, this investment is to try and address that evidence gap. And I just want to call out that J-PAL is here, and APHRC, who are key partners on this in supporting the implementation and, for APHRC, the contextualization of the work we hope to be supporting in Africa.

Sindura Ganapathi

Wonderful. Thank you. Anything else, Trevor, you want to add?

Trevor Mundel

Well, I just want to say thanks to our partners for welcoming the Novo Nordisk Foundation onto this initial effort; I hope it's the start of even more in the future, because the global health world has been plagued by a lack of primary data. You know, we and others have funded a lot of modeling and simulation around global health problems, but at the end of the day you cannot transcend the lack of primary data. And AI is too important for that to be the constraint that impedes implementation.

Monika Sharma

I thought maybe it would be good to also add how, as we fund this together, we envision it as a commitment to shared standards. While we work together as part of this call, we are saying that real-world evaluation is not optional; it is the foundation. By aligning, we are defining what good looks like, so that we reduce the burden on countries and developers who would otherwise face a patchwork of expectations. Secondly, by joining hands we are reducing fragmentation in a rapidly evolving field; now that we are coordinated, we get away from the risk of duplication and can insist on the quality we want to see in the applications and products.

And we make sure that the investments we make actually get into the real world, that they create an impact, because of the coordination built into this whole process. I would also say that sitting together adds seriousness to the ecosystem: what we are doing is not a side experiment, it is infrastructure for the long term, something governments have been asking for. And the best part, as a researcher I would say, is that researchers don't have to navigate three different timelines or three different sets of criteria; there is one agreed, aligned set.

No three different deadlines, no three different timelines, so it really makes life easy as a researcher. I hope the call attracts some interesting proposals.

Sindura Ganapathi

So, if we have questions, is there a mic going around? I hope there is; if not, I'll give you mine so I don't have to answer your questions. Please direct your questions to anyone, including Vikalp, if you have them. Let's start with the gentleman at the back, and then you're up next.

Participant

Thank you, folks. Very interesting. My question is around data privacy, and data privacy by design. The lady mentioned three different parameters. Could you elaborate on how data privacy can be incorporated, at least at a policy level?

Sindura Ganapathi

Anyone want to take that question, at least in the context of this call, or in general? How are you handling this?

Vikalp Sahni

So I think health data is quite sensitive, among the more sensitive kinds of data, whether it concerns a country, an individual, or even places such as the police or military. So it's a very valid question. One of the things we as an organization try to follow is the general guidelines provided by the competent authorities, be it HIPAA for healthcare data or the DPDP Act, India's data privacy law. More importantly, the data exchanges, such as the NHA in India, have also created clear guidelines. Following those guidelines, and getting yourself tested against them, is fundamentally important.

And it has become so sensitive that today a lot of our customers ask whether we hold current, applicable certificates from these privacy authorities and privacy frameworks. So that's how we solve for it, and I think it's a good thing; in health it is fundamentally critical. And as the technology grows there are multiple other techniques as well, end-to-end encryption and so on, that we can use to keep things private.

Sindura Ganapathi

So there are two aspects to it: one technological, the other policy. There are other sessions entirely focused on the policy side, so I won't put you on the spot to answer that. But on the technological front, Charlotte or Trevor, could you address some of the approaches: models learning without data being exchanged, synthetic data, the many aspects that have been at the forefront? Charlotte, whatever you want to add.

Charlotte Watts

I just wanted to say, in terms of the evaluations we want to support through this funding, we very much expect them to adhere to high-quality research standards, including anonymity: the kinds of bars, checks, and controls you'd expect in any research study on health, and the ethical guidance and clearance procedures you need to follow. For us that's an essential part of any research we support, and it will be in this initiative, and that includes issues of privacy among other things.

Sindura Ganapathi

Do you want to say anything about emerging technologies that help preserve data privacy without giving up the learning and improvement of the models?

Trevor Mundel

Yeah, Sindura, you know, for us there's no compromise on patient data privacy in clinical trials, as Charlotte has mentioned. But AI does raise a lot of other issues that go almost beyond that. For instance, the various models of federated learning that people have introduced, where data stays locally private but you contribute to the evolution of a model, which improves because it has access to very diverse data sources. Now, has that actually been regulated? We had an example of one of our grantees who produced a very good system for using ultrasound to diagnose certain chest diseases, and it was based on federated contributions from different groups that kept their own data local and private but contributed to the model.

And, you know, that hasn't really been tested, nor have the policies around whether that kind of contribution counts as a disclosure that is acceptable in the age of AI. I think it's something we should encourage, with the right framework.
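The federated-learning pattern Mundel describes, local training with only model parameters crossing site boundaries, can be sketched in a few lines. This is a minimal toy illustration (federated averaging over simulated "hospital" datasets with a linear model), not the grantee's actual system; all names and numbers here are made up for illustration.

```python
# Minimal federated averaging (FedAvg) sketch: each site trains on its own
# private data and shares only model weights, never the raw records.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site's training pass on its private data (linear model, gradient descent)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, site_data):
    """Server sends global weights out, then averages the returned updates.
    Only weight vectors cross site boundaries; data stays local."""
    updates = [local_update(global_w, X, y) for X, y in site_data]
    sizes = np.array([len(y) for _, y in site_data], dtype=float)
    return np.average(updates, axis=0, weights=sizes)  # size-weighted mean

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three "hospitals", each holding its own private dataset
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.01, size=50)))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, sites)
print(w)  # converges toward the shared underlying model
```

The design point is the one raised above: the server never sees patient-level rows, only aggregated parameters, which is exactly what makes the regulatory status of such contributions an open question.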

Sindura Ganapathi

Thank you. Do you have the mic? Okay, then if you have another mic, take it to the gentleman; madam, you're after him.

Participant

My question is for Professor Watts. You mentioned clinical decision support. The context in an Indian healthcare setting, as you're well aware, is that the majority of our health care runs at the front line, so there's also an element of operational decision support. We are working with Google on a set of geospatial AI models for geospatial inferencing in the tuberculosis space, mostly active case finding and diagnostic network optimization. From an evidence perspective, we are obviously doing some retrospective analysis, and we plan to follow it up with a prospective analysis, although it's a single user.

So my question: would this be of interest, and what is your inclination toward operational decision support? I'm a physician myself and a medical informaticist with a PhD, and I can tell you one thing for sure: the patients who come into the system are for the most part taken care of, but it's the silent patients out there, undetected in the community. So what is your inclination in this research grant toward such solutions?

Charlotte Watts

It's a wonderful question, because essentially I come from public health. Our interest, I think our collective interest, is in focusing our evaluations and generating evidence where there is the greatest opportunity to improve health and strengthen systems, and some of that may well be outreach and improved care for the underserved. So we're not going to prejudge what fits and what doesn't, but ultimately we are interested in how an intervention integrates within the system. In the call we mention the importance of looking at interventions at the primary care level, not only at tertiary care. What will resonate with our interest is whether the opportunity is big enough to merit assessment: is this really translating into tangible health impacts, is the return actually affordable, and is it something that could be scaled? So that issue of how it connects with the system is an important part of the question that we're interested in.

Trevor Mundel

Now, I do think it's a very important question, because you're probably all aware of the funding constraints we now face in the global health space for some of the exciting new technologies coming along, whether at the level of the Global Fund or of Gavi, both of which have fallen somewhat short of what we would like in their replenishments. There's simply a reduced amount of funding available for the critical commodities that could be life-changing. And when we get a TB vaccine, which we hope we might have in, say, three years, how are we going to afford to actually get it to the people who need it?

So it's exactly the kind of risk targeting you're talking about that can make all the difference: taking the lesser amount we can afford and putting it where the need is greatest. That matching, which the AI systems and the geospatial targeting you describe can do, is exactly the solution we need to promote and understand.

Sindura Ganapathi

So, the person who has the mic, and then you can hand it on after you ask your question.

Participant

It has been a great session. How do we go about building AI agents that are not only intelligent but also reassuring in very high-anxiety environments like maternal and infant care? I would love to hear your thoughts, because we're building something in that space.

Sindura Ganapathi

When you say high anxiety, just so that I understand?

Participant

High anxiety for maternal and infant care because, even as a new mother myself, I feel there are a lot of open areas where the mother doesn't know what to do. It's an open field, and the support system of pediatricians, gynaecologists, and other mothers is very thin when you go down to tier-two and tier-three cities. How do we go about building for that? I would love to get some thoughts.

Sindura Ganapathi

Take it.

Vikalp Sahni

So I think one of the things we have done while building a lot of these agentic pipelines for doctors and users is to keep a human in the loop while development is happening, which is extremely important, and that's what Trevor also mentioned, because today, where these systems can go and what they can lead to is not something you can fully control. So there are systems specifically designed where anonymous, de-identified conversations are distilled to check whether the agents are working together in tandem. The second thing, which is more technical, is something we have figured out: the models are quite capable, but when you run them as a single agent with a single goal and a single prompt, that at times narrows down the whole worldview.

But if you run multiple agents collaborating, with a grounding agent whose job is to make sure the other agents don't go beyond the set boundaries, I think that is fundamental in healthcare. A single agent with a single prompt is what we should avoid, because these are quite deep workflows, especially in maternal health, where mental health comes into the picture. It's fundamentally important to follow good technical principles and create a multi-agent architecture, but more importantly, to keep a human in the loop, because we as a company haven't found a way to do without one.

That's why we have a strong 10-member medical team, which is still growing, of doctors working alongside the technology.
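The multi-agent pattern Sahni describes, a grounding agent checking a task agent's output against explicit boundaries with a human-escalation path, can be sketched as follows. The agents here are simple keyword stubs standing in for model calls; the rules, names, and replies are purely illustrative, not any product's actual API.

```python
# Sketch of a grounded two-agent pipeline with a human-in-the-loop flag.
from dataclasses import dataclass

# Boundaries the grounding agent enforces (illustrative keyword rules).
FORBIDDEN = ("dosage", "stop taking", "diagnosis:")

@dataclass
class Reply:
    text: str
    escalate_to_human: bool  # True => route to a clinician, not the user

def task_agent(question: str) -> str:
    # Stub standing in for an LLM call that drafts an answer.
    if "fever" in question.lower():
        return "A mild fever is common; monitor temperature and stay hydrated."
    return "Please consult your care team; diagnosis: unknown."

def grounding_agent(draft: str) -> Reply:
    # Second agent: nothing crossing the defined boundaries reaches the user.
    if any(term in draft.lower() for term in FORBIDDEN):
        return Reply("I can't advise on that directly; connecting you to a clinician.", True)
    return Reply(draft, False)

def answer(question: str) -> Reply:
    return grounding_agent(task_agent(question))

safe = answer("My baby has a slight fever, what should I do?")
flagged = answer("What is wrong with me?")
print(safe.escalate_to_human, flagged.escalate_to_human)  # False True
```

The point of splitting the roles is the one made above: the drafting agent optimizes for helpfulness, while the grounding agent has the single narrow goal of enforcing boundaries, and anything it cannot clear goes to a human rather than to the patient.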

Sindura Ganapathi

Thank you. Unfortunately, I've been told we are out of time, but the speakers will be available, so please do come up to them. One very quick thing before we go: what would you like to see next year when we come back to the AI Summit? I just heard it's being hosted in Geneva, so we'll all show up there with all these aspirations. What would it look like to show up and say, this year we did something together? Anything that comes to mind?

Trevor Mundel

You know, I'd love to see the next iteration of Vikalp's patient-facing agent: an agent that could guide you along your health pathway, that would be completely transparent, so I would actually understand why it made its decisions, and that I could trust with 100% confidence, in that anxiety-provoking situation, never to make an error on guidance or drug contraindications. It would always be correct on those things, and I wouldn't have to worry about them. That's the next iteration I'd love to see next year.

Sindura Ganapathi

Next year, maybe.

Charlotte Watts

And what I would like next year is, instead of all of us funders sitting up here, to see some of the partners we're funding, who are doing the work to really understand what this looks like operationally, and to have really honest conversations about what's working and what's not. So we move away from the hype and actually start getting into the nitty-gritty of what this could be and can be.

Richard Rukwata

Okay, so quickly: I would like to see more collaboration between industry and regulators, because ultimately we're on the same side. We want the same thing: better-quality, safe, and effective medicines for all our people. So development in that area would be very exciting.

Sindura Ganapathi

Final word to you.

Monika Sharma

I think I would still love to see that, no matter how much evidence we generate from AI, no matter what we do, we still have that last word from the doctor sitting there. Never forget the human angle while we navigate the AI space; that's what I always want. Thank you so much.

Sindura Ganapathi

Yes, thank you so much. Next time we meet, I hope we all feel as optimistic as we do today, and then some. Thank you so much for attending; thank you, speakers. We have a souvenir for you from the India side for the session. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Vikalp Sahni
2 arguments, 140 words per minute, 1769 words, 753 seconds
Argument 1
End-to-end AI solution addresses fragmentation, medical history collection, and doctor-patient interaction time
EXPLANATION
Sahni argues that their AI solution solves three key healthcare challenges: fragmentation of information and care delivery, difficulty in collecting and organizing patient medical history, and the need for doctors to spend more time with patients rather than with machines writing prescriptions. The solution leverages AI capabilities to streamline the entire healthcare process from appointment booking to prescription generation.
EVIDENCE
Demonstrated through the story of Neeti, a 65-year-old diabetic patient who uses ABHA digital identity to collect medical records, interacts with an AI assistant in local language to describe symptoms (fever and foot wound), gets appointment scheduling assistance, and receives care where the doctor uses AI-powered EMR with voice recording that converts to medical notes and alerts for drug allergies (amoxicillin allergy alert).
MAJOR DISCUSSION POINT
AI Solutions for Healthcare Delivery
Argument 2
Multi-agent architecture with human oversight is essential for complex healthcare workflows like maternal care
EXPLANATION
Sahni emphasizes that single-agent AI systems with single prompts can narrow the worldview and be problematic in healthcare. He advocates for multi-agent architectures where multiple AI agents collaborate, including a grounding agent that ensures other agents stay within appropriate boundaries, combined with human-in-the-loop oversight during development.
EVIDENCE
Technical approach includes anonymous de-identified conversations being analyzed to ensure agents work properly, and maintaining a strong 10-member medical team of doctors working with the technology. Sahni notes they haven’t found a way to eliminate human oversight entirely.
MAJOR DISCUSSION POINT
Human-Centered AI Development
AGREED WITH
Trevor Mundel, Monika Sharma
Trevor Mundel
3 arguments, 169 words per minute, 981 words, 346 seconds
Argument 1
Technology represents only 10% of AI applications; people and ecosystems are the remaining 90%
EXPLANATION
Mundel argues that while people frequently acknowledge that technology is just 10% of AI applications and the rest involves people and ecosystems, they immediately revert to discussing only technology. He emphasizes the need to move beyond lip service and actually focus on defining roles for humans in the loop and building proper ecosystems around AI applications.
EVIDENCE
Observation that this sentiment is heard frequently in AI application spaces and at the current meeting, but people don’t follow through on actually addressing the human and ecosystem components.
MAJOR DISCUSSION POINT
Human-Centered AI Development
AGREED WITH
Vikalp Sahni, Monika Sharma
Argument 2
Taking a reflective, slower approach to AI implementation might ultimately be faster by avoiding setbacks from premature deployment
EXPLANATION
Mundel argues that while there’s pressure to accelerate AI deployment due to urgent health needs (like malaria vaccines), rushing could be counterproductive. He warns that a few AI-related errors in sensitive health applications could derail the entire field, similar to how self-driving vehicle accidents have impacted that industry, even when AI systems may perform better than humans overall.
EVIDENCE
Examples include hundreds of thousands of deaths in young children each year without next-generation malaria vaccines, and the analogy of self-driving vehicles where they may be better drivers than humans but one fatal accident puts the entire enterprise at risk.
MAJOR DISCUSSION POINT
Regulatory Challenges and AI Integration
AGREED WITH
Charlotte Watts
DISAGREED WITH
Charlotte Watts
Argument 3
Primary data collection is crucial as global health has been constrained by lack of real-world evidence beyond modeling
EXPLANATION
Mundel emphasizes that the global health field has been plagued by lack of primary data and has relied too heavily on modeling and simulation. He argues that AI implementation is too important to be constrained by this same limitation, and that generating real-world evidence is essential for proper AI deployment in healthcare.
EVIDENCE
Notes that he and other funders have supported extensive modeling and simulation around global health problems, but acknowledges that modeling cannot transcend the fundamental need for primary data collection.
MAJOR DISCUSSION POINT
Funding Innovation and Evidence Generation
AGREED WITH
Charlotte Watts
Charlotte Watts
4 arguments, 189 words per minute, 1321 words, 417 seconds
Argument 1
Joint funding initiative aims to generate real-world evidence of AI integration in health systems, particularly in low- and middle-income countries
EXPLANATION
Watts describes a collaborative funding initiative between major health research foundations to support rigorous evaluations of AI systems integrated into clinical decision-making. The focus is on understanding real-world health impacts, system integration challenges, cost-effectiveness, and unexpected outcomes when AI is implemented in actual health systems rather than just controlled studies.
EVIDENCE
The initiative addresses a massive evidence gap where there are many promising AI interventions but only a handful of rigorous randomized controlled trials assessing real-world implementation. There’s also anecdotal evidence of AI systems failing when they encounter actual health system bureaucracies.
MAJOR DISCUSSION POINT
Funding Innovation and Evidence Generation
AGREED WITH
Monika Sharma, Richard Rukwata
DISAGREED WITH
Trevor Mundel
Argument 2
Research evaluations must follow high-quality standards including anonymity and ethical clearance procedures
EXPLANATION
Watts emphasizes that any research supported through their funding initiative must adhere to high-quality research standards, including proper anonymity protections, ethical guidance, and clearance procedures. These standards are fundamental requirements for any health research they support, including AI-related studies.
EVIDENCE
References the standard bars, checks, and controls expected in any health research study, and the ethical guidance and clearance procedures that researchers must follow.
MAJOR DISCUSSION POINT
Data Privacy and Safety Considerations
Argument 3
Moving beyond hype to meaningful conversations about practical AI implementation and global collaboration
EXPLANATION
Watts expresses relief that the conference is facilitating substantive discussions about AI implementation that move beyond both excessive hype and excessive fear. She emphasizes the importance of having these conversations as a global community since AI challenges cannot be solved by individual countries alone.
EVIDENCE
Her observation from attending conference sessions where participants are getting into the ‘nitty-gritty’ of how to navigate AI implementation properly, rather than staying at the surface level of either promotion or fear.
MAJOR DISCUSSION POINT
Human-Centered AI Development
AGREED WITH
Trevor Mundel
Argument 4
Focus on primary care level interventions, not just tertiary care applications
EXPLANATION
Watts indicates that their funding call specifically mentions the importance of evaluating AI interventions at the primary care level rather than only focusing on tertiary care applications. This approach aligns with reaching underserved populations and strengthening health systems where the greatest health impact opportunities exist.
EVIDENCE
Specific mention in their funding call about primary care focus, and emphasis on interventions that could improve health for underserved populations and strengthen health systems.
MAJOR DISCUSSION POINT
Operational and Clinical Decision Support
DISAGREED WITH
Participant
Richard Rukwata
2 arguments, 143 words per minute, 445 words, 186 seconds
Argument 1
Regulators face dual pressure to accelerate innovation while maintaining responsibility for safety outcomes
EXPLANATION
Rukwata describes the challenging position of regulators who face pressure from industry to move faster and approve innovations quickly, while simultaneously being held responsible when anything goes wrong. He notes the irony that regulatory jobs may be the last to be replaced by AI because people always need someone to blame when problems occur.
EVIDENCE
References a podcast called ‘Moonshot’ that suggested regulatory jobs would be the last to remain because people need someone to blame rather than accepting ‘AI did it’ as an explanation. Also mentions industry complaints about regulators taking too long while regulators blame industry for submitting incomplete applications.
MAJOR DISCUSSION POINT
Regulatory Challenges and AI Integration
Argument 2
AI tools can serve as neutral applications helping both regulators and industry reach common positions more efficiently
EXPLANATION
Rukwata explains that they are working on AI applications for screening marketing authorization applications, hoping that neutral AI tools can help resolve the tension between regulators and industry. He emphasizes that computers don’t have biases or feelings, which could help both sides reach common ground more quickly.
EVIDENCE
Currently working with a grant from the Gates Foundation on an AI application for screening marketing authorization applications. Notes that this addresses the biggest source of conflict between regulators and industry regarding approval timelines.
MAJOR DISCUSSION POINT
Regulatory Challenges and AI Integration
AGREED WITH
Charlotte Watts, Monika Sharma
Monika Sharma
2 arguments, 166 words per minute, 629 words, 227 seconds
Argument 1
Coordinated funding approach reduces fragmentation and creates shared standards for AI evaluation
EXPLANATION
Sharma argues that by aligning multiple funding organizations together, they can establish shared standards for AI evaluation, reduce the burden on countries and developers who would otherwise face inconsistent expectations, and ensure that real-world evaluation becomes a foundation rather than an option. This coordination also reduces duplication and improves investment impact.
EVIDENCE
Describes how coordinated funding eliminates the need for researchers to navigate three different timelines, criteria, and deadlines, making life easier for researchers and ensuring investments create real-world impact.
MAJOR DISCUSSION POINT
Funding Innovation and Evidence Generation
AGREED WITH
Charlotte Watts, Richard Rukwata
Argument 2
Human involvement remains essential regardless of AI advancement levels
EXPLANATION
Sharma emphasizes that no matter how much evidence is generated from AI or how advanced the technology becomes, the final decision should always rest with human doctors. She stresses the importance of never forgetting the human element while navigating the AI space.
EVIDENCE
Personal anecdote about her 6.5-year-old child’s understanding of AI and robots doing everything, which prompted her reflection on the continued need for human involvement.
MAJOR DISCUSSION POINT
Human-Centered AI Development
AGREED WITH
Vikalp Sahni, Trevor Mundel
Sindura Ganapathi
1 argument, 86 words per minute, 1852 words, 1280 seconds
Argument 1
Conference energy demonstrates human connections remain vital in AI development discussions
EXPLANATION
Ganapathi draws an analogy between the conference atmosphere and a vegetable market from her childhood, emphasizing the positive human energy of people talking, hustling, and engaging with each other about their products and ideas. She sees this human interaction as particularly valuable in the context of AI development discussions.
EVIDENCE
Describes the conference as having ‘chaos in terms of interactions people talking to each other hustle just hustle and people excited about the product they are building’ and compares it to a vegetable market where ‘people are trying to sell something people are trying to buy something people are talking.’
MAJOR DISCUSSION POINT
Human-Centered AI Development
Participant
1 argument, 162 words per minute, 385 words, 142 seconds
Argument 1
Interest extends beyond clinical decision support to operational support, including geospatial AI for tuberculosis case finding
EXPLANATION
A participant argues that in healthcare settings like India where most care happens at the frontline, there’s significant value in operational decision support beyond just clinical decision support. They describe working on geospatial AI models for tuberculosis active case finding and diagnostic network optimization, emphasizing that while patients who enter the system are generally cared for, many silent patients remain undetected in communities.
EVIDENCE
Specific example of working with Google on geospatial AI models for tuberculosis space, including active case finding and diagnostic network optimization, with plans for both retrospective and prospective analysis.
MAJOR DISCUSSION POINT
Operational and Clinical Decision Support
DISAGREED WITH
Charlotte Watts
Agreements
Agreement Points
Human oversight and involvement is essential in AI healthcare systems
Speakers: Vikalp Sahni, Trevor Mundel, Monika Sharma
Multi-agent architecture with human oversight is essential for complex healthcare workflows like maternal care
Technology represents only 10% of AI applications; people and ecosystems are the remaining 90%
Human involvement remains essential regardless of AI advancement levels
All three speakers emphasized that despite AI advancement, human involvement remains crucial – Sahni advocates for human-in-the-loop during development with medical teams, Mundel stresses that people and ecosystems are 90% of AI applications, and Sharma insists that final decisions should always rest with human doctors
Real-world evidence and rigorous evaluation are critical for AI healthcare implementation
Speakers: Charlotte Watts, Trevor Mundel
Joint funding initiative aims to generate real-world evidence of AI integration in health systems, particularly in low- and middle-income countries
Primary data collection is crucial as global health has been constrained by lack of real-world evidence beyond modeling
Both speakers agree on the urgent need for real-world evidence rather than just theoretical models – Watts describes a collaborative funding initiative to evaluate AI systems in actual health systems, while Mundel emphasizes that global health has been plagued by lack of primary data and over-reliance on modeling
Coordinated approach reduces fragmentation and improves outcomes
Speakers: Charlotte Watts, Monika Sharma, Richard Rukwata
Joint funding initiative aims to generate real-world evidence of AI integration in health systems, particularly in low- and middle-income countries
Coordinated funding approach reduces fragmentation and creates shared standards for AI evaluation
AI tools can serve as neutral applications helping both regulators and industry reach common positions more efficiently
All three speakers advocate for coordination to reduce fragmentation – Watts through joint funding initiatives, Sharma through aligned funding standards, and Rukwata through neutral AI tools that help regulators and industry reach common ground
Cautious implementation approach is necessary to avoid setbacks
Speakers: Trevor Mundel, Charlotte Watts
Taking a reflective, slower approach to AI implementation might ultimately be faster by avoiding setbacks from premature deployment
Moving beyond hype to meaningful conversations about practical AI implementation and global collaboration
Both speakers emphasize moving beyond hype toward careful, thoughtful implementation – Mundel warns that rushing could cause setbacks similar to self-driving vehicles, while Watts appreciates the conference’s move toward substantive discussions rather than excessive hype or fear
Similar Viewpoints
Both speakers see AI as a solution to reduce inefficiencies and improve collaboration in healthcare systems – Sahni through comprehensive patient care solutions and Rukwata through neutral regulatory tools
Speakers: Vikalp Sahni, Richard Rukwata
End-to-end AI solution addresses fragmentation, medical history collection, and doctor-patient interaction time
AI tools can serve as neutral applications helping both regulators and industry reach common positions more efficiently
Both emphasize the importance of AI applications at the primary care and community level rather than just high-level tertiary care, focusing on reaching underserved populations and addressing public health challenges
Speakers: Charlotte Watts, Participant
Focus on primary care level interventions, not just tertiary care applications
Interest extends beyond clinical decision support to operational support, including geospatial AI for tuberculosis case finding
Unexpected Consensus
Regulatory jobs as the last to be replaced by AI
Speakers: Richard Rukwata
Regulators face dual pressure to accelerate innovation while maintaining responsibility for safety outcomes
Rukwata’s humorous but insightful observation that regulatory jobs may be the last to be replaced because people always need someone to blame represents an unexpected consensus on the fundamental human need for accountability in AI systems, even from a regulator’s perspective
Personal experiences driving professional perspectives
Speakers: Sindura Ganapathi, Monika Sharma
Conference energy demonstrates human connections remain vital in AI development discussions
Human involvement remains essential regardless of AI advancement levels
Both speakers drew from personal experiences (vegetable market analogy and conversation with child about AI) to emphasize human elements in AI development, showing unexpected consensus that personal, human perspectives are valuable in technical discussions
Overall Assessment

The speakers demonstrated strong consensus on the need for human-centered AI development, real-world evidence generation, coordinated approaches to reduce fragmentation, and cautious implementation strategies. There was particular alignment on the importance of human oversight, primary care focus, and moving beyond hype toward practical implementation.

High level of consensus with complementary perspectives rather than conflicting views. The implications suggest a mature, responsible approach to AI in healthcare that prioritizes safety, evidence, and human involvement while recognizing the transformative potential of the technology. This consensus could facilitate collaborative efforts in AI healthcare development and regulation.

Differences
Different Viewpoints
Speed vs. caution in AI implementation
Speakers: Trevor Mundel, Charlotte Watts
Taking a reflective, slower approach to AI implementation might ultimately be faster by avoiding setbacks from premature deployment
Joint funding initiative aims to generate real-world evidence of AI integration in health systems, particularly in low- and middle-income countries
While both speakers acknowledge the need for evidence-based AI implementation, Mundel explicitly advocates for a slower, more reflective approach to avoid setbacks, whereas Watts focuses on accelerating evidence generation through coordinated funding initiatives. Mundel warns that rushing could be counterproductive, while Watts emphasizes the urgency of generating real-world evidence.
Primary focus of AI evaluation scope
Speakers: Charlotte Watts, Participant
Focus on primary care level interventions, not just tertiary care applications
Interest extends beyond clinical decision support to operational support, including geospatial AI for tuberculosis case finding
Watts emphasizes evaluating AI interventions at the primary care level rather than tertiary care, while the participant argues for broader operational decision support including geospatial AI for community-level case finding. The participant specifically highlights that silent patients in communities need attention beyond those who enter the health system.
Unexpected Differences
Role of AI in reducing vs. maintaining human oversight
Speakers: Richard Rukwata, Monika Sharma
AI tools can serve as neutral applications helping both regulators and industry reach common positions more efficiently
Human involvement remains essential regardless of AI advancement levels
Rukwata sees AI as potentially reducing friction and making processes more efficient by being neutral and unbiased, while Sharma insists that human doctors must always have the final word regardless of AI advancement. This represents a fundamental disagreement about whether AI should reduce human involvement (Rukwata) or maintain it as essential (Sharma).
Overall Assessment

The main areas of disagreement center around the pace of AI implementation (speed vs. caution), the scope of AI evaluation (clinical vs. operational focus), and the appropriate level of human oversight in AI systems. While speakers generally agree on the importance of evidence-based approaches and human involvement, they differ significantly on implementation strategies and priorities.

Moderate disagreement level with significant implications for AI governance and implementation strategies. The disagreements reflect fundamental tensions in the field between innovation acceleration and risk mitigation, between different application domains, and between varying philosophies about human-AI interaction. These disagreements could impact funding priorities, regulatory approaches, and the development of AI standards in healthcare.

Partial Agreements
All speakers agree on the critical importance of human involvement in AI systems, but they disagree on implementation approaches. Mundel emphasizes defining human roles in ecosystems, Sahni focuses on technical multi-agent architectures with medical team oversight, while Watts emphasizes research standards and ethical procedures.
Speakers: Trevor Mundel, Vikalp Sahni, Charlotte Watts
Technology represents only 10% of AI applications; people and ecosystems are the remaining 90%
Multi-agent architecture with human oversight is essential for complex healthcare workflows like maternal care
Research evaluations must follow high-quality standards including anonymity and ethical clearance procedures
Both speakers acknowledge the tension between speed and safety in AI implementation, but propose different solutions. Rukwata sees AI as a neutral tool to resolve regulator-industry conflicts and speed up processes, while Mundel advocates for deliberately slowing down to avoid errors that could derail progress.
Speakers: Richard Rukwata, Trevor Mundel
AI tools can serve as neutral applications helping both regulators and industry reach common positions more efficiently
Taking a reflective, slower approach to AI implementation might ultimately be faster by avoiding setbacks from premature deployment
Takeaways
Key takeaways
AI in healthcare requires a multi-agent architecture with human oversight rather than single-agent systems, especially for complex workflows like maternal care
Technology represents only 10% of AI applications – the remaining 90% involves people and ecosystems, requiring focus beyond just technological advancement
Regulators can use AI as neutral tools to bridge gaps between industry and regulatory bodies, potentially reducing friction in approval processes
A coordinated funding approach among major health foundations can reduce fragmentation, create shared standards, and eliminate duplicate evaluation criteria for researchers
Real-world evidence generation is critical – there’s a significant gap between promising AI efficacy studies and rigorous evaluations of integrated AI systems in actual healthcare settings
Taking a slower, more reflective approach to AI implementation may ultimately be faster by avoiding setbacks from premature deployment and safety issues
Data privacy in healthcare AI must adhere to established frameworks like HIPAA and local data protection acts, with continuous certification requirements
Human involvement remains essential regardless of AI advancement levels, particularly in final decision-making roles
Resolutions and action items
Joint funding initiative launched by major health foundations (Wellcome Trust, Gates Foundation, Novo Nordisk Foundation) to support rigorous evaluations of AI integration in health systems
Focus funding specifically on low- and middle-income countries to generate real-world evidence of AI health impacts
Establish shared standards and aligned criteria for AI evaluation to reduce burden on countries and developers
Support evaluations at primary care level, not just tertiary care applications
Implement federated learning approaches that allow local data privacy while contributing to model improvement
Next year’s AI Summit in Geneva should feature funded partners presenting operational results rather than just funders discussing plans
Unresolved issues
How to effectively balance speed of innovation with safety requirements in AI healthcare applications
Specific technological solutions for preserving data privacy while enabling AI learning and improvement
Regulatory frameworks for new AI approaches like federated learning that haven’t been fully tested in policy contexts
How to build reassuring AI agents for high-anxiety healthcare environments like maternal and infant care
Addressing the ‘silent patients’ in communities who remain undetected and underserved by current healthcare systems
How to afford and deploy new healthcare technologies given reduced global health funding constraints
Ensuring AI systems remain transparent and explainable in their decision-making processes
Suggested compromises
Accepting that a slower, more reflective approach to AI implementation might be necessary to ensure long-term success and avoid setbacks
Using AI as neutral tools that serve both regulators and industry rather than favoring one side
Implementing multi-agent AI architectures with human oversight as a middle ground between fully automated and fully manual healthcare processes
Coordinated funding approach that balances innovation acceleration with rigorous safety evaluation requirements
Federated learning models that allow data contribution to AI improvement while maintaining local data privacy and control
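The federated learning compromise mentioned above can be illustrated with a minimal sketch of federated averaging (FedAvg), the standard pattern behind "keep data local, share only model updates": each site trains on its own records, only model weights travel to a coordinator, and the coordinator averages them weighted by dataset size. The clinics, data, and numbers below are purely illustrative assumptions, not taken from the session.

```python
# Minimal federated averaging sketch (illustrative only).
# Each site runs gradient steps on a simple least-squares model y = w * x
# using data that never leaves the site; only the weight w is shared.

def local_update(w, data, lr=0.1):
    """One gradient-descent step computed entirely on the site's own data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(site_weights, site_sizes):
    """Coordinator combines site models, weighted by local dataset size."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two hypothetical clinics, each holding its own records generated by y = 2x.
clinic_a = [(1.0, 2.0), (2.0, 4.0)]
clinic_b = [(3.0, 6.0)]

w = 0.0  # shared starting model
for _ in range(200):  # each round: local training at both sites, then averaging
    w_a = local_update(w, clinic_a)
    w_b = local_update(w, clinic_b)
    w = federated_average([w_a, w_b], [len(clinic_a), len(clinic_b)])

print(round(w, 2))  # converges toward the true slope 2.0
```

The regulatory question raised in the session still applies to this pattern: even though raw records stay local, shared weight updates can leak information, which is why the follow-up question about how to regulate such approaches remains open.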
Thought Provoking Comments
Technology is just 10% of the exercise in applications of AI. And the rest is really around people and ecosystems. And as soon as people say that, they then go back to talk about technology. So, I am interested in how we do more than just pay lip service to this notion that we really need to think about the ecosystem and the people involved, probably more than the technology itself.
This comment cuts through the typical AI hype by highlighting a fundamental contradiction in how AI discussions are conducted. It’s insightful because it calls out the gap between what people claim to prioritize (human-centered approaches) versus what they actually focus on (technology), forcing participants to confront this inconsistency.
This observation reframed the entire discussion from being technology-centric to human-centric. It influenced subsequent speakers to emphasize human oversight, regulatory frameworks, and real-world implementation challenges rather than just technical capabilities. The comment served as a philosophical anchor that kept bringing the conversation back to practical, human-centered considerations.
Speaker: Trevor Mundel
Completely focusing on fast might be slow… what could derail the good application of AI? You know, you think about it in the health area, which is so sensitive, the few errors, like on the regulatory front, relatively few errors that could occur… leads to a tremendous deceleration and things not moving ahead. We take the lesson of the self-driving vehicles… one fatal accident puts that whole enterprise at risk.
This paradoxical insight challenges the conventional wisdom that speed is always beneficial in innovation. By drawing parallels to self-driving cars, it introduces a sophisticated understanding of how public perception and trust can make or break technological adoption, especially in sensitive areas like healthcare.
This comment fundamentally shifted the discussion from ‘how fast can we deploy AI?’ to ‘how can we deploy AI responsibly?’ It influenced other speakers to emphasize evaluation, evidence generation, and careful implementation. The regulatory perspective from Dr. Rukwata and the funding approach discussion all built upon this foundation of cautious optimism.
Speaker: Trevor Mundel
I would like to see some of the partners that we’re funding who are doing work to really understand what this looks like operationally and to have really honest conversations about what’s working and what’s not working. And so we’re moving away from the hype to really actually starting to get into the nitty-gritty of what this could be and can be.
This comment is particularly insightful because it acknowledges the current state of AI discourse as being dominated by ‘hype’ and calls for radical honesty about both successes and failures. It represents a mature approach to innovation that values learning from setbacks as much as celebrating wins.
This comment validated the earlier concerns about moving beyond superficial discussions and established a framework for future conversations. It influenced the audience question about operational decision support and reinforced the theme that real-world evidence and honest evaluation are crucial for meaningful progress.
Speaker: Charlotte Watts
The patients who come into the system, they’re for the most part taken care of but those are all the silent patients who are out there undetected in the community.
This observation is profound because it shifts focus from improving existing healthcare delivery to addressing the invisible healthcare gap. It highlights how AI’s greatest potential might lie not in optimizing current systems but in reaching underserved populations who never access healthcare at all.
This comment expanded the scope of the discussion beyond clinical decision support to population health and equity. It prompted responses from both Charlotte and Trevor about targeting resources more effectively and using AI for outreach and risk identification, fundamentally broadening the conversation’s scope.
Speaker: Participant (physician and medical informaticist)
I would still love to see that no matter how much evidence we generate from AI, no matter what we do, we still have that last word from the doctor who is sitting there and never forget the human angle while we navigate the AI space.
This closing comment encapsulates the central tension of the entire discussion – the balance between technological capability and human judgment. It’s insightful because it doesn’t reject AI but insists on preserving human agency and responsibility, which is crucial for maintaining trust and accountability in healthcare.
As the final substantive comment, it served as a philosophical bookend to Trevor’s earlier observation about human-centered approaches. It reinforced the discussion’s evolution from technology-focused to human-centered thinking and left participants with a clear principle to guide future AI development in healthcare.
Speaker: Monika Sharma
Overall Assessment

These key comments collectively transformed what could have been a typical AI showcase into a nuanced discussion about responsible innovation. The conversation evolved from demonstrating technological capabilities to examining the complex ecosystem needed for successful AI implementation in healthcare. Trevor’s early observations about the technology-versus-people paradox set the tone for deeper reflection, while Charlotte’s call for honest evaluation and Monika’s emphasis on preserving human judgment provided practical frameworks for moving forward. The participant’s insight about ‘silent patients’ expanded the scope beyond clinical optimization to population health equity. Together, these comments created a mature dialogue that acknowledged both AI’s potential and the critical importance of human-centered, evidence-based approaches to healthcare innovation.

Follow-up Questions
How to build AI systems at scale for multiple languages and generate verifiable data for large-scale models?
This addresses the technical challenges of scaling AI healthcare solutions across diverse linguistic populations while ensuring model reliability and accuracy.
Speaker: Vikalp Sahni
Who is evaluating the AI capabilities being built in healthcare?
This highlights the need for standardized evaluation frameworks and oversight mechanisms for AI healthcare applications.
Speaker: Vikalp Sahni
How do we define the actual role for humans in the loop beyond just paying lip service to this concept?
This addresses the critical need to move beyond theoretical discussions about human involvement to practical implementation of human oversight in AI systems.
Speaker: Trevor Mundel
How can federated learning models that keep data local but contribute to model improvement be properly regulated?
This explores the regulatory gaps around new AI training methodologies that could preserve privacy while enabling model advancement.
Speaker: Trevor Mundel
What are the real-world health impacts when AI systems are integrated into different health systems?
This addresses the evidence gap between AI efficacy studies and actual implementation outcomes in healthcare settings.
Speaker: Charlotte Watts
What are the costs and cost-effectiveness of integrating AI into health systems, particularly for ministry of health decision-making?
This is crucial for understanding the economic viability and scalability of AI healthcare solutions for government health programs.
Speaker: Charlotte Watts
What unexpected challenges arise when AI systems are integrated into existing health bureaucracies?
This seeks to identify implementation barriers and system integration issues that may not be apparent in controlled studies.
Speaker: Charlotte Watts
How can data privacy be incorporated at a policy level, particularly regarding privacy by design principles?
This addresses the need for comprehensive policy frameworks that protect sensitive health data while enabling AI innovation.
Speaker: Participant
How do we build AI agents that are not only intelligent but reassuring in high-anxiety environments like maternal and infant care?
This explores the human-centered design challenges of creating AI systems that can provide emotional support and reassurance in sensitive healthcare contexts.
Speaker: Participant
What does operational decision support look like for AI systems focused on community health and active case finding?
This examines how AI can support public health operations beyond clinical decision-making, particularly for underserved populations.
Speaker: Participant
How can we develop more collaboration between industry and regulators in AI healthcare development?
This addresses the need for improved partnerships to streamline AI healthcare innovation while maintaining safety standards.
Speaker: Richard Rukwata

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on the intersection of Artificial Intelligence (AI) and Digital Public Infrastructure (DPI), examining how governments and organizations can leverage these technologies for inclusive development while maintaining appropriate safeguards. The panel, moderated by C.V. Madhukar from CoDevelop, included representatives from government, international development organizations, and the private sector, discussing India’s leadership in DPI and its potential application to AI implementation.


Dr. Hans Wijayasuriya emphasized that governments must prioritize inclusion, integrity, and safeguards when implementing AI systems, noting that DPI foundations must be mature before AI can be effectively layered on top. He stressed that AI should not redefine DPI but rather serve as scaffolding to accelerate service delivery while maintaining sovereignty through technological neutrality. Robert Opp from UNDP highlighted the importance of embedding safeguards from the beginning of any AI-DPI implementation, emphasizing that efficiency alone should not be the driving metric if it leads to exclusion of vulnerable populations.


Sangbu Kim from the World Bank discussed how DPI can prevent siloed approaches in the AI era, noting that the evolution from supplier-centric to user-centric computing makes DPI more relevant than ever. Saibal Chakraborty from Boston Consulting Group drew parallels between India’s DPI journey and its emerging AI strategy, explaining how AI is being treated as shared public infrastructure similar to how DPI was developed, with initiatives like providing affordable GPU access to startups.


The panelists agreed that the next three to five years will be critical for establishing the right balance between innovation and safeguards, with particular emphasis on making AI accessible to underserved populations through voice-first capabilities and multilingual interfaces.


Keypoints

Major Discussion Points:

Government Implementation of AI with DPI: Focus on three key pillars – inclusion (ensuring AI doesn’t increase divides and supports voice-first, multilingual capabilities), integrity (building AI on mature DPI foundations with clean data and reliable APIs), and sovereignty (maintaining national control over technology while enabling innovation)


Safeguards and Responsible AI Development: Emphasis on implementing safeguards frameworks from the beginning of AI projects, including bias detection, explainability, human-in-the-loop systems, and ensuring that efficiency doesn’t override inclusion as the primary metric


India’s AI Infrastructure Model: Discussion of India’s approach to treating AI as shared public infrastructure, similar to their DPI success, including providing affordable GPU access (under $1/hour), government data access, and targeted funding for socially sensitive sectors beyond fintech


Scaling AI for Development: Exploration of how AI can accelerate sustainable development goals through population-scale solutions, with initiatives like UNDP’s “100 Pathways” project to identify scalable AI use cases and the World Bank’s shift toward demand-driven, user-centric approaches


Future Policy and Innovation Ecosystem: Discussion of the delicate balance governments must strike between data sovereignty and enabling innovation, the need for accountable institutions at both national and state levels, and creating controlled access to government data for AI training


Overall Purpose:

The discussion aimed to explore how AI can be integrated with Digital Public Infrastructure (DPI) to accelerate development outcomes, particularly focusing on lessons from India’s DPI success and how they can be applied to responsible AI implementation globally.


Overall Tone:

The tone was consistently optimistic and forward-looking throughout the conversation. Speakers expressed excitement about AI’s potential while maintaining a pragmatic focus on safeguards and responsible implementation. There was notable pride in India’s DPI achievements and confidence that similar success could be replicated in the AI space. The discussion maintained a collaborative, solution-oriented atmosphere with speakers building on each other’s points rather than presenting conflicting viewpoints.


Speakers

Speaker 1: Role/Title not specified, appears to be an event organizer or host


Saibal Chakraborty: Managing Director and Senior Partner, Boston Consulting Group


Sangbu Kim: Vice President for Digital, World Bank


C.V. Madhukar: Chief Executive Officer of CoDevelop, serving as the moderator for this session


Robert Opp: Representative from UNDP (United Nations Development Programme), working on safeguards, Global Digital Compact, and sustainable development goals


Dr. Hans Wijayasuriya: Government representative (appears to be from Sri Lanka based on context), dealing with national government policy on AI and DPI


Additional speakers:


Arjun: Mentioned as having introduced the panelists, likely another event organizer or host


Full session report: Comprehensive analysis and detailed insights

This panel discussion at the India AI Impact Summit examined the intersection of Artificial Intelligence (AI) and Digital Public Infrastructure (DPI), bringing together perspectives from government, international development organisations, multilateral banks, and the private sector. The conversation, moderated by C.V. Madhukar from CoDevelop, explored how nations can harness AI’s transformative potential while maintaining robust safeguards and ensuring inclusive development outcomes.


Madhukar opened by highlighting India’s unique positioning in the global AI landscape: “India is saying, look, there will be a Chinese model, there will be an American model, there will be a whole bunch of other innovations that are going on. What do we do to embrace and use all of this for our benefit?”


Government Implementation Framework: The Three Pillars Approach

Dr. Hans Wijayasuriya, representing the Sri Lankan government perspective, articulated a framework for government AI implementation centered on three fundamental pillars: inclusion, integrity, and sovereignty. His approach to inclusion emphasized that government-introduced technologies must actively reduce rather than exacerbate existing divides, highlighting AI’s capabilities in enabling voice-first interactions and real-time translation to expand access for previously excluded populations.


On integrity, Dr. Wijayasuriya presented a crucial insight: “AI will not redefine DPI. AI would, or at least where we look at it from now, maybe I’ll be wrong in six months from today, but the DPI foundations must be in place first.” This positions AI as scaffolding rather than replacement for existing infrastructure, requiring mature data architectures, clean registers, and reliable APIs before effective AI deployment.


His discussion of sovereignty defined it not as technological isolation but as maintaining national capability and control over critical building blocks, including vendor neutrality and cloud independence. He noted the particular challenges for smaller nations: “being small, you’re on the wrong side of the AI divide unless you’re economically in a very powerful position,” citing Singapore as “an outlier, a small country with a lot of economic power.”


Dr. Wijayasuriya emphasized AI’s scaling properties, noting that “bias, opacity, at scale would mean harm at scale as well. So everything AI scales.” He highlighted AI’s potential for API orchestration and enabling “unconstrained scenarios” where “AI can do a billion scenarios.”


International Development Perspective: Safeguards and Implementation

Robert Opp from UNDP emphasized that the population-scale reach of DPI amplifies both opportunities and risks. His key insight was that “if efficiency is your only metric, then you will probably rush ahead and leave people out,” reframing how success should be measured in AI and DPI implementations.


UNDP has developed safeguards work supported by CoDevelop and the Gates Foundation, with frameworks now being implemented at national levels. Opp stressed the importance of early integration of safeguards rather than retrofitting them after deployment, and outlined requirements for inclusive AI systems including multilingual platforms, multimodal interfaces, and bias detection mechanisms.


He announced UNDP’s “100 Pathways” initiative, developed in partnership with Xstep and other organizations, taking a use-case driven approach to discovering and scaling responsible AI applications across different development contexts. Opp also noted UNDP’s internal organizational focus: “there is an internal to the organization level, which is how do we ensure that UNDP itself has capabilities for leveraging AI to maximum effect.”


Multilateral Banking Evolution: From Supply to Demand

Sangbu Kim, Vice President for Digital at the World Bank, highlighted a fundamental shift in development challenges. Despite over 90% of sub-Saharan Africa having 3G+ mobile coverage, the challenge has moved from infrastructure supply to demand creation and value generation.


Kim described an evolution in computing paradigms “from the very supplier-centric approach through the user-centric approach,” positioning DPI as well-suited for the AI era due to its user-centric design principles. The World Bank’s approach now emphasizes creating demand through government programs and developing specific use cases rather than focusing primarily on infrastructure deployment.


His emphasis on “small AI”—referring to practical, life-changing applications rather than model size—reflects a pragmatic approach prioritizing tangible improvements in citizens’ daily lives.


Private Sector Innovation and Public Infrastructure

Saibal Chakraborty from Boston Consulting Group highlighted how India’s DPI success is being replicated in AI. He noted that India is “a country of 120 unicorns, and every unicorn, some way or the other, leverages the DPIs,” demonstrating the transformative power of shared public infrastructure.


However, Chakraborty identified a critical market failure: while India has abundant venture capital funding, 90% flows into fintech and e-commerce, leaving climate, education, and MSME sectors underfunded. He described this as requiring “a tricky balancing act” where valuable government data needs controlled exposure to enable innovation without compromising security or sovereignty.


The India AI Mission’s approach treats AI as shared public infrastructure, with Chakraborty noting that “more than 38,000 GPUs are now available at, you know, less than rupees 60 per hour,” dramatically lowering barriers to AI development. He mentioned that “central government and the states are thinking of building fund of funds, which actually then, you know, encourage VCs to co-invest, focusing on those socially sensitive sectors.”


Key Challenges and Future Directions

The discussion revealed consensus on several critical priorities for AI and DPI integration. Voice-first capabilities emerged as particularly significant for enabling digital inclusion for populations previously left behind by text-based interfaces.


The challenge of data governance featured prominently, with speakers acknowledging the need for sophisticated frameworks that enable controlled access to valuable government datasets while maintaining appropriate protections. Dr. Wijayasuriya emphasized that institutional capacity represents a foundational requirement for effective implementation.


Looking ahead, the panelists identified the need for sophisticated institutional frameworks capable of balancing innovation with safeguards, comprehensive data governance, and mechanisms for ongoing monitoring and adjustment of AI systems. The discussion presented an optimistic yet realistic vision of AI’s potential to accelerate development outcomes while acknowledging the significant governance, equity, and sovereignty challenges that must be addressed to realize this potential responsibly.


Session transcriptComplete transcript of the session
Speaker 1

economy. Saibal Chakraborty, Managing Director and Senior Partner, Boston Consulting Group. The moderator, C.V. Madhukar, Chief Executive Officer of CoDevelop. And Mr. Sangbu Kim, Vice President for Digital, World Bank, who will be joining us in a bit. Thank you. Right, I will let the moderator take it forward.

C.V. Madhukar

Thank you so much. Thank you, Arjun, and thank you to all the panelists. The last session of the last day is a bit challenging, so we will try and keep this focused, and at the end, if you have any questions, we have time. It would be great to have any questions if you want to have any discussions. I think we have heard a lot about AI and DPI in the last four days. I don’t want to belabor the point. I think what we can celebrate for sure in India is that, in terms of DPI and the thinking that we take to the world and to our own problems, we are amongst the best in the world.

And as we look at the journey on AI, which is just beginning for most of the world, what I see is if I look at the US, for instance, there is one spectrum of conversation, which is AGI and beyond. And on the other end, there is a despondency and worry and concern about jobs and privacy and a whole bunch of other important considerations. I think where India is, is somewhat different. India is saying, look, there will be a Chinese model, there will be an American model, there will be a whole bunch of other innovations that are going on. What do we do to embrace and use all of this for our benefit? I think that optimism and sense of can do and must do, I think is very exciting, and I think it’s been palpable in the last four or five days.

So to discuss the power of AI and DPI and AI in DPI, we have a wonderful panel as Arjun has introduced already. I will start with Dr. Hans, if you don’t mind. I think you represent a national government and you are living through these choices on a day-to-day basis. As a pragmatic government that has to think about sovereignty, inclusion, safeguards, what are the main considerations that are on top of your mind?

Dr. Hans Wijayasuriya

Thanks for the question. You’re right, I think those three words are very key when you’re talking from a government perspective and therefore national: inclusion, integrity or the safety of the citizens, and also safeguards. I’ll just run through these three very quickly. Why inclusion? When it’s government and you introduce a new capability, you have to be sure that that capability does not increase divides, that it actually reduces divides. So you have to be very sensitive on the inclusion angle. And, for instance, AI, together with DPI, can stretch inclusion through its voice-first capabilities; we talked about cloud-first, API-first, et cetera. Now we can really seriously talk about voice-first, and also the translation capabilities.

And accessibility has to be also broadened in terms of multi-modality and also, where necessary, include a human in the loop in the service delivery cycle as well. So some of the inclusion dimensions. Then more to integrity, which is a tougher one. An important point when you’re looking at integrity is to start from the premise that AI will not redefine DPI. AI would, or at least where we look at it from now, maybe I’ll be wrong in six months from today, but the DPI foundations must be in place first. DPI should be mature, and your approach to implementing DPI should be mature, and then you apply AI as a scaffolding on top of that foundation to accelerate your build and delivery.

So what are those foundations? Clean data and data maturity, maturity of your data architectures, clean registers, for instance, also reliable APIs, APIs which are not susceptible to cyber attacks, et cetera, and also the institutional capacity. The institutional capacity behind the DPI delivery. So these are foundations which should be in place. Now, on top of that, you apply AI and there are several unique features of AI which would deliver you super experience to the citizen. And I’ll come to that in a short time. The last one would be the safeguards, bias detection, and also the augmentation to consent. Because when you have AI systems in place, your consent could be AI generated as well. So you need to be careful about the augmentation you require, explainability, and human in the loop as well in terms of your safeguards.

So we need to be conscious that bias, opacity, at scale would mean harm at scale as well. Everything AI scales. Sovereignty is all about capability. It’s not isolation. It’s about building the capability in terms of being neutral and having a neutral capability in terms of vendors, in terms of cloud, in terms of technologies, so that you have control, that the state has control over those building blocks and can choose across technology delivery parts as well as core technologies. Also data classification, protection, privacy basics. So I think if you go back to the foundations of protecting your citizens and their data, you can’t go wrong, and AI will only be an accelerator. Finally, the experience angle, because a government would be very keen to deliver super experience, and where does AI come in here? I believe there are some specific strengths of AI when combined with DPIs, and that is, in particular, API orchestration: picking the right API to call at the right time. Also the fact that AI enables unconstrained scenarios. So while linear methodologies would be constrained to, say, 4 or 5 scenarios when a citizen needs some help, AI can do a billion scenarios, and each scenario could be implemented using a unique set of API calls and a unique set of DPI access, and this can be orchestrated using AI. So can AI make a big difference, a leap forward in service delivery?

I believe so. Can governments and the sovereign use it, or should they? Definitely, but we need to be conscious of those 4 dimensions that I just mentioned: inclusion, integrity, safeguards, sovereignty. All of these are important, and we are still coming together to deliver a perfect experience.

C.V. Madhukar

Thank you, Dr. Hans. I will give a bit of a breather to Sangbu. Thank you, Sangbu. Great to have you. I’ll go to Robert first and then… Robert, as you at UNDP have led a very important work on safeguards, worked on the Global Digital Compact, engaged a number of countries on sustainable development goals and how AI and DPI can be an accelerant to all of those outcomes if you want. From your vantage point as you look at the AI revolution that’s unfolding upon us, what captures, what’s top of mind for UNDP?

Robert Opp

Yeah, no. So I think that the reason we have been so excited about digital public infrastructure as an approach overall is that it really does bring some very particular characteristics. And one of those, maybe the most important, is the population scale. And so it is something that can reach so many people so quickly if you get it right. We also have been learning that if you don’t get it right, then you can have problems and challenges at scale. And so in the DPI space, one of the things that concerns us the most is: as countries are building their DPI, how do we make sure that we are putting the safeguards in place?

And this is work that has been supported generously by CoDevelop and Gates Foundation and others, and has led over the last year and a half or so to the creation of a universal DPI safeguards framework, which we’re now implementing, or supporting a number of countries at national level to implement. But when we talk about that, what does it actually mean? And Dr. Hans referred to some of these important things. What we’re learning is that the earlier in the process that you can start discussing the safeguards, the better off you’ll be down the road in terms of inclusion.

So if efficiency is your only metric, then you will probably rush ahead and leave people out. But if inclusion is your driving KPI, then you really need to make sure that you’re sitting down at the beginning and planning and designing with people in mind. When it comes to AI, then, it’s basically the same thing. And we need to be really careful, as Dr. Hans was saying, that we are putting the inclusion aspects, the safeguard aspects, at the center of our planning from the very beginning. And so that means, and you referred to a couple of these, but, you know, a multilingual platform, multimodal that can support people with disabilities, making sure that you are correcting for people.

So that’s one of the things that we’re doing. And then, of course, making sure that for data sets there’s bias detection, or you’ve got some understanding of your accuracy, representation, all of those kinds of things. Because when you’re going to layer AI into DPI as an accelerator, well then you want to be sure that you’re on the right track and that people are considered from the very beginning.

C.V. Madhukar

Thank you, Rob. Can I go to Sangbu now? As you see the evolution of DPI plus AI from the World Bank standpoint, one of the things that we’ve often looked at is that having open data sets that can train AI engines would be an important way of advancing the benefits of AI. But as you know, governments build silos, one silo after the other. How are you seeing this from the World Bank’s standpoint? What is the next journey for the next 3-5 years looking like in terms of getting countries to become more AI ready? Any thoughts on that would be great.

Sangbu Kim

DPI has a lot of aspects and characteristics. This is one of the very productive and efficient ways to ensure interoperability, even though it cannot ensure everything, but we make some more effort to clarify some more interoperability capability. So that’s why in the AI era, in order to prevent some siloed approach, DPI can play a very significant role, I would say. If you think about the AI era, what is the relationship between DPI and AI? If I just compare the DPI from the previous version of DPI, I would say DPI is more helpful for the AI era compared to the previous mobile era for many reasons. If you look back on our history of computing, we started from the computer and PC and evolved to mobile and evolved to AI.

The trend is that it is from the very supplier-centric approach through the user-centric approach. We are evolving from the supplier mindset through the user mindset. The DPI is exactly one of the really, really important tools to ensure that user-centricity. Because without identifying some good tools of users and some interoperability, it cannot really be achieved to fully support the user-customized things. So to me, DPI and AI will be a really important relationship. On the other hand, one opportunity I also see through this trend: maybe DPI can be also very well upgraded, quickly and efficiently upgraded, through AI-enabled technology. So we used to collect all the data in a very manual way and then try very hard to streamline the governance of data, sometimes very manually and sometimes by some intervention by the programmer, but now we can see some automated way so that we can quickly streamline all the DPI platforms. So this is really a good opportunity for all of us. From the World Bank perspective, we really expect to see some more progress in this space.

C.V. Madhukar

Wonderful Saibal as a senior partner at Boston Consulting Group you have a bird’s eye view of a lot of early thinking on AI led innovation around the world, especially in the private sector. And I’m also sure you’ve been observing closely the India trajectory of DPI in the last decade or so. What lessons from the DPI journey in India can we take to the AI era that might also propel private sector innovation to levels beyond if we just didn’t think about DPI as core infrastructure for AI?

Saibal Chakraborty

So I think, firstly, India’s journey in DPIs has been a fascinating one. It makes me immensely proud that whichever country I go to, and I do work with quite a few South Asian and Southeast Asian countries, India is almost always seen as a benchmark in DPI, and now increasingly AI on top of DPI. I think, so maybe I’ll just answer your question in two parts, right? I mean, when we were building DPIs, starting with Aadhaar, you know, and then, you know, moving on to UPI, et cetera, the idea was always to build open population scale software, which can then trigger innovation, right? And that’s exactly what has happened over the last decade, right? The amount of innovation that has been built on these DPIs is mind-blowing.

I mean, India is now a country of 120 unicorns, and every unicorn, some way or the other, leverages the DPIs. So then, coming to the second part of your question, I mean, what lessons, right? So if you look at the way India is now thinking about AI, and as BCG, we have been privileged to be part of two of the very leading efforts in the world, right? So we have done several seminal efforts, one with India AI Mission to build AI Kosh, India’s national AI platform, and also with the state of Telangana to build the equivalent for the state of Telangana. Both of these happened last year. I think we are taking a very, very similar ethos, right?

So if you think about India, as I mentioned, there are 120 unicorns. There is no dearth of VC funding in India at all. However, 90% of that VC funding actually goes into fintech and e-commerce. Very little goes into climate and sustainability. Very little goes into education. Very little goes into MSME relevant topics. So there is a gap, right? So what are these platforms trying to do? And then similarly, access to data: within the private sector, there is good quality data. But the biggest source of data in India is the government. The access to government data is still at a very nascent state. Quality of data and access to data. And then, of course, the biggest thing in AI, which is compute, right?

I mean, access to compute is what makes or breaks a startup. So in India, the way I see it, the way we have started thinking about AI platforms, and I’ll use the word platform, it treats AI as a shared public infrastructure. Just like DPI was a shared public infrastructure, it treats now AI as a shared public infrastructure. If you look at the India AI Mission, more than 38,000 GPUs are now available at, you know, less than rupees 60 per hour, which is less than a dollar per hour. So if you are a startup, very early stages, working in your garage, suddenly GPUs have become a bit more affordable for you, right? That’s genuine shared public infrastructure.

Government data, it’s early days, it’s very early days, but government data is being provided access to through platforms like AI Kosh, or in Telangana through the TGDX. Then, to solve the financing problem, how do you channel financing into socially sensitive sectors? The central government and the states are thinking of building fund of funds, which actually then, you know, encourage VCs to co-invest, focusing on those socially sensitive sectors where they would normally not invest. So if you think about it, the ethos is very similar to when we were building DPIs. How do I create shared capabilities centrally, which then can trigger an entire new wave of startups, you know, and therefore a market ecosystem, just like the DPIs did.

So that’s how I see the journey.

C.V. Madhukar

That’s great insight. It seems from what you say that there’s a beginning of a new innovation cycle for the private sector, and we’re looking forward to what comes out. I just wanted to, we have about 10 minutes left, have a somewhat common forward-looking question to all panelists. As the economists say, in the long run, we’re all dead. But what is the long run in the AI ecosystem? Is it five years? Is it three years? Because it’s so hard to predict everything. Every day there’s something new happening. So I don’t know, Dr. Hans, if you’re okay going first on this forward-looking question. The question I would ask of you is, as a relatively smaller island nation, how do you expect to leverage this wave of AI innovation over the next three years, maybe five years?

What steps are you anticipating? How are you preparing yourselves to leverage this power that we have to advance our development outcomes?

Dr. Hans Wijayasuriya

So there’s a plus and a minus of being small. Let’s start with some of the challenges. Being small, you’re on the wrong side of the AI divide unless you’re economically in a very powerful position. Say, for example, Singapore is an outlier, a small country with a lot of economic power that therefore attracts investors, attracts talent and the siting of business. As a small country, one of the challenges we have in Sri Lanka is, one, getting that minimum level of sovereign AI infrastructure in place, having the ecosystems around it, retaining our talent. Sri Lanka has very good talent, but retaining and developing the talent for Sri Lanka is a challenge, but one that we are confident that we can deliver.

So the future of AI, I think, will depend on the market. It’ll also depend on the people. It’ll also depend on the trust in us: the institutions, like the data protection institutions, as well as the laws, and Sri Lanka is very mature on this front. So on the trust side, a smaller country can execute precisely with laser-sharp focus and therefore has a strength. Talent side, again, it’s something to focus on. Now when it comes to the marriage with DPI, I think it falls onto the positives of being a small country, because the ability to implement modular systems in a neat and flexible way, in a way that these blocks themselves will evolve, on the confidence that you have a strong trust environment, gives you the ability to build in AI where, like I mentioned earlier, AI sits.

On top of a solid DPI, a mature DPI frame. So I feel the AI future will of course be very close, can add that extra piece of experience, lower cost, faster, and more flexible, meaning that it can address multiple scenarios through digital twin and other such AI constructs to deliver citizens a very customized, I’m from the service industry in the past, so I use the word customized, but citizen-specific experience. We’ve been tracking and learning about the focus of the new government and their leadership and the presence of AI in the world. We’re working with the government’s leadership, making big advances on DPI and AI, and looking forward to much exciting stuff in Sri Lanka in the next two to three years.

C.V. Madhukar

Thank you for those comments, Dr. Hans. Rob, could I come to you and think about, you know, you’ve gone through the process over the last couple of years with DPI safeguards. You have the Global Digital Compact. I know there’s a lot of work you’re doing on AI safeguards. Moving away from safeguards, I wanted to see how you are envisioning the developmental role of UNDP leveraging AI in the next three years. I guess three years is long term, but anything you can say that would be helpful for us.

Robert Opp

No, absolutely. So I think there’s a couple levels. So there is an internal to the organization level, which is how do we ensure that UNDP itself has capabilities for leveraging AI to maximum effect. And so there’s a kind of a base level of work that we’ve done internally to the organization, upskilling programs, investing in some, you know, making sure foundation model capabilities are available, working in some SLMs, et cetera, et cetera. Then there’s the layer of working across the kind of verticals that UNDP has, whether it’s environmental action, governance programs, energy, et cetera, et cetera. And so how do we embed AI solutions and thinking and approaches into those verticals? And then there’s the picture of how are we going to support our country partners?

And as you said, we’re engaged in quite a few countries already on AI transformation support, and it’s kind of looking at ecosystem pieces. Do countries have that mix of elements that Dr. Hans was referring to? Do you have the compute accessibility? Do you have talent? Do you have the data available? And so on and so forth. But the one thing that we’ve announced during this summit is an exciting partnership with Xstep and a number of other players, and we’re one of those players, on something we’re calling 100 Pathways, or Diffusion Pathways. And that is looking at kind of more of a use-case-driven approach: finding over the next few years 100 different pathways to scaling responsible use of AI along different use cases. And it’s something that we’re really excited about, because I think we need that to complement the ecosystem support.

C.V. Madhukar

That’s so exciting to hear, because I think there will be a lot of iteration, discovery, and innovation to discover those 100 pathways to actually add value to people on the ground. Looking forward to what comes out of that. Thank you. If you were to look at the last decade of how development banks have funded and looked at digitization, and now if you look at the three years ahead of you, what might the World Bank and MDBs do differently to be ready, to help countries become more ready for the AI era, that you haven’t been doing in the last decade or so? Any thoughts would be great.

Sangbu Kim

So the good news is that, from the Internet network point of view, the coverage is pretty good. So even in sub-Saharan Africa, more than 90% of the area of sub-Saharan Africa is covered by three-plus-generation mobile towers. But the issue is that we are really struggling with lack of demand. How are we going to fully utilize this by creating some more value and profit? So now we are modifying our approach to really think about creating demand through government programs, through developing some use cases. That’s why we just keep highlighting the importance of small AI. Small AI is not really a small thing. It is really about how we can really change the lives of our people.

So our approach is a little tweaking to the user -centric and demand -driven things. That’s our approach.

C.V. Madhukar

That’s great. And I think MDBs and the relationships that they have with countries can make a big difference in how the evolution and the benefits of AI will come to people. So looking forward to what comes out. Saibal, I know I started off by bucketing you as a private sector guy. But I also know you’ve been thinking deeply about government policy that enables or doesn’t enable innovation and growth. So as you look at the next few years, even from a private sector innovation lens, what government policies might propel the innovation ecosystem to serve the underserved populations around the world? Any quick thoughts, Saibal?

Saibal Chakraborty

So, see, I think, you know, the government has a very tricky balancing act, right? I mean, when we were going through this entire experience of building AI Kosh, there’s obviously, for very good reasons, a lot of sensitivity around sovereign data, what data to expose, and if exposed, to whom, right? Equally, without sharing of data, I mean, the reason why AI has taken off in a big way in some sectors is because the internet came into the picture 30 years back, and the internet has been pumping, you know, billions and billions of gigabytes of data, right? So AI has something to chew upon. Now, if we have to do the same with government data, then that data, you know, needs to be exposed in a controlled manner.

So my sense is that from a policy standpoint, how do you actually provide that access to data? I mean, walking that tightrope where valuable data is made available to the innovators while not compromising on sovereignty or safety. I think that is one of the policy areas the government has to look at. Specifically for a country like India or similar countries which operate in a federated model, the center can do only so much. The real action, as we know, in a country like India happens at the state level and we have 30 plus of them combining states and union territories. So at the state level also, similar policies and institutions have to be set up. So Telangana, for example, has set up a Section 8 public sector undertaking to drive AI, right?

That creates the kind of focus and the agility that you will need to keep pace with this technology and do some real work at the grassroots. So my suggestion would be: create those accountable institutions who can anchor and drive AI, and amend policies around data to make sure that people get access, the innovators get access, while not compromising on sovereignty. It’s not an easy thing to do, but yeah, those would be my words.

C.V. Madhukar

Thank you, Saibal. I think the role of institutions, both for safeguards but also to enable the innovation ecosystem, has never been more important than it is now. I think it’s hard to summarize this conversation, but I will say that I think we’re at the cusp of something extremely important and something very potent in some ways, which can unlock a lot of opportunity for innovation for billions of people. Especially, I think, the segment of the population that was left out of the digital revolution because voice was not the predominant way of interacting. I think AI opens up that window and hopefully will drive much more widespread adoption and usage by common people around the world.

Looking forward to the next few years and thank you very much to our panelists for a wonderful discussion. Thank you all. Thank you.

Speaker 1

Thank you so much to all the speakers. We will just have one memento being given by the organizing team. We made it. Thank you so much for being a part of the India AI Impact Summit. Just to tell you that the expo will be open tomorrow. People still want to come in and people are still not tired, but we are done for today and the sessions are done. Thank you so much. Thank you. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Dr. Hans Wijayasuriya
7 arguments · 134 words per minute · 1135 words · 505 seconds
Argument 1
DPI foundations must be mature before applying AI as scaffolding on top
EXPLANATION
Dr. Hans argues that AI should not redefine DPI, but rather DPI foundations must be established first with mature data architectures, clean registers, reliable APIs, and institutional capacity. Only then should AI be applied as scaffolding on top of that foundation to accelerate build and delivery.
EVIDENCE
Clean data and data maturity, maturity of your data architectures, clean registers, reliable APIs which are not susceptible to cyber attacks, and institutional capacity behind DPI delivery are foundational requirements
MAJOR DISCUSSION POINT
AI and DPI Integration Framework
AGREED WITH
Saibal Chakraborty
Argument 2
Governments must balance inclusion, integrity, safeguards, and sovereignty when implementing AI
EXPLANATION
From a government perspective, Dr. Hans emphasizes that when introducing new AI capabilities, governments must ensure they reduce rather than increase divides, maintain data integrity, implement proper safeguards, and preserve sovereignty. This requires careful consideration of voice-first capabilities, translation, and accessibility features.
EVIDENCE
AI together with DPI can stretch inclusion through voice-first capabilities and translation capabilities; need for bias detection, explainability, and human in the loop safeguards; sovereignty is about building neutral capability across vendors, cloud, and technologies
MAJOR DISCUSSION POINT
Government Policy and Sovereignty Considerations
DISAGREED WITH
Saibal Chakraborty
Argument 3
AI enables voice-first capabilities and translation to reduce digital divides
EXPLANATION
Dr. Hans argues that AI, when combined with DPI, can significantly improve inclusion by enabling voice-first interactions and translation capabilities. This allows for broader accessibility and helps bridge digital divides by accommodating users who may not be literate or comfortable with traditional digital interfaces.
EVIDENCE
Voice-first capabilities, translation capabilities, and multi-modality features; includes human in the loop in service delivery cycle where necessary
MAJOR DISCUSSION POINT
Inclusion and Accessibility Through AI
AGREED WITH
Robert Opp
Argument 4
Small countries can execute with laser-sharp focus despite challenges in retaining talent
EXPLANATION
Dr. Hans acknowledges that small countries face challenges in building sovereign AI infrastructure and retaining talent, but argues they have advantages in executing precisely with focused implementation. Small countries can build modular systems in a neat and flexible way with strong trust environments.
EVIDENCE
Sri Lanka has very good talent but faces challenges in retention; ability to implement modular systems with strong trust environment and mature data protection institutions
MAJOR DISCUSSION POINT
Government Policy and Sovereignty Considerations
Argument 5
Bias detection and explainability are crucial as AI scales problems at population level
EXPLANATION
Dr. Hans warns that when AI systems operate at scale, any bias or opacity will result in harm at scale. Therefore, governments must implement robust safeguards including bias detection, explainability mechanisms, and human oversight to prevent widespread negative impacts.
EVIDENCE
Everything AI scales, so bias and opacity at scale would mean harm at scale; need for augmentation to consent when AI systems generate consent
MAJOR DISCUSSION POINT
Safeguards and Risk Management
AGREED WITH
Robert Opp
Argument 6
Need for human-in-the-loop systems and AI-generated consent augmentation
EXPLANATION
Dr. Hans emphasizes that AI systems require human oversight and intervention capabilities, particularly when AI is used to generate consent mechanisms. This ensures that automated systems maintain human agency and accountability in critical decision-making processes.
EVIDENCE
When you have AI systems in place, your consent could be AI generated as well, so you need to be careful about the augmentation you require
MAJOR DISCUSSION POINT
Safeguards and Risk Management
Argument 7
Institutional capacity and reliable APIs resistant to cyber attacks are foundational requirements
EXPLANATION
Dr. Hans stresses that before implementing AI, countries must have strong institutional capacity and secure, reliable APIs that can withstand cyber attacks. These foundational elements are essential for building trustworthy AI systems on top of DPI infrastructure.
EVIDENCE
Clean registers, reliable APIs which are not susceptible to cyber attacks, and institutional capacity behind DPI delivery are foundational requirements
MAJOR DISCUSSION POINT
Safeguards and Risk Management
Saibal Chakraborty
6 arguments · 155 words per minute · 978 words · 377 seconds
Argument 1
AI should be treated as shared public infrastructure, similar to how DPI was built
EXPLANATION
Saibal argues that India is approaching AI with the same ethos as DPI – treating it as shared public infrastructure that can trigger innovation. This includes providing affordable access to compute resources, government data, and funding mechanisms to enable startups and innovation across sectors.
EVIDENCE
India AI Mission provides 38,000+ GPUs at less than $1 per hour; government data access through platforms like AIKosh; fund of funds to encourage VC co-investment in socially sensitive sectors
MAJOR DISCUSSION POINT
AI and DPI Integration Framework
AGREED WITH
Dr. Hans Wijayasuriya
Argument 2
India’s 120 unicorns all leverage DPI infrastructure in some way
EXPLANATION
Saibal highlights that India’s success in creating unicorn companies is directly tied to the DPI infrastructure built over the past decade. Every unicorn startup leverages DPI in some capacity, demonstrating the power of open, population-scale software to trigger innovation.
EVIDENCE
India is now a country of 120 unicorns, and every unicorn, some way or the other, leverages the DPIs
MAJOR DISCUSSION POINT
Private Sector Innovation and Funding
Argument 3
90% of VC funding goes to fintech/e-commerce while climate, education, and MSME sectors lack funding
EXPLANATION
Saibal identifies a funding gap where despite abundant VC funding in India, most investment flows to fintech and e-commerce rather than socially important sectors like climate, education, and MSME support. This creates an opportunity for AI platforms to redirect innovation toward underserved areas.
EVIDENCE
90% of VC funding goes into fintech and e-commerce, very little goes into climate and sustainability, education, and MSME relevant topics
MAJOR DISCUSSION POINT
Private Sector Innovation and Funding
Argument 4
AI platforms provide affordable GPU access (less than $1/hour) to enable startup innovation
EXPLANATION
Saibal explains that by making compute resources affordable through shared public infrastructure, AI platforms democratize access to expensive GPU resources that would otherwise be prohibitive for early-stage startups. This removes a major barrier to AI innovation.
EVIDENCE
More than 38,000 GPUs are now available at less than rupees 60 per hour, which is less than a dollar per hour through India AI mission
MAJOR DISCUSSION POINT
Private Sector Innovation and Funding
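The affordability figure quoted here can be sanity-checked with back-of-the-envelope arithmetic. The exchange rate below is an illustrative assumption (the session quotes only the rupee and dollar figures, not a rate):

```python
# Sanity check of the "less than $1 per GPU-hour" claim from the India AI Mission figures.
INR_PER_HOUR = 60      # quoted price: under Rs. 60 per GPU-hour
INR_PER_USD = 83.0     # assumed exchange rate, for illustration only

usd_per_hour = INR_PER_HOUR / INR_PER_USD
print(f"${usd_per_hour:.2f} per GPU-hour")  # about $0.72, comfortably under $1
assert usd_per_hour < 1.0
```

At any plausible recent rupee-dollar rate, the quoted ₹60/hour price does indeed work out to well under a dollar per GPU-hour.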
Argument 5
Need for controlled data exposure policies that enable innovation while maintaining sovereignty
EXPLANATION
Saibal acknowledges the challenge governments face in balancing data sharing for AI innovation with sovereignty concerns. He argues for policies that provide controlled access to valuable government data for innovators while protecting national interests and citizen privacy.
EVIDENCE
Government has sensitivity around sovereign data, what data to expose and to whom; need for walking the tightrope where valuable data is made available to innovators while not compromising on sovereignty or safety
MAJOR DISCUSSION POINT
Government Policy and Sovereignty Considerations
AGREED WITH
Sangbu Kim
DISAGREED WITH
Dr. Hans Wijayasuriya
Argument 6
Creating accountable institutions at both central and state levels to drive AI initiatives
EXPLANATION
Saibal emphasizes that in federated countries like India, both central and state-level institutions are needed to drive AI initiatives effectively. He cites Telangana’s creation of a Section 8 public sector undertaking as an example of creating focused, agile institutions that can keep pace with AI technology development.
EVIDENCE
Telangana has set up a Section 8 public sector undertaking to drive AI, creating focus and agility needed to keep pace with technology and do real grassroots work
MAJOR DISCUSSION POINT
Government Policy and Sovereignty Considerations
Sangbu Kim
3 arguments · 108 words per minute · 474 words · 263 seconds
Argument 1
DPI is more helpful for AI era compared to previous mobile era due to user-centric approach
EXPLANATION
Sangbu argues that the evolution from computer to mobile to AI represents a shift from supplier-centric to user-centric approaches, and DPI is particularly well-suited for the AI era because it ensures user-centricity through interoperability and user identification tools. This makes DPI more valuable now than in previous technological eras.
EVIDENCE
Computing evolution from supplier-centric to user-centric approach; DPI provides tools for user-centricity through interoperability; without identifying users and interoperability, user-customized services cannot be achieved
MAJOR DISCUSSION POINT
AI and DPI Integration Framework
Argument 2
Shift from supply-driven to demand-driven approach focusing on creating value and use cases
EXPLANATION
Sangbu explains that the World Bank is modifying its approach from focusing on infrastructure supply to creating demand through government programs and developing specific use cases. This represents a strategic shift toward ensuring that digital infrastructure actually creates value for users rather than just providing coverage.
EVIDENCE
More than 90% of sub-Saharan Africa is covered by 3G+ mobile towers but struggling with lack of demand utilization
MAJOR DISCUSSION POINT
Development Bank Approach and Support
AGREED WITH
Saibal Chakraborty
DISAGREED WITH
Robert Opp
Argument 3
Despite good mobile coverage, struggling with lack of demand utilization in developing regions
EXPLANATION
Sangbu highlights a key challenge where despite excellent mobile network coverage in regions like sub-Saharan Africa, there is insufficient demand and utilization. This indicates that infrastructure alone is not sufficient – there must be compelling use cases and value propositions to drive adoption.
EVIDENCE
More than 90% of area of sub-Saharan Africa is covered by 3G+ generation mobile tower but struggling with lack of demand on how to fully utilize this by creating value and profit
MAJOR DISCUSSION POINT
Development Bank Approach and Support
Robert Opp
6 arguments · 170 words per minute · 854 words · 300 seconds
Argument 1
Early planning with safeguards is essential when layering AI into DPI systems
EXPLANATION
Robert emphasizes that safeguards must be considered from the very beginning of AI and DPI planning, not as an afterthought. He argues that if inclusion is the driving metric rather than efficiency alone, then proper planning and design with people in mind from the start is crucial for successful implementation.
EVIDENCE
The earlier in the process that you can start discussing safeguards, the better off you’ll be; if efficiency is your only metric, you will rush ahead and leave people out
MAJOR DISCUSSION POINT
AI and DPI Integration Framework
AGREED WITH
Dr. Hans Wijayasuriya
Argument 2
Inclusion must be the driving KPI rather than efficiency alone to avoid leaving people out
EXPLANATION
Robert argues that organizations and governments must prioritize inclusion as their key performance indicator when implementing AI and DPI systems. If efficiency is the sole focus, there is a risk of rushing implementation and inadvertently excluding vulnerable populations who need these services most.
EVIDENCE
If efficiency is your only metric, then you will probably rush ahead and leave people out, but if inclusion is your driving KPI, then you need to plan and design with people in mind
MAJOR DISCUSSION POINT
Inclusion and Accessibility Through AI
AGREED WITH
Dr. Hans Wijayasuriya, C.V. Madhukar
Argument 3
Multilingual and multimodal platforms are essential for supporting people with disabilities
EXPLANATION
Robert stresses that AI-enabled DPI systems must be designed with multilingual capabilities and multiple interaction modes to ensure accessibility for people with disabilities and diverse linguistic backgrounds. This is fundamental to achieving true inclusion in digital services.
EVIDENCE
Multilingual platform, multimodal that can support people with disabilities; bias detection and understanding of accuracy representation in datasets
MAJOR DISCUSSION POINT
Inclusion and Accessibility Through AI
AGREED WITH
Dr. Hans Wijayasuriya
Argument 4
Universal DPI safeguards framework being implemented at national levels across countries
EXPLANATION
Robert describes UNDP’s work in creating and implementing a universal DPI safeguards framework that countries can adapt at the national level. This framework, supported by organizations like Codevelop and Gates Foundation, provides structured guidance for implementing DPI safely and inclusively.
EVIDENCE
Work supported by Codevelop and Gates Foundation led to creation of universal DPI safeguards framework over the last year and a half, now supporting multiple countries for national implementation
MAJOR DISCUSSION POINT
Safeguards and Risk Management
Argument 5
Supporting countries with ecosystem elements including compute accessibility, talent, and data availability
EXPLANATION
Robert explains that UNDP’s approach to supporting countries involves assessing and strengthening the complete AI ecosystem, including access to compute resources, availability of skilled talent, and access to quality data. This holistic approach ensures countries have all necessary components for successful AI implementation.
EVIDENCE
Looking at ecosystem pieces – do countries have compute accessibility, talent, data available, and other essential elements
MAJOR DISCUSSION POINT
Development Bank Approach and Support
DISAGREED WITH
Sangbu Kim
Argument 6
Launching 100 Pathways initiative to find scalable responsible AI use cases
EXPLANATION
Robert announces an exciting partnership called 100 Pathways or Diffusion Pathways that takes a use case-driven approach to scaling responsible AI implementation. This initiative aims to identify and develop 100 different pathways for responsible AI scaling across various applications and contexts.
EVIDENCE
Partnership with Xstep and other players on 100 Pathways initiative, taking use case driven approach to find 100 different pathways to scaling responsible use of AI
MAJOR DISCUSSION POINT
Development Bank Approach and Support
C.V. Madhukar
1 argument · 139 words per minute · 1254 words · 538 seconds
Argument 1
AI opens opportunities for populations previously left out of digital revolution
EXPLANATION
C.V. Madhukar observes that AI, particularly through voice-first interactions, creates opportunities to include populations who were excluded from the previous digital revolution because traditional digital interfaces were not accessible to them. This represents a significant opportunity for broader digital inclusion.
EVIDENCE
The segment of the population that was left out of the digital revolution because voice was not the predominant way of interacting; AI opens up that window for widespread adoption by common people
MAJOR DISCUSSION POINT
Inclusion and Accessibility Through AI
AGREED WITH
Dr. Hans Wijayasuriya, Robert Opp
Speaker 1
1 argument · 74 words per minute · 129 words · 104 seconds
Argument 1
The India AI Impact Summit successfully concluded with continued expo access for interested participants
EXPLANATION
Speaker 1 announces the successful completion of the India AI Impact Summit sessions while noting that the expo will remain open for those who want to continue exploring. This indicates the comprehensive nature of the event and ongoing engagement opportunities for participants.
EVIDENCE
The expo will be open tomorrow for people who still want to come in and are not tired, but the sessions are done for today
MAJOR DISCUSSION POINT
Event Management and Conclusion
Agreements
Agreement Points
AI should be built on mature DPI foundations rather than replacing them
Speakers: Dr. Hans Wijayasuriya, Saibal Chakraborty
DPI foundations must be mature before applying AI as scaffolding on top
AI should be treated as shared public infrastructure, similar to how DPI was built
Both speakers agree that AI should complement and build upon existing DPI infrastructure rather than replace it. Dr. Hans emphasizes that DPI foundations must be established first, while Saibal advocates for treating AI as shared public infrastructure following the same successful model as DPI.
Inclusion must be prioritized from the beginning of AI and DPI implementation
Speakers: Dr. Hans Wijayasuriya, Robert Opp, C.V. Madhukar
AI enables voice-first capabilities and translation to reduce digital divides
Inclusion must be the driving KPI rather than efficiency alone to avoid leaving people out
AI opens opportunities for populations previously left out of digital revolution
All three speakers emphasize that inclusion should be a primary consideration when implementing AI and DPI systems. They agree that AI’s voice-first capabilities can help bridge digital divides and reach previously excluded populations, but this requires deliberate planning and prioritization of inclusion over pure efficiency.
Safeguards and risk management are critical for AI implementation at scale
Speakers: Dr. Hans Wijayasuriya, Robert Opp
Bias detection and explainability are crucial as AI scales problems at population level
Early planning with safeguards is essential when layering AI into DPI systems
Both speakers strongly agree that safeguards must be built into AI systems from the beginning, not as an afterthought. They emphasize that problems scale with AI implementation, making early planning for bias detection, explainability, and other safeguards essential for responsible deployment.
Voice-first and multilingual capabilities are essential for inclusive AI systems
Speakers: Dr. Hans Wijayasuriya, Robert Opp
AI enables voice-first capabilities and translation to reduce digital divides
Multilingual and multimodal platforms are essential for supporting people with disabilities
Both speakers agree that AI systems must incorporate voice-first interactions and multilingual capabilities to ensure accessibility and inclusion. They see these features as fundamental to reaching diverse populations and people with disabilities.
Government data access needs careful balance between innovation and sovereignty
Speakers: Saibal Chakraborty, Sangbu Kim
Need for controlled data exposure policies that enable innovation while maintaining sovereignty
Shift from supply-driven to demand-driven approach focusing on creating value and use cases
Both speakers acknowledge the challenge of making government data available for AI innovation while protecting sovereignty and citizen interests. They agree that a balanced approach is needed that enables innovation through controlled data access while maintaining security and sovereignty.
Similar Viewpoints
Both speakers emphasize the importance of maintaining human oversight and control in AI systems, particularly around consent mechanisms and decision-making processes. They advocate for structured frameworks to ensure responsible AI implementation.
Speakers: Dr. Hans Wijayasuriya, Robert Opp
Need for human-in-the-loop systems and AI-generated consent augmentation
Universal DPI safeguards framework being implemented at national levels across countries
Both speakers recognize the need for strong institutional frameworks and comprehensive ecosystem support to enable successful AI implementation. They emphasize the importance of having the right organizational structures and complete set of resources.
Speakers: Saibal Chakraborty, Robert Opp
Creating accountable institutions at both central and state levels to drive AI initiatives
Supporting countries with ecosystem elements including compute accessibility, talent, and data availability
Both speakers see DPI as particularly well-suited for the AI era due to its user-centric design and proven track record in enabling innovation. They view DPI as a foundation that becomes even more valuable in the context of AI development.
Speakers: Sangbu Kim, Saibal Chakraborty
DPI is more helpful for AI era compared to previous mobile era due to user-centric approach
India’s 120 unicorns all leverage DPI infrastructure in some way
Unexpected Consensus
Small countries can be advantageous for AI implementation
Speakers: Dr. Hans Wijayasuriya
Small countries can execute with laser-sharp focus despite challenges in retaining talent
While one might expect small countries to be at a disadvantage in AI development, Dr. Hans argues that small countries like Sri Lanka can actually have advantages in executing AI initiatives with precision and focus, despite challenges like talent retention. This is an unexpectedly optimistic perspective on small-nation AI capabilities.
Infrastructure coverage is not the main challenge in developing regions
Speakers: Sangbu Kim
Despite good mobile coverage, struggling with lack of demand utilization in developing regions
Contrary to common assumptions that infrastructure coverage is the primary barrier in developing regions, Sangbu reveals that over 90% of sub-Saharan Africa has mobile coverage, but the real challenge is creating demand and utilization. This shifts focus from supply-side to demand-side solutions.
Private sector innovation requires public sector data and infrastructure support
Speakers: Saibal Chakraborty
90% of VC funding goes to fintech/e-commerce while climate, education, and MSME sectors lack funding
AI platforms provide affordable GPU access (less than $1/hour) to enable startup innovation
There’s unexpected consensus that successful private sector AI innovation actually depends heavily on public sector support through shared infrastructure and data access. This challenges the typical narrative of private sector independence and highlights the need for public-private collaboration.
Overall Assessment

The speakers demonstrate strong consensus on several key principles: AI should build upon mature DPI foundations rather than replace them, inclusion must be prioritized from the beginning, comprehensive safeguards are essential, and successful AI implementation requires balanced approaches to data governance and institutional support. There is also agreement on the importance of voice-first and multilingual capabilities for accessibility.

High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers represent different sectors (government, international development, private sector, multilateral banks) but share similar values around responsible AI implementation, inclusion, and the importance of strong foundational infrastructure. This consensus suggests a mature understanding of AI implementation challenges and a shared commitment to equitable outcomes, which bodes well for coordinated global efforts in AI and DPI development.

Differences
Different Viewpoints
Approach to data sharing and access
Speakers: Saibal Chakraborty, Dr. Hans Wijayasuriya
Need for controlled data exposure policies that enable innovation while maintaining sovereignty
Governments must balance inclusion, integrity, safeguards, and sovereignty when implementing AI
Saibal emphasizes the need for more open data access to enable innovation, arguing that valuable government data should be made available to innovators in a controlled manner. Dr. Hans takes a more cautious approach, emphasizing that sovereignty and safeguards must be prioritized, with data protection being fundamental before any exposure.
Development approach – supply vs demand focus
Speakers: Sangbu Kim, Robert Opp
Shift from supply-driven to demand-driven approach focusing on creating value and use cases
Supporting countries with ecosystem elements including compute accessibility, talent, and data availability
Sangbu advocates for shifting from infrastructure supply to demand creation through use cases, noting that despite good coverage, utilization remains low. Robert focuses on ensuring countries have complete ecosystem elements including infrastructure, suggesting a more comprehensive supply-side approach to support.
Unexpected Differences
Role of small countries in AI development
Speakers: Dr. Hans Wijayasuriya
Small countries can execute with laser-sharp focus despite challenges in retaining talent
Dr. Hans presents a unique perspective that small countries actually have advantages in AI implementation through focused execution and strong trust environments, which contrasts with typical assumptions that only large countries with extensive resources can succeed in AI development. This perspective was not challenged by other speakers but represents an unexpected viewpoint.
Overall Assessment

The discussion revealed relatively low levels of fundamental disagreement, with most differences centered on implementation approaches rather than core principles. Key areas of difference included the balance between data openness and sovereignty, and whether to prioritize supply-side infrastructure or demand-side use case development.

Low to moderate disagreement level. The speakers generally aligned on the importance of safeguards, inclusion, and the potential of AI to enhance DPI, but differed on sequencing, priorities, and implementation strategies. These disagreements reflect different institutional perspectives and contexts rather than fundamental philosophical differences, suggesting good potential for collaborative approaches that incorporate multiple viewpoints.

Partial Agreements
Both speakers agree that safeguards are crucial for AI implementation, but they differ on timing and approach. Dr. Hans emphasizes the need to have mature DPI foundations first before applying AI, while Robert focuses on incorporating safeguards from the very beginning of the planning process.
Speakers: Dr. Hans Wijayasuriya, Robert Opp
Governments must balance inclusion, integrity, safeguards, and sovereignty when implementing AI
Early planning with safeguards is essential when layering AI into DPI systems
Both agree that AI should build upon DPI infrastructure, but they have different perspectives on sequencing. Saibal sees AI as shared public infrastructure that can be developed in parallel with innovation ecosystems, while Dr. Hans insists on a sequential approach where DPI must be fully mature before AI implementation.
Speakers: Saibal Chakraborty, Dr. Hans Wijayasuriya
AI should be treated as shared public infrastructure, similar to how DPI was built
DPI foundations must be mature before applying AI as scaffolding on top
Both speakers agree that AI can improve inclusion for previously excluded populations, but they emphasize different aspects. Robert focuses on making inclusion the primary metric from the planning stage, while C.V. Madhukar highlights the specific opportunity that voice-first AI creates for broader population access.
Speakers: Robert Opp, C.V. Madhukar
Inclusion must be the driving KPI rather than efficiency alone to avoid leaving people out
AI opens opportunities for populations previously left out of digital revolution
Takeaways
Key takeaways
AI should be implemented as scaffolding on top of mature DPI foundations, not as a replacement for DPI infrastructure
Governments must balance four critical dimensions when implementing AI: inclusion, integrity, safeguards, and sovereignty
AI enables voice-first and multimodal capabilities that can reduce digital divides and include previously underserved populations
India’s approach treats AI as shared public infrastructure, similar to how DPI was built, enabling startup innovation through affordable access to compute resources
Early planning with safeguards is essential – inclusion must be the driving KPI rather than efficiency alone to prevent leaving people out
Development banks are shifting from supply-driven to demand-driven approaches, focusing on creating value and use cases rather than just infrastructure coverage
The marriage of AI and DPI requires strong institutional capacity, clean data architectures, reliable APIs, and robust cybersecurity measures
Small countries can leverage their ability to execute with laser-sharp focus, though they face challenges in talent retention and accessing sovereign AI infrastructure
Resolutions and action items
UNDP announced the ‘100 Pathways’ initiative in partnership with Xstep and other players to find 100 different pathways to scaling responsible AI use cases
Implementation of universal DPI safeguards framework at national levels across multiple countries
India AI Mission providing access to 38,000+ GPUs at less than $1 per hour to make compute affordable for startups
Building fund of funds by central government and states to encourage VC co-investment in socially sensitive sectors
Sri Lanka working on establishing minimum level of sovereign AI infrastructure and developing talent retention strategies
Unresolved issues
How to balance controlled data exposure for innovation while maintaining sovereignty and safety – described as a ‘tricky balancing act’
Access to government data remains at a ‘very nascent state’ despite government being the biggest source of quality data
Talent retention challenges for smaller countries in the AI era
Funding gaps persist with 90% of VC funding going to fintech/e-commerce while climate, education, and MSME sectors remain underfunded
The challenge of creating demand and utilization despite good mobile infrastructure coverage in developing regions
Long-term sustainability and scalability of AI initiatives at state and local levels in federated systems
Suggested compromises
Implementing human-in-the-loop systems to balance AI automation with human oversight and safety
Creating accountable institutions at both central and state levels to balance agility with governance requirements
Developing multimodal platforms that can serve both tech-savvy and traditional users through various interaction methods
Building AI platforms that provide shared capabilities while allowing customization for specific use cases and sectors
Establishing controlled data sharing mechanisms that enable innovation while protecting sovereign interests
Thought Provoking Comments
AI will not redefine DPI. AI would, or at least where we look at it from now, maybe I’ll be wrong in six months from today, but the DPI foundations must be in place first. DPI should be mature, and your approach to implementing DPI should be mature, and then you apply AI as a scaffolding on top of that foundation to accelerate your build and delivery.
This comment is particularly insightful because it challenges the common assumption that AI will fundamentally transform everything it touches. Instead, Dr. Hans presents a more nuanced view that positions AI as an accelerator rather than a disruptor of DPI. His acknowledgment that he ‘may be wrong in six months’ also demonstrates intellectual humility about the rapid pace of AI development.
This comment established a foundational framework for the entire discussion, positioning DPI as the necessary infrastructure layer with AI as an enhancement tool. It influenced subsequent speakers to focus on how AI can amplify existing capabilities rather than replace them, and set the tone for a more measured, implementation-focused conversation rather than speculative futurism.
Speaker: Dr. Hans Wijayasuriya
So everything AI scales. So we need to be conscious that bias, opacity, at scale would mean harm at scale as well.
This is a profound observation about the double-edged nature of AI’s scaling capabilities. It succinctly captures one of the most critical challenges in AI deployment – that both benefits and harms are amplified at population scale, making safeguards not just important but existentially critical.
This comment shifted the discussion toward the critical importance of safeguards and responsible implementation. It provided a conceptual bridge to Robert Opp’s subsequent detailed discussion about UNDP’s safeguards framework and reinforced the need for careful, inclusive planning from the outset.
Speaker: Dr. Hans Wijayasuriya
If efficiency is your only metric, then you will probably rush ahead and leave people out. But if inclusion is your driving KPI, then you really need to make sure that you’re sitting down at the beginning and planning and designing with people in mind.
This comment presents a fundamental tension in technology deployment and offers a clear framework for prioritizing values. It challenges the typical tech industry focus on efficiency and speed, advocating instead for inclusion as a primary success metric. This reframes how we measure progress in AI and DPI implementation.
This observation deepened the conversation about implementation priorities and influenced the discussion toward human-centered design principles. It reinforced Dr. Hans’s earlier points about safeguards and helped establish inclusion as a central theme that other panelists referenced in their subsequent responses.
Speaker: Robert Opp
The trend is that it is from the very supplier-centric approach through the user-centric approach. We are evolving from the supplier mindset through the user mindset. The DPI is exactly just some of the really, really important tools to ensure that user-centricity.
This comment provides a historical perspective on the evolution of computing paradigms and positions DPI as a critical enabler of user-centricity in the AI era. It offers a macro view of technological evolution that contextualizes current developments within a broader historical trajectory.
This historical framing helped elevate the discussion from tactical implementation details to strategic positioning of DPI in the broader technology landscape. It influenced the conversation toward thinking about AI and DPI as part of a larger paradigm shift toward user-centricity rather than just technical capabilities.
Speaker: Sangbu Kim
It treats AI as a shared public infrastructure. Just like DPI was a shared public infrastructure, it treats now AI as a shared public infrastructure… more than 38,000 GPUs are now available at less than rupees 60 per hour, which is less than a dollar per hour.
This comment introduces a groundbreaking concept – treating AI compute and capabilities as public infrastructure rather than private resources. The specific example of affordable GPU access demonstrates how this philosophy translates into concrete policy and implementation, potentially democratizing AI development.
This comment introduced a new paradigm that reframes AI from a private sector advantage to a public good, similar to how India approached DPI. It shifted the conversation toward discussing how governments can level the playing field for innovation and sparked discussion about the role of public policy in AI democratization.
Speaker: Saibal Chakraborty
Being small, you’re on the wrong side of the AI divide unless you’re economically in a very powerful position… but one that we are confident that we can deliver. So the future of AI, I think, will depend on the market. It’ll also depend on the people. It’ll also depend on the trust in us.
This comment honestly addresses the challenges faced by smaller nations in the AI era while identifying specific advantages they might have (precision, focus, trust). It provides a realistic assessment of geopolitical dynamics in AI development while maintaining optimism about smaller countries’ potential.
This comment brought a crucial perspective about digital sovereignty and the challenges of smaller nations, adding nuance to the discussion beyond the experiences of large countries like India. It influenced the conversation toward considering how AI benefits can be more equitably distributed globally.
Speaker: Dr. Hans Wijayasuriya
Overall Assessment

These key comments shaped the discussion by establishing several critical frameworks: AI as an accelerator rather than a replacement for DPI, the paramount importance of inclusion and safeguards at scale, the evolution toward user-centricity, and the concept of AI as public infrastructure. The comments moved the conversation from abstract possibilities to concrete implementation challenges and solutions, while maintaining focus on equity, inclusion, and responsible deployment. The discussion benefited from diverse perspectives – from government implementation (Dr. Hans), international development (Robert Opp), multilateral banking (Sangbu Kim), and private sector consulting (Saibal) – creating a comprehensive view of the AI-DPI intersection. The most impactful insight was the reframing of AI from a disruptive force to a scaling mechanism that amplifies both the benefits and risks of existing systems, making thoughtful implementation more critical than ever.

Follow-up Questions
How can governments balance providing access to valuable data for AI innovation while maintaining sovereignty and safety?
This addresses the critical policy challenge of walking the tightrope between enabling innovation through data access and protecting national interests and citizen safety.
Speaker: Saibal Chakraborty
How can smaller countries like Sri Lanka retain and develop AI talent locally rather than losing it to brain drain?
This is crucial for smaller nations to build sustainable AI capabilities and not fall on the wrong side of the AI divide.
Speaker: Dr. Hans Wijayasuriya
What specific institutional frameworks and policies need to be established at state/provincial levels to effectively implement AI initiatives in federated governance models?
Since real implementation happens at sub-national levels, understanding how to create accountable institutions with appropriate agility is essential for scaling AI benefits.
Speaker: Saibal Chakraborty
How can development banks and MDBs create demand-driven approaches to fully utilize existing digital infrastructure for AI applications?
Despite good network coverage, there’s a struggle with lack of demand, requiring new approaches to create value and drive adoption.
Speaker: Sangbu Kim
What are the 100 specific pathways for scaling responsible AI use cases that will be identified through the UNDP partnership?
This represents a concrete initiative to discover practical applications of AI for development, requiring systematic exploration and documentation.
Speaker: Robert Opp
How can AI-generated consent mechanisms be properly augmented and safeguarded when AI systems are making consent decisions?
As AI systems become more autonomous, ensuring proper consent mechanisms that maintain human agency becomes increasingly complex and critical.
Speaker: Dr. Hans Wijayasuriya
What specific mechanisms can channel venture capital funding into socially sensitive sectors like climate, education, and MSME support rather than just fintech and e-commerce?
Addressing the funding gap in critical development sectors requires innovative financial instruments and policy interventions.
Speaker: Saibal Chakraborty

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Nepal Engagement Session

Session at a glance: summary, keypoints, and speakers overview

Summary

This discussion focused on how AI and language technology are transforming rural governance in India, particularly through the Ministry of Panchayati Raj’s digital initiatives. Shri Alok Prem Nagar from the Ministry of Panchayati Raj and Shri Amit Kumar discussed the implementation of AI-powered tools that have revolutionized how India’s 2.5 lakh gram panchayats operate and engage with citizens.


The conversation highlighted two major technological breakthroughs. First, the integration of Bhashini (India’s language AI platform) with eGram Swaraj portal enabled panchayat members to access financial information and planning documents in their local languages, dramatically improving transparency and participation. Second, the launch of Sabha Saar, an AI-enabled voice-to-text meeting summarization tool, addressed a critical pain point for panchayat secretaries who previously struggled with manual documentation of gram sabha proceedings.


The speakers emphasized how these tools have achieved remarkable scale and adoption. Uttar Pradesh successfully onboarded all 59,000 gram panchayats to the eGram Swaraj system in just 40 days, while Sabha Saar has processed over 115,000 gram sabha meetings across multiple states. The discussion revealed that 65% of panchayat secretaries identified meeting documentation as their most time-consuming activity, making Sabha Saar’s automated transcription and translation capabilities particularly valuable.


Both speakers stressed the importance of building solutions that address real grassroots needs rather than imposing technology from the top down. They highlighted how AI tools are enabling participatory governance by making government processes accessible in local languages and creating systematic records that citizens can review and act upon. The conversation concluded with optimism about India’s potential to lead global efforts in population-scale, multilingual AI for governance, leveraging the country’s experience with digital public infrastructure and frugal innovation approaches.


Keypoints

Major Discussion Points:

Digital transformation of Panchayati Raj institutions through AI and language technology: The discussion centers on how the Ministry of Panchayati Raj (MOPR) has leveraged Bhashini (India’s language AI platform) to make governance platforms accessible in local languages, enabling better citizen participation in gram panchayats across India’s 2.5 lakh villages.


Sabha Saar – AI-powered meeting documentation tool: A key innovation that converts audio/video recordings of gram sabha meetings into structured minutes using voice-to-text technology, addressing the major pain point identified by 65% of panchayat secretaries who struggled with timely documentation of proceedings.


eGram Swaraj platform and its multilingual capabilities: The comprehensive digital platform that handles everything from planning to payments for panchayats, enhanced with Bhashini’s translation capabilities to make financial information and governance data accessible to citizens in their native languages.


Implementation challenges and solutions in rural AI adoption: Discussion of practical hurdles including connectivity issues, dialect diversity, and training needs, along with successful strategies like leveraging existing mobile phones rather than requiring new infrastructure investments.


Future vision for AI in grassroots governance: Exploration of next-phase developments including service delivery improvements, automated issue tracking through image recognition, and the potential for India to lead global efforts in population-scale multilingual AI governance solutions.


Overall Purpose:

The discussion aims to showcase how India’s Ministry of Panchayati Raj has successfully implemented AI and language technology solutions to democratize access to governance information and improve participatory democracy at the grassroots level. The conversation serves as both a case study of successful rural AI implementation and a blueprint for scaling similar solutions across other government departments.


Overall Tone:

The tone is consistently positive and celebratory throughout, with speakers expressing genuine enthusiasm about the achievements and impact of these AI initiatives. The conversation maintains an optimistic, forward-looking perspective while acknowledging challenges pragmatically. Both officials demonstrate pride in their accomplishments (particularly the rapid adoption in Uttar Pradesh’s 59,000 gram panchayats) while remaining grounded about the practical realities of rural implementation. The tone becomes increasingly confident toward the end as they discuss India’s potential to lead global efforts in population-scale AI governance solutions.


Speakers

Speakers from the provided list:


Shri Alok Prem Nagar: Works with the Ministry of Panchayati Raj (MOPR), involved in implementing digital governance platforms like eGram Swaraj and Sabha Saar for gram panchayats across India


Shri Amit Kumar: Associated with digital transformation initiatives in the public sector for over 20 years, works on AI implementation and public digital infrastructure


Moderator: Session moderator facilitating the discussion on AI in rural governance and panchayati raj institutions


Additional speakers:


Ms. Deepika: Mentioned at the end to felicitate Mr. Alok, specific role or title not mentioned


Full session report: comprehensive analysis and detailed insights

This comprehensive discussion explored how artificial intelligence and language technology are fundamentally transforming rural governance in India, featuring insights from Shri Alok Prem Nagar from the Ministry of Panchayati Raj (MOPR) and Shri Amit Kumar on the implementation of AI-powered tools that have revolutionised how India’s 2.5 lakh gram panchayats operate and engage with citizens.


The Digital Transformation Challenge

The conversation began with a powerful personal anecdote from Shri Alok Prem Nagar, who described attending a Gram Sabha meeting in Karnataka where, despite being a senior government official, he “didn’t understand a thing” due to language barriers. This moment of realisation became the catalyst for understanding a fundamental problem: how can citizens meaningfully participate in governance when they cannot comprehend the information being presented? As Nagar explained, “it is public money. Everybody in the panchayat needs to know what kind of plans are uploaded, how many works got done as per the plans, and how much it cost.”


This challenge was particularly acute with the eGram Swaraj portal, a comprehensive digital platform that handles everything from planning to payment stage for all 2.5 lakh gram panchayats across India. Despite its comprehensive functionality, the platform operated exclusively in English, creating a significant barrier to citizen participation and transparency.


Breakthrough Solutions: Bhashini Integration

The transformative moment came with the integration of Bhashini, India’s language AI platform, into the eGram Swaraj system. This integration was first showcased at the Manthan 2023 event, an industry consultation initiative where experts were invited to suggest improvements to government operations. As Nagar described the impact: “imagine that a person from a panchayat is looking at the expenses page for his gram panchayat or her gram panchayat. And then by a click of a button, they’re able to see it in their own language. It was magic.”


This breakthrough enabled citizens to access financial information, planning documents, and governance data in their local languages, dramatically improving transparency and participation. Citizens could now independently review panchayat expenditures, understand development plans, and engage meaningfully in local governance without requiring intermediaries to translate English documents.


Sabha Saar: Revolutionising Meeting Documentation

The second major innovation emerged from systematic user research. A survey conducted by UNICEF using Rapid Pro questioned 8,000 panchayat secretaries across the country about their time allocation. The results revealed that 65% of respondents identified meeting conduct and recording as their most time-consuming activity, creating a significant bottleneck in panchayat operations.


This insight led to the development of Sabha Saar, an AI-enabled voice-to-text meeting summarisation tool powered by Bhashini’s automatic speech recognition services. The solution addresses connectivity challenges in villages by allowing recording on one device and processing on another system. The technical workflow involves Bhashini converting local language recordings to English, AI creating structured meeting minutes, and then Bhashini translating the final document back to the local language. The system includes “human in the loop” provisions for corrections and is expanding to include 11 additional languages including Assamese, Boro, Maithili, and Santal.
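The workflow described above (record locally, transcribe and translate via Bhashini, summarise into structured minutes with an AI engine, translate back, then human review) can be pictured as a simple staged pipeline. The sketch below is a minimal illustration with stubbed stages: the function names, data shapes, and return values are assumptions for clarity, not Bhashini's or Sabha Saar's actual APIs.

```python
# Minimal sketch of a Sabha Saar-style pipeline. Each stage is a stub;
# a real deployment would call Bhashini's ASR/translation services and
# an AI summariser here (APIs not described in this session).

def transcribe_to_english(audio_path: str, source_lang: str) -> str:
    """Stage 1 (stub): Bhashini ASR + translation of the recording."""
    return f"[English transcript of {audio_path} ({source_lang})]"

def summarise_minutes(transcript: str) -> dict:
    """Stage 2 (stub): an AI engine turns the transcript into draft minutes."""
    return {"agenda_items": [], "decisions": [], "raw": transcript}

def translate_back(minutes: dict, target_lang: str) -> dict:
    """Stage 3 (stub): Bhashini translates the draft back to the local language."""
    return {**minutes, "language": target_lang}

def review(minutes: dict) -> dict:
    """Stage 4: human-in-the-loop correction before upload."""
    minutes["reviewed"] = True
    return minutes

def sabha_saar(audio_path: str, lang: str) -> dict:
    """Run the four stages in order and return the reviewed draft minutes."""
    transcript = transcribe_to_english(audio_path, lang)
    draft = summarise_minutes(transcript)
    localised = translate_back(draft, lang)
    return review(localised)

draft = sabha_saar("gram_sabha_recording.mp3", "or")
print(draft["language"], draft["reviewed"])  # prints: or True
```

Note that recording and processing are decoupled (the audio file is uploaded from any device), which is how the connectivity constraint mentioned in the session is sidestepped.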


Since its launch in August 2025, Sabha Saar has processed over 1,15,115 Gram Sabha meetings, with states like Odisha, Tamil Nadu, and Tripura advancing to second-stage implementations where they use the structured meeting minutes for systematic activity tracking and follow-up actions.


Implementation at Scale and Adoption Challenges

The discussion highlighted the extraordinary scale at which these solutions have been implemented. Uttar Pradesh, with its 59,000 gram panchayats, successfully onboarded the entire eGram Swaraj system in just 40 days. This achievement required registering digital signing certificates and transitioning completely from traditional chequebooks to digital payments across all panchayats.


As Shri Amit Kumar noted, “India’s population scale means even pilot projects exceed the performance scope of entire European countries.” This scale advantage, combined with India’s frugal innovation approach, enables solutions to be implemented at costs significantly lower than Western alternatives whilst maintaining quality standards.


The speakers acknowledged that challenges extend beyond technical issues to cultural change. Even in well-resourced corporate environments, manual note-taking persists despite available technology. The key to success was addressing infrastructure barriers by leveraging existing resources like mobile phones, eliminating procurement requirements and enabling rapid scaling. As Kumar noted, “All they need to have a mobile phone, which any which way they have, right? And the idea is just to kind of record and upload.”


Innovative Applications Beyond Core Functions

The speakers detailed several innovative applications demonstrating the broader potential of AI in rural governance. The Swamitva scheme, which conducted drone surveys over 3.3 lakh village habitations to establish property rights, generated dense point cloud information that was initially underutilised. AI analysis of this data has now enabled the identification of solar panel potential across 2.38 lakh gram panchayats, with roof-wise calculations available through the Gram Manchitra platform.


This solar potential mapping has been integrated with the PM Suryaghar Yojana portal, enabling gram panchayats to drive solar adoption campaigns effectively. Citizens can now zoom into their village, click on solar potential icons, and receive detailed information about how many solar panels can be fitted on specific rooftops.


Additional integrations include daily weather forecasts from the meteorological department delivered to every gram panchayat in local languages, and the Pancham WhatsApp-based chatbot platform for two-way communication with sarpanchas and panchayat secretaries.


Future Applications and Pilot Projects

The conversation explored ambitious plans for expanding AI applications in rural governance. A pilot project in Guwahati uses buses equipped with cameras to capture and automatically categorise infrastructure issues like potholes and drain overflow. This system integrates with the Meri Panchayat mobile interface for image-based issue reporting, automatically analysing images, assigning appropriate issue labels, and routing them to responsible departments with built-in escalation mechanisms.
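The flow described for the pilot (classify an image of an issue, attach a label, route it to the responsible department, and escalate when no owner is found) can be sketched as below. The labels, department names, and function signatures are illustrative assumptions, not details from the Guwahati pilot or the Meri Panchayat interface.

```python
# Hypothetical sketch of image-based issue routing with escalation.
# The classifier is stubbed; a real system would run a vision model.

ROUTING = {
    "pothole": "Public Works",
    "drain_overflow": "Water & Sanitation",
    "streetlight_fault": "Electricity",
}

def classify_image(image_path: str) -> str:
    """Stand-in for the image model; returns an issue label."""
    return "pothole"

def route_issue(image_path: str) -> dict:
    """Label the image and map the label to a department, escalating
    to a fallback cell when the label has no assigned owner."""
    label = classify_image(image_path)
    dept = ROUTING.get(label)
    if dept is None:
        return {"label": label, "department": "Escalation Cell", "escalated": True}
    return {"label": label, "department": dept, "escalated": False}

ticket = route_issue("road_photo.jpg")
print(ticket)  # prints: {'label': 'pothole', 'department': 'Public Works', 'escalated': False}
```

The routing table makes the accountability loop explicit: every label either has a named owner or lands in a visible escalation queue rather than being silently dropped.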


Spatial development plans and visualisation tools have been implemented for 34 gram panchayats near highways, with Andhra Pradesh adopting spatial planning statewide. The next frontier involves comprehensive service delivery systems where citizens can vocalise their demands in local languages, track application status, and receive automated responses.


The speakers also discussed AI-powered agenda generation that could automatically populate meeting agendas based on previous commitments and follow-up requirements, creating systematic accountability loops.


Implementation Philosophy and Success Factors

Both speakers emphasised the importance of user-centred design and addressing real needs rather than imposing technology from above. As Nagar explained, “if you are ready with a product that addresses their needs and it is friendly, it meets both needs. Of course, my need was that I needed the money well accounted for, and their need was a system that could make it very easy for them to do it. So we met halfway.”


Key success factors identified include: starting with clearly defined problems rather than technology-first approaches; ensuring solutions address mutual needs of both government and citizens; leveraging existing infrastructure to minimise adoption barriers; conducting thorough user research to identify real pain points; and maintaining focus on simplicity and accessibility rather than technical sophistication.


As Nagar noted, AI should be treated as a “good servant, bad master,” emphasising that technology should enable rather than replace human decision-making.


Cross-Departmental Adoption and Scaling

The MOPR experience offers valuable lessons for other ministries implementing AI solutions. The Department of Drinking Water and Sanitation has already approached MOPR to use Bhashini for Village Water Committee meetings, demonstrating the potential for cross-departmental adoption of these language AI tools.


Democratic Impact and Structural Changes

The implementation of these AI tools has created measurable structural changes in panchayat functioning. States adopting Sabha Saar are developing second-generation applications that use structured meeting minutes for activity tracking, creating systematic records that enable citizens to monitor follow-up actions and hold officials accountable.


The availability of meeting minutes in local languages has enabled diaspora populations working in cities like Mumbai and Pune to monitor their home panchayats remotely, increasing engagement and accountability. Citizens can now drill into their gram panchayat’s records to examine Finance Commission grants, implementation status, bill preparation, payment completion, and asset locations with geographic tagging.


Global Leadership and Technological Sovereignty

The discussion concluded with optimistic assessments of India’s potential to lead global efforts in population-scale, multilingual AI governance. Drawing on successful experiences with digital public infrastructure including Aadhaar, UPI, FASTag, and GST, both speakers expressed confidence in India’s ability to implement AI solutions at unprecedented scale.


Shri Amit Kumar emphasised the importance of technological sovereignty, noting that “in spite of any kind of geopolitical risk, we should survive. Our system should run.” This requires open architecture, interoperability standards, and the ability to shift between different technologies whilst maintaining data residency within India.


Conclusion

This fireside chat demonstrated how AI can serve as a democratising force when designed with inclusion and accessibility at its core. By addressing language barriers and documentation challenges through problem-first rather than technology-first approaches, these initiatives have transformed passive governance systems into participatory platforms where citizens can meaningfully engage with local democracy.


The success of MOPR’s AI journey suggests that rural areas can adopt advanced technology more rapidly than urban environments when solutions address genuine needs without creating additional barriers. The experience positions India to lead global efforts in developing population-scale, multilingual AI governance solutions that could serve as models for other developing nations seeking to leverage technology for democratic participation and transparent governance.


The conversation ultimately reinforced that AI’s greatest value in governance lies not in replacing human decision-making but in enabling more informed, inclusive, and accountable democratic processes that strengthen the foundation of India’s democratic institutions whilst providing practical benefits that improve the daily lives of citizens across the country’s vast rural landscape.


Session transcript: complete transcript of the session
Shri Alok Prem Nagar

All panchayats, all two and a half lakh of them, they are present on eGram Swaraj. Right from planning to the payment stage, everything is done on a portal which is called eGram Swaraj. This portal works in the English language. So I’ll tell you, in 2019, when we were starting something called the People’s Plan Campaign, I happened to attend a Gram Sabha in the state of Karnataka. I was there for something like 45 minutes, and I was felicitated and sat on stage. And I didn’t understand a thing. And then it struck me, you know, I had this thought: how do you expect these people really to relate to what is happening? Because it is public money.

Everybody in the panchayat needs to know what kind of plans are uploaded, how many works got done as per the plans, and how much it cost them to do it. And subsequently, they can raise issues in the meetings pertaining to the works close to their residences. And along came Bhashini. I think we had, in the year 2023, an event called Manthan, where we invited a lot of people from the industry to tell us how we could conduct our business better. And so Bhashini was a revelation. Imagine that a person from a panchayat is looking at the expenses page for his or her gram panchayat, and then, by a click of a button, they’re able to see it in their own language.

It was magic. And that was the starting point. Yeah. And subsequently, of course. We went from there and. We found out through a survey that what really hurts a panchayat secretary is not to be able to produce the minutes of meeting in time, which are very important, which are the only record of a panchayat’s proceedings. And then, again, using Bhashini and another tool, we were able to create Sabha Sar, in which if you input the video slash audio recording of your meeting, you are able to get a minuted draft, which you can then edit and upload. So that was miracle number two. And briefly, if I could also address Swamitva, the scheme that you mentioned.

Swamitva is a scheme where drone surveys are carried out over all the village habitations. So there are these pictures that are subsequently converted to orthorectified images, and they lead to property rights for the people living inside those villages. But the way the images have been captured, there is dense point cloud information, all of which was getting wasted. Why? Because we were confining our attention only to the orthorectified images. So we had the AI guys look at that, and then they converted all those rooftops that they could see into solarization potential. As a result of which now, out of the 3.3 lakh gram panchayats where drone surveys have been carried out, in 2.38 lakh gram panchayats you can go to Gram Manchitra, zoom into your village, and then click the icon corresponding to the solar potential, and it will tell you roof-wise how many panels you can fit there.

We’ve gone further, and we’ve integrated that with the PM Suryaghar Yojana portal. As a result of which, the Gram Panchayat can drive it like a campaign and lead to greater rewards for everybody all around.

Moderator

Actually, it reaches the last mile citizen when you talk about those benefits. So India’s last mile operates in local languages and dialects, as you mentioned, solving that problem. So in your view, how critical is language AI in ensuring that digital governance platforms are inclusive and participatory and increases citizen trust and participation in Gram Sabhas?

Shri Alok Prem Nagar

Like I said, people are now able to follow something that was written in the English language. They could still see it before, of course, but then they’d have to go to the person they knew to be very smart in the village and have this person read it out to them. Now they can see it at their leisure. Not just people here, but people working outside, in Mumbai or near Pune, can see what is happening in their panchayats and immediately get active about it. And the minuting tool that I mentioned opens a whole new set of avenues: now you can have a record, then against that you can have action-taken reports, and then you could have follow-up in the next meeting. It makes it all amenable to very systematic representation on portals, and that is what some of the states have already started doing. And it is truly remarkable that anybody can go in there, and when I say anybody, I don’t mean just the panchayat secretaries. Anybody in a village can drill into their gram panchayat’s records and see, corresponding to the Finance Commission grants for any year, what the plan was, how much has been executed against it, how many bills were prepared against each activity, and what the status of the payment is: whether it has been completed, where the asset exists, the geotags. And then you can zoom in and maybe see it on Gram Manchitra.

So there are great rewards for everybody all around and we need to of course now intensify it through a capacity building training program. That is something we started doing from the previous year, but it has been an incredible journey. And it is being adopted all over, yeah.

Moderator

So Alokji, let’s talk a bit about the Sabha Saar impact. Let’s let our audience know about it. With its launch on 14th August 2025, MOPR introduced an AI-enabled voice-to-text meeting summarization tool powered by Bhashini ASR services. So as of 4th February 2026, over 1,15,115 Gram Sabha meetings have been processed. This is a good number; I need a round of applause. So what structural changes have you observed in panchayat functioning after Sabha Saar?

Shri Alok Prem Nagar

Sabha Saar was one thing that we carried out for the convenience of the panchayats and the panchayat secretaries, as opposed to eGram Swaraj, which was our selfish motive: we wanted panchayats to plan there and show all their vouchers there, so that we could tell that this is how the money has been spent. But Sabha Saar actually came through as part of a survey that was carried out using Rapid Pro by UNICEF. We asked something like 8,000 panchayat secretaries all over the country: how do you spend your time? How much of it is spent in inspections, attending programs and meetings, and making records? So one thing that came through, from 65 percent of the respondents, was the conduct and recording of meetings.

That was the activity that was sitting, you know, very heavy on their entire time availability. And so, having realized this and having the help of Bhashini, we converted it into a tool. So in Bhashini, it’s very simple. There is no big standard operating procedure, as it were. If you’re holding a meeting, there has to be a recording device, and it could well be your mobile phone. Through audio or video recording, you can just place it each time somebody speaks, and later on you input this into the Sabha Saar tool. The Sabha Saar tool is not part of the device on which you carried out your recording, so the issue related to connectivity in villages is something that we have been able to sidestep. And once you do that, it gives you a draft minute of meeting: Bhashini turns it into English, the English text is turned into minutes using the AI engine, and again Bhashini gives it back to them in their own language. And yes, voila, the person can just make a few changes and upload it. We’ve had some heartfelt gratitude coming to us from villages as a result of this.

Moderator

ok so has the structured documentation improved transparency participation tracking or monitoring of meeting frequency and agenda quality too

Shri Alok Prem Nagar

Now that the minute is ready, if there are 5 items, 10 items, okay. So the states that have really gone ahead and adopted it, which is Odisha, which is Tamil Nadu, which is Tripura, all these people are into the second stages now, where they are looking at the minutes of meeting and converting, or refining, them into tools that help them keep track of the activities after they’ve been created. We also asked ourselves through our meetings: why is the number just 1,15,000? So there are a whole lot of people whose languages do not exist on Bhashini. So from there, we asked those states to provide Bhashini with the necessary expertise so that they can train their bots.

And they’re already working on something like 11 more languages, which includes Assamese and Boro and Maithili and Santal and whatnot. Yes. So those languages are also coming in. So it’s been a very gratifying experience, and the learning continues.

Moderator

Yeah, it’s commendable that things have reached that level. So over to you, Amitji. From an accountability lens, does structured documentation change behavior within governance systems?

Shri Amit Kumar

Thank you. So I think, you know, if you have understood the enormity of the situation, right, what we are talking about: 250,000-plus gram panchayats and different kinds of languages. So just to circle back, if you look at the frugality of the situation: in India, generally people talk about either we live in a bullock-cart stage, right, or we are aspiring for the bullet train. So the point is, if AI has to tell us how we learn in the future, how we will transform, then we cannot leave out the 900-plus million people who are living in villages, right? Absolutely. So the idea is not to make it a very, very urbanized, you know, very, very kind of elitist idea, that, you know, that.

AI is not only for urban areas, not only for industries, not only for the commercial sector. Obviously, this is a journey; you have to start somewhere. Take the frugality I was talking about: we did not ask the gram panchayats to invest anything. All they need is a mobile phone, which they have anyway, and the idea is just to record and upload. Obviously there will be some challenges, and some resistance in the beginning. But once they get used to it — today we are asking them only to upload the recording, and the rest is done by the system.

And the system also has a provision for a human in the loop, so that we can go in and correct it. Now, tomorrow, the next step we can perhaps take is this: when the next meeting happens, we can also populate the agenda from the last meeting — what was discussed last time, what was committed, whether it is being done or not — and then everything goes into the public domain. People who live in cities know that when there is an RWA meeting, nobody goes and attends, but they all fight it out in the WhatsApp group. In the village also it is not easy to bring people together. But once they get the hang of it — there is a meeting, I am getting the minutes, and they are available in the public domain — we are using AI, and AI is for good. AI can also be leveraged for the rural sector; why does it have to be very, very elitist, only for a select few? This is just the beginning; it is a journey. And from an idea point of view, it is a phenomenal idea from the Ministry of Panchayati Raj — let me congratulate sir and the entire team for thinking of something like that — because AI is all about the idea and the use case. If you have the right idea, you can do wonders, but you have to have both the idea and the muscle to execute it. So I believe this whole documentation effort will do wonders for them.
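The flow described here — upload a recording, let the system draft the minutes, correct them with a human in the loop, then carry last meeting's commitments into the next agenda — can be sketched roughly as below. This is an illustrative outline only: the stub functions `transcribe` and `summarize`, and the data shapes, are hypothetical stand-ins, not the actual Sabha Saar or Bhashini APIs.

```python
from dataclasses import dataclass

def transcribe(recording_path: str) -> str:
    # Stand-in for the speech-to-text step (Bhashini-backed in the described system).
    return f"transcript of {recording_path}"

def summarize(transcript: str) -> list:
    # Stand-in for the AI summarization step that drafts minute items.
    return [f"item drawn from: {transcript}"]

@dataclass
class Minutes:
    items: list            # decisions/commitments recorded in the meeting
    approved: bool = False # becomes True only after human review

def draft_minutes(recording_path: str) -> Minutes:
    """Upload -> transcribe -> summarize into draft minutes."""
    return Minutes(items=summarize(transcribe(recording_path)))

def human_review(draft: Minutes, corrections: dict) -> Minutes:
    """Human in the loop: officials correct specific items, then approve."""
    for idx, fixed in corrections.items():
        draft.items[idx] = fixed
    draft.approved = True
    return draft

def next_agenda(previous: Minutes) -> list:
    """Pre-populate the next meeting's agenda from approved commitments."""
    if not previous.approved:
        raise ValueError("only approved minutes feed the next agenda")
    return ["Follow-up: " + item for item in previous.items]
```

The key design point from the conversation is that approval gates the pipeline: nothing flows into the public domain or the next agenda until a human has signed off on the AI draft.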

Gram panchayats will also realize something that has been missing in most parts of the world: record keeping, accountability, transparency and so forth. Generally, these decisions were taken by some people and executed by some, and the larger population was largely kept out of it, knowingly or unknowingly. So, as I said, it will change the way they work and the way they think. We are starting only with, let's say, the meeting, but now they will start thinking — and there will be demand from the states and elsewhere — about what more can be done with AI.

So broader scale will be achieved. Yes — Sabha Saar is one example, Praman is another we are doing, and we have launched this Pancham bot for all elected and selected representatives. So it is a great experience, and efficiency will obviously help them adopt it. Let me tell you, in our own corporate meetings some of us are still making notes — despite being on Teams, despite using Copilot, despite having all the tools at our disposal — we still expect a junior person to take notes and circle back. So there is a cultural change you have to see through as well. And these changes could not have been possible without infrastructure like Bhashini. How did the ministry benefit? We have infrastructure like Bhashini; we have GPUs made available to us through the IndiaAI Mission — otherwise procurement itself could have been a big challenge; and we have a team to build applications. It takes a village to move something, and that is what has happened here.

Shri Alok Prem Nagar

Thank you for sharing your thoughts. Just continuing with that: the Department of Drinking Water and Sanitation has actually approached us about the meetings of their VWCs, the village water committees. They want to use Bhashini for that, and there has been some initial interaction between the two teams.

Moderator

That's commendable, I would say. That's awesome. So Alokji, let's talk about some implementation challenges for AI in rural India. AI in rural governance is transformative, but complex. What are the biggest operational challenges — infrastructure, training, dialect diversity, connectivity? I think Amitji was about to share some of that. And how receptive are panchayat functionaries and rural citizens to AI-enabled systems?

Shri Alok Prem Nagar

Challenges, of course, there are many, as anybody would tell you. What we have found is in the adoption of eGram Swaraj by our gram panchayats. A case in point: Uttar Pradesh has something like 59,000 gram panchayats, and for Uttar Pradesh to onboard eGram Swaraj seemed like an impossible task, because it involved registering digital signing certificates and everybody agreeing to completely dispense with chequebooks — all payments were then going to be digital. Can you imagine, Uttar Pradesh did it in 40 days flat. All 59,000 gram panchayats. So my point is that if you are ready with a product that addresses their needs and is friendly — my need, of course, was that I needed the money well accounted for, and their need —

It was a system that could make it very easy for them to do it. So we met halfway, and if UP can do it with 59,000, I am not prepared to hear an excuse from any other state in the country. It was a trial by fire. Likewise for Sabha Saar: as I said initially, there was a demand indicated from the states, and when we set out to meet it, we were clear about what we were looking for, and people were so forthcoming. In fact, Bhashini also enabled me to write letters to the states in their own languages, and people were gushing with affection. I got a letter in Telugu for the first time, and all that.

So there are challenges, but then the gram panchayats are predisposed to meet you halfway. You need to begin that journey, and we have seen that with regard to a number of things. There have been campaigns: every year they carry out a campaign from 2nd October to 31st December, which typically extends into January, where all two and a half lakh gram panchayats prepare their gram panchayat development plans and upload them on the portal. So 2.5 lakh — 250,000 — gram panchayats, all of them planning for the next year, so that before you enter the next financial year, their plans are ready. I mean, we don't do that in the departments, in the ministries. And all these gram panchayats have not done it just once or twice.

They started in 2018 and have continued to do it ever since. In the COVID year, there was a request that we not do the campaign, and there was massive pushback from the states: no, we want to do it. The momentum was so great that they still did it. So there are challenges. But if we make an application like he was saying — this is a simple recording device, this is a mobile phone, there is nothing you need to procure to set it up — if you make a simple tool, people will grab it with both hands. That, I think, is the embracing of challenges that we are seeing in the response to Bhashini.

Moderator

So for ministries delivering last-mile services, such as the Ministry of Rural Development and the Ministry of Agriculture and Farmers' Welfare, what lessons from MoPR's AI journey would you share? How important are open architecture and interoperability, in your sense?

Shri Alok Prem Nagar

That is dangerous territory. I am not in a position to start advising anybody, because they have pretty robust systems of their own. Look at MGNREGA Soft and the PM Awas Yojana — they are running schemes which are very pointed. Awas Yojana is just about houses. MGNREGA is, of course, as large as the things you do in the Finance Commission; it is a very big scheme, but it is fairly well organized. And in all of these, typically the beneficiary is the individual. In the Panchayati Raj mode there are individuals at the end of it, but our emphasis is on the institution, the panchayat — and not just eGram Swaraj and the things we do for their accounting and planning.

We have also hooked up with the meteorological department, and daily forecasts are being generated for every gram panchayat. People are able to see this on their phones, with the same ease with which they are able to see everything else using Bhashini. So it is a great enablement all around, and it can only get better.

Moderator

Absolutely. So Amitji, over to you: how critical is open architecture in ensuring long-term sustainability and avoiding vendor lock-in?

Shri Amit Kumar

If I can take a minute and talk about the previous question — (Please go ahead.) — sir rightly mentioned that different ministries have different mandates; it is not an apples-to-apples comparison. But you also have to see that the main role of Panchayati Raj, as I understand it, is mobilization, because they are not running major schemes on their own compared to others. And best practices do not have to be in the form of technology or architecture only. The idea is that if you go down from the top, there are two different ministries; but if you go to the village, you will see the same infrastructure and the same set of people working for both departments. So if one can do it, others can also do it. There is a lot of learning in terms of method: how we overcame obstacles, how we mobilized, how we implemented some of these solutions.

And I'm sure RD and Agriculture are also doing a lot of things — their mandate is much bigger — but they can also take pride in, and learn from, the success we have had. What was the second question? How critical is open architecture in ensuring long-term sustainability and avoiding vendor lock-in? So you must be hearing this word "sovereignty" quite a lot nowadays. The whole idea of being sovereign in any part of technology — be it defence, be it IT, be it anything — is survivability: despite any kind of geopolitical risk, we should survive.

Yes — our systems should keep running. Now, people generally confuse sovereignty with making everything local to India, but that is not the case. We will always have some technology from outside. But we have to design in a way that it is ready to shift: from a technology point of view, we have interoperability, the standards we have chosen, the models we have chosen, infrastructure we can move around, and teams that stay in control. Data residency has to be within India, and the data is with us — so if we have trained on one model, we can train on something else also. The idea is to look a little long-term.

See, what happened was that when we started, obviously there were a lot of POCs. Nobody knew how AI would behave — we still don't know. So obviously you have to start somewhere, and then you have to ensure that when we start with one use case, the next becomes easy. When the department itself becomes fully AI-enabled and we have 10 AI use cases running, then it becomes a problem — a problem of management. That is where I think we need to plan better for the future, rather than: a use case is defined, then we find an easy method of procuring the infra or the model we already knew.

So going forward, I think there will be a platform approach, where we plan for the future as well: these kinds of AI use cases are likely to keep coming — different kinds of AI, be it agentic, be it gen-AI, be it conversational, be it computer-vision analytics — and accordingly we have to have an open architecture, the way we did in normal digital transformation. Even in digital transformation, there used to be a time when we created our own independent monolithic applications. Now we create applications that are API-based, can integrate with anybody, are future-ready, and can scale and stay modular. The same concepts have to be used for AI initiatives as well.

Moderator

Well said. So I think adoption comes with responsibility, and that is what you are scaling toward, looking at the future. So Alokji, Sabha Saar demonstrates how language AI can power grassroots governance. After Sabha Saar's success, what deeper integrations do you envision with Bhashini, and what does the next phase of collaboration look like? Let's talk about that.

Shri Amit Kumar

And people are going to be speaking in any number of languages. I think the next step — our government has always been very invested in providing services, in making ease of living easier, as it were, and in providing all manner of things. Everything is finally a service: you need to see a doctor, you need your road fixed, you need a street light working, you want the waterlogging drained. Okay. Over to you.

Shri Alok Prem Nagar

So people should come to expect — they should demand — these services from their gram panchayats. There are mechanisms for doing that, because gram panchayats do not have a lot of resources in terms of manpower, in terms of people at their beck and call to carry out the activities flowing from their charter. So there are systems in a lot of these villages: you have Common Service Centres, and some states have their own systems, like UP, like Bapuji Seva Kendra in Karnataka, like Mee Seva. We need to take that further, and we need people to be able to talk and find out whether a certain service available to them can be availed in their village.

If they are to do that, what is the mechanism? And if they have already made an application, the system should be able to tell them where that application currently stands. So it is a very wide area, as I said; there are a number of services. We also learnt of a pilot carried out in Guwahati, where a bus carried a camera: it would drive through, capture any number of images, and on that basis assign issue labels to them, as it were. If there is a drain overflowing, it takes note of that; if there is a pothole, it takes note of that; and then it assigns the issue to the agencies whose job it now becomes to fix it. We have a mobile interface called Meri Panchayat, which ports a lot of information from eGram Swaraj. Meri Panchayat also has the capability of capturing images of an issue being reported. I think the next step is that it makes sense of the image and assigns it to the necessary department.

There are people who are mapped, whose job it is to carry it out, and if it does not happen within a certain amount of time, there is escalation. We need to go deeper into that system. That, I think, is the next frontier. And of course, because it involves the vocalization of your demands, Bhashini is absolutely critical to this. So when we say there is a long way to go — I think that phrase is no longer relevant. It is a short way now; not even a big journey, but an intelligent journey to move ahead.
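The grievance flow described in this exchange — an image-based report classified, assigned to a mapped official, and escalated if unresolved within a time limit — could look something like the outline below. All names here (the `classify_issue` stub, the department mapping, the 7-day deadline) are hypothetical illustrations, not the Meri Panchayat implementation.

```python
from datetime import datetime, timedelta

# Hypothetical mapping from detected issue type to the responsible department.
DEPARTMENT_MAP = {
    "pothole": "Public Works",
    "drain_overflow": "Sanitation",
    "street_light": "Electricity",
}

def classify_issue(image_label: str) -> str:
    # Stand-in for the computer-vision step that "makes sense of the image".
    return image_label if image_label in DEPARTMENT_MAP else "unknown"

def route_issue(image_label: str, reported_at: datetime) -> dict:
    """Assign the issue to its mapped department with a resolution deadline."""
    issue_type = classify_issue(image_label)
    return {
        "type": issue_type,
        "department": DEPARTMENT_MAP.get(issue_type, "Panchayat Office"),
        "deadline": reported_at + timedelta(days=7),  # assumed SLA, not a real figure
        "escalated": False,
    }

def escalate_if_overdue(ticket: dict, now: datetime) -> dict:
    """If the deadline passes without resolution, mark the ticket escalated."""
    if now > ticket["deadline"]:
        ticket["escalated"] = True
    return ticket
```

The escalation check is the piece the speaker highlights: routing alone is not enough unless an unresolved ticket automatically moves up after its deadline.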

Moderator

So India is building public digital infrastructure for AI at scale. How do we balance scale with accountability and public trust? We have talked much about how we are building things, but let's talk about the other side. And can India lead the world in population-scale, multilingual AI for governance? Of course it can — I am sure about that. Amitji, if you would like to have a shot at it first?

Shri Amit Kumar

So one thing you all have to realize is that whatever we do is at population scale, and unparalleled, because of our size; even our POCs exceed the scale of what European countries run. Sir talked about UP — 60,000 panchayats; UP alone would probably be a top-10 country in terms of population and size. I think the world is vouching for us when it comes to use cases. See, we have that scale, and now we have the experience behind us: we did Aadhaar, we did UPI, we did FASTag, we did GST, we did income tax. So now we have the confidence that we can do anything at scale, and with the same frugal approach — 10 times cheaper than the Western world, and certainly not worse; better only.

And over the last decade we have evolved. Take the concept of privacy, like the DPDP Act, or consent-based usage, which Aadhaar brought — a lot of things have improved on the policy side. And once you have policies in place, systems are easier, because the system itself acts as the rule; you don't need so much human intervention or discretion. Take the very simple case of Bhashini. I remember, four or five years back, Amitabh and I used to debate whether we even needed a Bhashini, because we had services like Google Translate. In hindsight, that was the right call: in future we have to have sovereignty. We don't have to be dependent; we need to be frugal, and we don't want to use applications that are very expensive from a taxpayer-money point of view.

And we have done a lot of similar things. If you roam around the AI Summit, you will see how many LLMs and SLMs we are building on our own. The honourable minister talked about five layers of applications. I think we have ample talent to build applications; we use open LLMs, but we are developing our own, and Bhashini is one of the common infrastructures. Energy will be taken care of; on infra and chips we will anyway have a dependency, but the rest of the world has dependencies too — not everybody has rare earths, and not everybody is building chips. And because we have the technical know-how — it is our bread and butter nowadays — we will take the learnings from all these systems and move forward. We were a bit slow in the last year or two, because AI itself was new for everyone, so we took some time. But from this year onwards we will really scale it up, because we have tested the waters, we have seen the success, and we will scale it up.

Moderator

Sure, thank you for sharing that. So as we come towards the close of this conversation, I would like to leave you with one final thought: if Panchayati Raj institutions are the foundation of democracy, can AI — when built on a public stack and powered by language inclusion — become the strongest enabler of participatory governance in the 21st century? Just closing thoughts from you both. Alokji, would you?

Shri Alok Prem Nagar

Absolutely. He was just telling you that we have been able to do things at scale. This thing about UP that I told you — I wear it like a badge, to have done it someplace. And it is not an easy ask, because there are so many stakeholders with various issues of their own; you have to engage with them and address those things. If my problem is well defined, and if I know what kind of thing is going to help me redress it — like Bhashini did for us — then I think what you said is going to come true. Being able to understand my problem, and knowing what parts of it can be fixed in what manner using the various tools available — that is the key. And it is not an oversimplification: good servant, bad master. That is something that stays. It is not going to land you in the right places if you just let it run around like an animal.

But if you know where to put it, what modules are to be inserted, and what is being used in the background, that makes you more confident. I'm not really an AI person, so I'm just speaking on the strength of what I've learned, and the experience thus far has been outstanding, partly because we've had a very good partner. But other than that, I am not throwing everything open to AI. I don't wear T-shirts saying "I love AI" or anything; I have a problem and it needs fixing, and I need to know what aspects of AI can help me fix it in the best possible manner. That's my take on this.

Shri Amit Kumar

Yeah. So, like sir said, sir is not an AI person; neither am I. He was transparent enough to share that, and look at it that way — none of us were. I've been doing digital transformation for the public sector for over 20 years; obviously there was no AI then — there was no DPI or DPG either; we retrofitted the names later. But if you look at it, the idea of Panchayati Raj itself is participative governance: people have to assemble in the Gram Sabha and decide how to spend and prioritize the money they are getting.

Absolutely. And if AI tools like Praman, Sabha Saar and Pancham can help strengthen that, what better can you expect from a participative government, from a democratization point of view? So sometimes technology becomes secondary. In my view, most of the time the ideas have to be clear: what you want to achieve, what problem you want to solve, at what scale, and what guardrails you have to put in place. So, for example, when we do AI, it cannot be 100% autonomous — of course — and it cannot be 100% human-in-the-loop either.

Because if each and every transaction is approved by a human in the loop, it defeats the purpose of AI — then there is no AI; we are still living with rule-based algorithms. So the idea is that with AI we also train, monitor, have a mechanism to take complaints, and have a mechanism to train it better so that we improve accuracy. That is how the AI journey goes, and it is slightly different from previous digital transformation journeys, which were more like transactional systems. If you look at Sabha Saar currently — from whatever I am hearing from people, and from the field teams as well — it is giving great accuracy in terms of translation and summarization.

And I'm sure that whatever small areas there are to improve, it will improve. We cannot stop it now; once we have boarded a flight, we can only get down where we have to. So I think the future is bright. And from the MoPR experience point of view, I am sure it will also energize and motivate many others. I can say from my experience: if MoPR can use AI tools in rural India, there is no stopping us as a nation.

Moderator

Exactly. This is truly an achievement when it comes to MoPR within the government. Do you want to say anything more on this, Alokji?

Shri Alok Prem Nagar

I thought of another application that works — something we have been working on: spatial development plans. We again engaged with a lot of panchayats close to the highways. Typically, if a panchayat is on a national highway, close to a big city, and has a population of 10,000-plus, it was eligible to participate in this programme. There were 34 GPs we involved, and we got the planning and architecture colleges to prepare spatial plans for them. A spatial plan would be futuristic: it would zone, it would look into the future and see how the place was going to grow, it would devise road networks, and it would tell people what the place would become over a period of time. We had a conference with gram panchayats around Bhopal, and the people were so annoyed: we don't need a spatial plan.

Over a period of time, of course, we told them what it was going to be, but we had this epiphany: people need to be able to see what the spatial plan will help them become. So at the next national conference we had a visualization for each of these 34 spatial development plans, and we showed people: if you want to become this, you have to do this. And then there was greater enthusiasm. So the people on whom this plan rests — who are going to be subjected to this plan, if I could use those words — if they are not on board, there is no way you can carry it out. And that, I think, is wide open.

And after that, the entire state of Andhra Pradesh has gone ahead and said that all their planning is going to be spatial plans. So that is something amenable to AI tools. And a final thing I remembered: lots of times we need to communicate through audio-video messages. He mentioned Pancham. Pancham is a WhatsApp-based chatbot platform which allows us to have a two-way conversation with all the sarpanches and panchayat secretaries in the country. So if there is messaging to be conveyed, if there are videos that need to be quickly created using AI tools, that would be hugely effective in getting the message across in the quickest possible way.

Moderator

Thank you. Thank you so much for such in-depth insights on the gram panchayats and how things work behind the scenes. I'm sure much of the audience was unaware of what is happening on the ground, and this conversation has given a new tangent to how we look at rural development. Thank you so much, Shri Alok, and thank you so much, Shri Amit, for sharing these thoughts on gram panchayat development. Thank you for this fireside chat. I would now like to call Ms. Deepika to please felicitate Mr. Alok.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Shri Alok Prem Nagar
14 arguments · 144 words per minute · 3297 words · 1372 seconds
Argument 1
eGram Swaraj portal digitizes all panchayat operations from planning to payments, with 2.5 lakh panchayats onboarded
EXPLANATION
The eGram Swaraj portal serves as a comprehensive digital platform that handles all aspects of panchayat operations, from initial planning stages through to final payment processing. This system has successfully onboarded all 250,000 panchayats across India, creating a unified digital infrastructure for rural governance.
EVIDENCE
All panchayats, all two and a half lakh of them, are present on eGram Swaraj. The portal works in English language and handles everything from planning to payment stage.
MAJOR DISCUSSION POINT
Digital transformation of rural governance infrastructure
Argument 2
Language barriers in English-only systems prevent rural citizens from understanding public money usage and participating effectively
EXPLANATION
The exclusive use of English in digital governance platforms creates a significant barrier for rural citizens who need to understand how public funds are being utilized in their communities. This language barrier prevents meaningful participation in local governance processes and reduces transparency and accountability.
EVIDENCE
Attended a Gram Sabha in Karnataka for 45 minutes, was felicitated and sat on stage, but didn’t understand anything. Realized how people are expected to relate to what is happening with public money when they can’t understand the language.
MAJOR DISCUSSION POINT
Language accessibility in digital governance
AGREED WITH
Moderator
Argument 3
Uttar Pradesh successfully onboarded 59,000 gram panchayats to eGram Swaraj in just 40 days, demonstrating scalability
EXPLANATION
The rapid deployment of eGram Swaraj across all gram panchayats in Uttar Pradesh within 40 days demonstrates that large-scale digital transformation is achievable in rural India. This success involved complete digitization including digital signing certificates and elimination of traditional checkbooks for all payments.
EVIDENCE
Uttar Pradesh has 59,000 Gram Panchayats and onboarded eGram Swaraj in 40 days flat, involving registering digital signing certificates and completely dispensing with checkbooks for all payments.
MAJOR DISCUSSION POINT
Scalability of digital governance solutions
Argument 4
Bhashini integration enables citizens to view panchayat information in their local languages with a single click, creating “magic” for users
EXPLANATION
The integration of Bhashini language AI technology transforms the user experience by allowing citizens to instantly translate panchayat information from English to their native languages. This breakthrough eliminates the previous need to seek help from educated village members to understand official documents and proceedings.
EVIDENCE
Person from a panchayat looking at expenses page can click a button and see it in their own language – ‘It was magic.’ Previously people had to go to smart village members to read English content to them.
MAJOR DISCUSSION POINT
AI-powered language solutions for rural inclusion
Argument 5
Language AI is critical for inclusive digital governance platforms that increase citizen trust and participation in Gram Sabhas
EXPLANATION
Language AI technology is essential for making digital governance truly inclusive by enabling citizens to access information in their preferred languages at their convenience. This accessibility extends participation beyond local boundaries, allowing diaspora communities to stay engaged with their home panchayats.
EVIDENCE
People can now see information at their leisure in their own language, not just local people but those working in Mumbai can see what’s happening in their panchayats near Pune and get active about it.
MAJOR DISCUSSION POINT
Language AI as enabler of participatory governance
Argument 6
Survey of 8,000 panchayat secretaries revealed that meeting conduct and recording consumed 65% of their time, creating a major bottleneck
EXPLANATION
A comprehensive survey conducted using Rapid Pro by UNICEF across 8,000 panchayat secretaries nationwide identified meeting documentation as the most time-consuming activity. This finding highlighted a critical operational challenge that was preventing efficient governance at the grassroots level.
EVIDENCE
Survey using Rapid Pro by UNICEF asked 8,000 panchayat secretaries how they spend their time. 65% of respondents identified conduct and recording of meetings as the activity sitting very heavy on their time availability.
MAJOR DISCUSSION POINT
Operational challenges in rural governance
Argument 7
Sabha Saar converts audio/video recordings into draft meeting minutes using Bhashini, requiring only mobile phones and addressing connectivity issues
EXPLANATION
Sabha Saar provides a simple solution for meeting documentation by converting audio or video recordings into draft minutes through AI processing. The system is designed to work with basic mobile phones and operates offline during recording, with processing happening later when connectivity is available.
EVIDENCE
Recording can be done with mobile phone through audio or video. Sabha Saar tool is not part of the recording device, sidestepping connectivity issues in villages. Bhashini converts to English, AI engine processes it, then Bhashini converts back to local language.
MAJOR DISCUSSION POINT
AI-enabled meeting documentation solutions
Argument 8
States like Odisha, Tamil Nadu, and Tripura are advancing to second-stage implementations using meeting minutes for activity tracking
EXPLANATION
Progressive states have moved beyond basic meeting documentation to leverage the structured data for advanced governance functions. These states are now using the digitized meeting minutes to create systematic tracking and follow-up mechanisms for decisions and commitments made in meetings.
EVIDENCE
States that have adopted Sabha Saar – Odisha, Tamil Nadu, Tripura – are in second stages, looking at minutes of meeting and converting them into tools that help keep track of activities after they’ve been created.
MAJOR DISCUSSION POINT
Advanced applications of AI in governance
Argument 9
Bhashini is expanding to include 11 more languages including Assamese, Boro, Maithili, and Santali to address coverage gaps
EXPLANATION
Recognition that many panchayat languages were not initially supported by Bhashini led to a collaborative effort to expand language coverage. States are now providing linguistic expertise to train AI models for additional regional and tribal languages to ensure broader inclusion.
EVIDENCE
Realized through meetings that many people’s languages do not exist on Bhashini. Asked states to provide Bhashini with necessary expertise to train their bots. Working on 11 more languages including Assamese, Boro, Maithili, and Santali.
MAJOR DISCUSSION POINT
Expanding language coverage for digital inclusion
Argument 10
Swamitva scheme’s drone survey data is being converted to show solar panel potential for 238,000 gram panchayats using AI
EXPLANATION
The Swamitva scheme’s drone surveys, originally conducted for property rights mapping, generated valuable dense point cloud data that was being underutilized. AI analysis of this existing data now provides detailed solar installation potential for individual rooftops across hundreds of thousands of villages.
EVIDENCE
Drone surveys carried out over village habitations for property rights. Dense point cloud information was getting wasted. AI guys converted rooftops into solarization potential. Out of 3.3 lakh gram panchayats with drone surveys, 2.38 lakh now show roof-wise solar panel capacity on Gram Manchitra.
MAJOR DISCUSSION POINT
AI-powered analysis of existing government data
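Once the point cloud yields a rooftop footprint, the capacity estimate is simple arithmetic. A back-of-envelope sketch, where `usable_fraction` and `kwp_per_m2` are illustrative assumptions, not the scheme's actual parameters:

```python
def rooftop_solar_potential_kwp(roof_area_m2: float,
                                usable_fraction: float = 0.6,
                                kwp_per_m2: float = 0.15) -> float:
    """Illustrative estimate: usable roof area times panel power density.

    usable_fraction (share of roof that can hold panels) and kwp_per_m2
    (panel power density) are assumed values, not official figures.
    """
    return roof_area_m2 * usable_fraction * kwp_per_m2

# e.g. a 100 m^2 roof -> roughly 9 kWp under these assumptions
```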
Argument 11
Integration with PM Suryaghar Yojana portal enables gram panchayats to drive solar campaigns effectively
EXPLANATION
The solar potential data derived from AI analysis has been integrated with the PM Suryaghar Yojana portal, creating a seamless pathway from identification to implementation. This integration allows gram panchayats to actively promote solar adoption as a coordinated campaign with clear benefits for all stakeholders.
EVIDENCE
Integrated solar potential data with the PM Suryaghar Yojana portal. As a result, gram panchayats can drive it like a campaign, leading to greater rewards for everybody all around.
MAJOR DISCUSSION POINT
Integration of AI insights with policy implementation
Argument 12
Simple, need-addressing tools that don’t require additional procurement are readily adopted by gram panchayats
EXPLANATION
The success of digital adoption in rural areas depends on creating solutions that address genuine needs while minimizing barriers to entry. When tools are designed to solve real problems and work with existing infrastructure, gram panchayats demonstrate remarkable willingness to adopt new technologies.
EVIDENCE
UP’s 59,000 gram panchayats adopted eGram Swaraj in 40 days. If you make a simple tool that addresses their needs and is friendly, people grab it with both hands. Example of simple recording device using mobile phone for Sabha Saar.
MAJOR DISCUSSION POINT
Design principles for rural technology adoption
AGREED WITH
Shri Amit Kumar
Argument 13
Future applications include service delivery systems where citizens can vocalize demands and track application status
EXPLANATION
The next phase of AI integration envisions comprehensive service delivery systems where citizens can verbally request services, track their status, and receive updates in their local languages. This would create a complete digital governance ecosystem that responds to citizen needs in real-time.
EVIDENCE
People should be able to talk and find out if a service is available, what is the mechanism to avail it, and where their application currently stands. Integration with systems like common service centers, Bapuji Seva Kendra, Mi Seva.
MAJOR DISCUSSION POINT
Future vision for AI-enabled citizen services
Argument 14
Computer vision can automatically assign issue labels from captured images and route them to appropriate departments
EXPLANATION
Advanced AI applications can analyze images of civic issues captured through mobile interfaces and automatically categorize and route them to responsible departments. This would create an intelligent issue resolution system with built-in escalation mechanisms for unresolved problems.
EVIDENCE
Pilot in Guwahati where bus camera captured images and assigned issue labels – drain overflowing, potholes – to relevant agencies. Meri Panchayat app can capture images, next step is AI making sense of images and assigning to necessary departments with escalation mechanisms.
MAJOR DISCUSSION POINT
Computer vision applications in civic issue management
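The label-then-route flow described above (model assigns an issue label, the label maps to a responsible agency, unresolved issues escalate) can be sketched as follows. The labels, department names, and SLA threshold are illustrative assumptions, not the Guwahati pilot's actual configuration:

```python
# Hypothetical sketch of label-to-department routing with escalation.
from datetime import datetime, timedelta

# Assumed mapping from model-predicted labels to responsible agencies.
ROUTING = {
    "pothole": "Public Works",
    "drain_overflow": "Water & Sanitation",
    "streetlight_out": "Electricity",
}

def route_issue(label: str) -> str:
    """Map a predicted issue label to the responsible department."""
    return ROUTING.get(label, "General Administration")

def needs_escalation(reported_at: datetime, resolved: bool,
                     sla: timedelta = timedelta(days=7)) -> bool:
    """Escalate if an issue stays unresolved past an assumed SLA."""
    return not resolved and datetime.now() - reported_at > sla
```

In the envisioned Meri Panchayat flow, the label would come from a vision model run on the captured image; the routing table and escalation check are where the "built-in escalation mechanisms" mentioned above would live.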
Shri Amit Kumar
10 arguments, 176 words per minute, 2680 words, 910 seconds
Argument 1
AI democratization should extend beyond urban and commercial sectors to include 900+ million village residents
EXPLANATION
AI technology should not be limited to urban, industrial, or commercial applications but must be made accessible to the vast rural population of India. This inclusive approach ensures that technological advancement benefits all citizens rather than creating further digital divides between urban and rural areas.
EVIDENCE
In India, people talk about living in bullock cart stage or aspiring for bullet train. Cannot leave out 900+ million people living in villages. AI should not be only for urban, industries, commercial sector – it shouldn’t be very elitist.
MAJOR DISCUSSION POINT
Inclusive AI development for rural populations
AGREED WITH
Shri Alok Prem Nagar
Argument 2
Infrastructure like Bhashini provides necessary GPU resources and eliminates procurement challenges for ministries
EXPLANATION
Shared AI infrastructure platforms like Bhashini solve critical resource and procurement challenges that individual ministries would face when implementing AI solutions. This centralized approach provides access to expensive computing resources and technical expertise that would otherwise be difficult to obtain.
EVIDENCE
Ministry benefited from infrastructure like Bhashini, got GPUs available through the IndiaAI Mission. Otherwise procurement itself could have been a big challenge. Have a team to build applications – it takes a village to move something.
MAJOR DISCUSSION POINT
Shared infrastructure for AI implementation
Argument 3
Frugal approach using existing mobile phones eliminates investment barriers for gram panchayats
EXPLANATION
The design philosophy of requiring minimal additional investment makes AI tools accessible to resource-constrained gram panchayats. By leveraging ubiquitous mobile phones rather than requiring new hardware, the solution removes financial barriers to adoption.
EVIDENCE
Did not ask Gram Panchayat to invest anything. All they need is a mobile phone, which they have anyway. Just record and upload, rest is done by system with human in the loop provision for corrections.
MAJOR DISCUSSION POINT
Frugal innovation in AI deployment
Argument 4
Cultural change is needed as even corporate meetings still rely on manual note-taking despite available technology
EXPLANATION
The challenge of adopting AI-powered documentation tools exists even in well-resourced corporate environments where advanced technologies are available. This highlights that successful implementation requires addressing cultural resistance and changing established work practices, not just providing technology.
EVIDENCE
In corporate meetings we still have someone making notes despite being on Teams, despite using co-pilots, despite having all tools. We expect junior guy to take notes and circle back – that’s a cultural change needed.
MAJOR DISCUSSION POINT
Cultural barriers to AI adoption
Argument 5
Structured documentation will improve accountability, transparency, and systematic representation on portals
EXPLANATION
AI-enabled documentation creates structured data that can be systematically analyzed and presented, leading to improved governance outcomes. This structured approach enables better tracking of decisions, commitments, and follow-up actions, ultimately enhancing democratic accountability.
EVIDENCE
Once you have record, you can have action taken reports, follow up in next meeting. Makes it amenable to systematic representation on portals. States have started doing this – anybody can drill into gram panchayat records and see finance commission grants, plans, execution status, bills, payments, asset locations with geotags.
MAJOR DISCUSSION POINT
Benefits of structured governance documentation
AGREED WITH
Moderator
Argument 6
India’s population scale means even pilot projects exceed the performance scope of entire European countries
EXPLANATION
India’s massive population creates a unique advantage where even small-scale pilot projects operate at a scale that surpasses the entire scope of many developed countries. This scale provides valuable experience and confidence in implementing large-scale digital solutions.
EVIDENCE
Whatever we do is population scale and unparalleled because of our size. Even our POCs exceed the performance of European countries. UP with 60,000 panchayats would be in top 10 countries by population and size.
MAJOR DISCUSSION POINT
Scale advantages in AI implementation
Argument 7
Open architecture and interoperability are essential for long-term sustainability and avoiding vendor lock-in
EXPLANATION
Designing AI systems with open architecture ensures technological sovereignty and flexibility to adapt to changing needs and geopolitical circumstances. This approach prevents dependence on single vendors and maintains control over critical governance infrastructure.
EVIDENCE
Sovereignty means survivability despite geopolitical risks. Need interoperability, standards, models, infrastructure that can move around, teams that can control. Data residency within India, if trained on one system can train on another. Need to plan for when department has 10 AI use cases running.
MAJOR DISCUSSION POINT
Technological sovereignty in AI systems
AGREED WITH
Moderator
Argument 8
Platform approach is needed for future AI use cases including agentic AI, generative AI, conversational AI, and computer vision
EXPLANATION
As AI applications multiply across government departments, a unified platform approach becomes essential for managing diverse AI technologies efficiently. This strategic approach prepares for various types of AI implementations while maintaining coherent architecture and governance.
EVIDENCE
Going forward, there will be platform approach. Have to think for future AI cases – agentic AI, gen-AI, conversational AI, computer vision analytics. Same concepts from digital transformation – API-based, can integrate with anybody, futuristic, scalable, modular.
MAJOR DISCUSSION POINT
Strategic platform approach for AI governance
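The "API-based, integrate with anybody, modular" idea amounts to a single registry that many AI capabilities plug into, rather than one bespoke integration per use case. A minimal sketch, with purely illustrative service names:

```python
# Hypothetical sketch of a platform-style registry for AI use cases
# (conversational, generative, vision, ...). Names are illustrative.
from typing import Callable, Dict

class AIPlatform:
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        """Plug in a new AI capability behind a uniform interface."""
        self._services[name] = handler

    def invoke(self, name: str, payload: str) -> str:
        """Uniform entry point: any caller reaches any capability."""
        if name not in self._services:
            raise KeyError(f"no such service: {name}")
        return self._services[name](payload)

hub = AIPlatform()
hub.register("summarize", lambda text: text[:30] + "...")
```

Because capabilities sit behind one interface, a department running ten AI use cases manages one integration surface, and a service can be swapped out without touching its callers, which is the vendor-lock-in point made elsewhere in the discussion.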
Argument 9
India can lead the world in population-scale multilingual AI for governance based on successful digital infrastructure experience
EXPLANATION
India’s proven track record with large-scale digital infrastructure projects like Aadhaar, UPI, and GST, combined with its multilingual AI capabilities, positions it to become a global leader in AI-powered governance. The country’s frugal innovation approach delivers solutions at significantly lower costs than Western alternatives.
EVIDENCE
We have done Aadhaar, UPI, Fastag, GST, Income Tax – have confidence we can do anything at scale. Will do 10 times cheaper than Western world and better. Have evolved with policies like DPDP Act, consent-based usage. Building our own LLMs and SLMs.
MAJOR DISCUSSION POINT
India’s potential for global AI leadership
AGREED WITH
Moderator
Argument 10
AI should strengthen participative governance by helping people assemble in Gram Sabhas and decide on resource allocation
EXPLANATION
AI tools should enhance rather than replace democratic processes by making it easier for citizens to participate in local governance decisions. The technology should support the fundamental principle of participative governance where communities collectively decide on resource allocation and priorities.
EVIDENCE
Panchayati Raj itself is participative governance – people assemble in Gram Sabha and decide on money allocation and prioritization. AI tools like Pramana, Sabha Saar, and Pancham can strengthen this participative governance and democratization.
MAJOR DISCUSSION POINT
AI as enabler of democratic participation
Moderator
6 arguments, 133 words per minute, 640 words, 286 seconds
Argument 1
Language AI is critical for ensuring digital governance platforms are inclusive and participatory, increasing citizen trust and participation in Gram Sabhas
EXPLANATION
The moderator emphasizes that India’s last mile operates in local languages and dialects, making language AI essential for solving accessibility problems. Digital governance platforms must be inclusive and participatory to build citizen trust and encourage active participation in local governance structures like Gram Sabhas.
EVIDENCE
India’s last mile operates in local languages and dialects, solving that problem is crucial for digital governance platforms to be inclusive and participatory
MAJOR DISCUSSION POINT
Language AI as enabler of inclusive digital governance
AGREED WITH
Shri Alok Prem Nagar
Argument 2
Sabha Saar’s launch demonstrates significant impact with over 115,000 Gram Sabha meetings processed using AI-enabled voice-to-text summarization
EXPLANATION
The moderator highlights the measurable success of Sabha Saar, an AI-powered meeting summarization tool launched by MOPR. The processing of over 115,000 Gram Sabha meetings represents substantial adoption and impact of AI technology in rural governance documentation.
EVIDENCE
With Sabha Saar’s launch on 14th August 2025, MOPR introduced an AI-enabled voice-to-text meeting summarization tool powered by Bhashini ASR Services. As of 4th February 2026, over 1,15,115 Gram Sabha meetings have been processed
MAJOR DISCUSSION POINT
Measurable impact of AI in rural governance
Argument 3
Structured documentation through AI improves transparency, participation tracking, and monitoring of meeting frequency and agenda quality
EXPLANATION
The moderator inquires about whether AI-enabled structured documentation leads to improved governance outcomes. This includes better transparency in decision-making processes, enhanced ability to track citizen participation, and improved monitoring of how frequently meetings occur and the quality of their agendas.
MAJOR DISCUSSION POINT
AI’s impact on governance transparency and accountability
AGREED WITH
Shri Amit Kumar
Argument 4
Open architecture and interoperability are critical for long-term sustainability and avoiding vendor lock-in in AI systems
EXPLANATION
The moderator raises the important question of how critical open architecture is for ensuring that AI systems remain sustainable over time and don’t create dependencies on specific vendors. This addresses the strategic importance of maintaining flexibility and control over critical governance infrastructure.
MAJOR DISCUSSION POINT
Technological sovereignty and sustainability in AI implementation
AGREED WITH
Shri Amit Kumar
Argument 5
India has the potential to lead the world in population-scale multilingual AI for governance
EXPLANATION
The moderator suggests that India’s experience in building public digital infrastructure for AI at scale, combined with its multilingual capabilities, positions it to become a global leader in AI-powered governance. The question focuses on how to balance this scale with accountability and public trust.
EVIDENCE
India is building public digital infrastructure for AI at scale
MAJOR DISCUSSION POINT
India’s potential for global leadership in AI governance
AGREED WITH
Shri Amit Kumar
Argument 6
AI built on public stack and powered by language inclusion can become the strongest enabler of participatory governance in the 21st century
EXPLANATION
The moderator proposes that if Panchayati Raj institutions are the foundation of democracy, then AI systems built on public infrastructure and enhanced with comprehensive language support could fundamentally transform participatory governance. This represents a vision for AI as a democratic enabler rather than a replacement for human decision-making.
EVIDENCE
Panchayati Raj institutions are the foundation of democracy
MAJOR DISCUSSION POINT
AI as enabler of 21st century democratic participation
Agreements
Agreement Points
Language barriers in digital governance systems prevent meaningful citizen participation
Speakers: Shri Alok Prem Nagar, Moderator
Language barriers in English-only systems prevent rural citizens from understanding public money usage and participating effectively
Language AI is critical for ensuring digital governance platforms are inclusive and participatory, increasing citizen trust and participation in Gram Sabhas
Both speakers agree that language accessibility is fundamental to inclusive digital governance, with English-only systems creating barriers that prevent rural citizens from understanding and participating in local governance processes.
AI should be inclusive and accessible to rural populations, not limited to urban/commercial sectors
Speakers: Shri Amit Kumar, Shri Alok Prem Nagar
AI democratization should extend beyond urban and commercial sectors to include 900+ million village residents
Simple, need-addressing tools that don’t require additional procurement are readily adopted by gram panchayats
Both speakers emphasize that AI technology must be designed for and made accessible to rural populations through simple, practical solutions that address real needs without creating barriers to adoption.
Structured documentation through AI improves governance transparency and accountability
Speakers: Shri Amit Kumar, Moderator
Structured documentation will improve accountability, transparency, and systematic representation on portals
Structured documentation through AI improves transparency, participation tracking, and monitoring of meeting frequency and agenda quality
Both speakers agree that AI-enabled structured documentation creates systematic records that enhance democratic accountability and enable better tracking of governance processes.
Open architecture and technological sovereignty are essential for sustainable AI systems
Speakers: Shri Amit Kumar, Moderator
Open architecture and interoperability are essential for long-term sustainability and avoiding vendor lock-in
Open architecture and interoperability are critical for long-term sustainability and avoiding vendor lock-in in AI systems
Both speakers emphasize the critical importance of maintaining technological independence and flexibility through open architecture to ensure long-term sustainability and avoid dependency on specific vendors.
India has the potential to lead globally in population-scale AI governance
Speakers: Shri Amit Kumar, Moderator
India can lead the world in population-scale multilingual AI for governance based on successful digital infrastructure experience
India has the potential to lead the world in population-scale multilingual AI for governance
Both speakers express confidence in India’s ability to become a global leader in AI-powered governance, leveraging its experience with large-scale digital infrastructure and multilingual capabilities.
Similar Viewpoints
Both speakers advocate for simple, accessible technology solutions that work with existing infrastructure (like mobile phones) and provide immediate, tangible benefits to users without requiring significant investment or technical expertise.
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Bhashini integration enables citizens to view panchayat information in their local languages with a single click, creating “magic” for users
Frugal approach using existing mobile phones eliminates investment barriers for gram panchayats
Both speakers view AI as an enabler rather than replacement for democratic processes, emphasizing that technology should strengthen participative governance and help citizens engage more effectively in local decision-making.
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Language AI is critical for inclusive digital governance platforms that increase citizen trust and participation in Gram Sabhas
AI should strengthen participative governance by helping people assemble in Gram Sabhas and decide on resource allocation
Both speakers highlight India’s unique advantage in implementing solutions at unprecedented scale, with successful examples demonstrating that large-scale digital transformation is not only possible but can be achieved rapidly in rural contexts.
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Uttar Pradesh successfully onboarded 59,000 gram panchayats to eGram Swaraj in just 40 days, demonstrating scalability
India’s population scale means even pilot projects exceed the performance scope of entire European countries
Unexpected Consensus
Cultural resistance to AI adoption exists even in well-resourced environments
Speakers: Shri Amit Kumar, Shri Alok Prem Nagar
Cultural change is needed as even corporate meetings still rely on manual note-taking despite available technology
Simple, need-addressing tools that don’t require additional procurement are readily adopted by gram panchayats
Unexpectedly, both speakers acknowledge that technology adoption challenges are not just about resources or infrastructure, but about cultural change. The recognition that even corporate environments resist AI adoption despite having advanced tools available suggests that successful implementation requires addressing human behavior and established practices, not just providing technology.
Shared infrastructure approach is more effective than individual ministry solutions
Speakers: Shri Amit Kumar, Shri Alok Prem Nagar
Infrastructure like Bhashini provides necessary GPU resources and eliminates procurement challenges for ministries
Integration with PM Suryaghar Yojana portal enables gram panchayats to drive solar campaigns effectively
Both speakers unexpectedly converge on the value of shared, integrated infrastructure rather than siloed solutions. This consensus suggests a shift from traditional government approaches where each ministry develops independent systems, toward collaborative platforms that leverage common resources and create synergies across different government functions.
Overall Assessment

The speakers demonstrate remarkable consensus across multiple dimensions of AI implementation in rural governance, including the importance of language accessibility, the need for inclusive and simple technology solutions, the value of structured documentation for transparency, the critical nature of open architecture for sustainability, and India’s potential for global leadership in this space.

Very high level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers build upon each other’s arguments and share a unified vision of AI as an enabler of democratic participation rather than a replacement for human decision-making. This strong alignment suggests a mature understanding of both the opportunities and challenges in implementing AI for rural governance, with practical experience informing their shared perspectives. The consensus has significant implications for scaling AI solutions across India’s rural governance systems and potentially serving as a model for other developing nations.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion shows remarkable consensus among all speakers on the fundamental goals and approaches to AI implementation in rural governance. There are no significant disagreements identified.

Very low disagreement level. The speakers demonstrate strong alignment on objectives, methods, and vision for AI in rural governance. The few partial agreements represent complementary perspectives rather than conflicting viewpoints, which suggests a mature and collaborative approach to policy implementation that could facilitate effective execution of AI initiatives in rural India.

Partial Agreements
Both speakers agree that technology adoption requires addressing user needs and behavioral change, but they emphasize different aspects – Alok focuses on making tools simple and accessible, while Amit emphasizes the need for cultural transformation even in well-resourced environments
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Simple, need-addressing tools that don’t require additional procurement are readily adopted by gram panchayats
Cultural change is needed as even corporate meetings still rely on manual note-taking despite available technology
Both agree on the need for inclusive technology that serves rural populations, but approach it differently – Alok focuses specifically on language barriers as the key obstacle, while Amit frames it as a broader issue of preventing AI from becoming elitist
Speakers: Shri Alok Prem Nagar, Shri Amit Kumar
Language barriers in English-only systems prevent rural citizens from understanding public money usage and participating effectively
AI democratization should extend beyond urban and commercial sectors to include 900+ million village residents
Takeaways
Key takeaways
AI-powered language solutions like Bhashini have successfully democratized rural governance by enabling citizens to access panchayat information in their local languages, transforming participation from passive to active engagement
Sabha Saar has revolutionized meeting documentation by reducing the time burden on panchayat secretaries from 65% of their workload to a simple recording and upload process, with over 115,000 meetings processed
India’s frugal innovation approach using existing infrastructure (mobile phones) has enabled rapid scalability – demonstrated by UP’s 59,000 panchayats onboarding in 40 days
AI applications in rural governance extend beyond documentation to include solar potential mapping from drone data, service delivery systems, and automated issue routing from visual inputs
Open architecture and interoperability are critical for long-term sustainability, avoiding vendor lock-in, and maintaining technological sovereignty while building population-scale solutions
The success of MOPR’s AI initiatives demonstrates that rural areas can adopt advanced technology when solutions address real needs without requiring additional investment or complex procedures
Resolutions and action items
Bhashini team is working on adding 11 more languages including Assamese, Boro, Maithili, and Santali to address coverage gaps
States like Odisha, Tamil Nadu, and Tripura are advancing to second-stage implementations using Sabha Saar meeting minutes for systematic activity tracking and follow-up
Department of Drinking Water and Sanitation has approached MOPR to use Bhashini for Village Water Committee meetings, with initial team interactions underway
Next phase development includes creating service delivery systems where citizens can vocalize demands, track application status, and receive automated issue routing
Implementation of computer vision systems to automatically analyze captured images, assign issue labels, and route problems to appropriate departments with escalation mechanisms
Development of agenda population systems that can automatically generate meeting agendas based on previous meeting commitments and follow-up requirements
Unresolved issues
Specific timeline and resource allocation for expanding Bhashini to additional languages and dialects not yet covered
Detailed implementation strategy for scaling AI solutions across all 2.5 lakh gram panchayats uniformly
Integration challenges and coordination mechanisms between different ministries (Rural Development, Agriculture) adopting similar AI approaches
Balancing automation levels in AI systems – avoiding both 100% autonomous operations and 100% human-in-the-loop processes
Specific guardrails and monitoring mechanisms needed for AI systems in governance to ensure accountability and prevent misuse
Resource requirements and capacity building programs needed to train panchayat functionaries on advanced AI tools beyond current implementations
Suggested compromises
Adopting a platform approach for future AI implementations that can accommodate different types of AI (agentic, generative, conversational, computer vision) while maintaining interoperability
Implementing hybrid AI systems that balance automation with human oversight – not fully autonomous but not requiring human approval for every transaction
Using existing infrastructure (mobile phones, connectivity) rather than requiring new hardware investments to ensure adoption without financial barriers
Leveraging open-source models and infrastructure like Bhashini while maintaining the flexibility to switch technologies to avoid vendor lock-in
Starting with simple, need-addressing tools that demonstrate immediate value before introducing more complex AI applications
Allowing states to provide language expertise to Bhashini for training regional language bots rather than centralized development of all dialects
Thought Provoking Comments
I happened to attend a Gram Sabha in the state of Karnataka. I was there for something like 45 minutes and I was felicitated and sat on stage. And I didn’t understand a thing. And then it struck me, you know, I had this thing that how do you expect these people really to relate to what is happening? Because it is public money.
This personal anecdote represents a profound moment of self-awareness and empathy. Despite being a senior government official, Nagar honestly admits his disconnect from grassroots governance due to language barriers. This insight challenges the assumption that digital governance platforms can be effective without addressing linguistic accessibility.
This comment established the foundational premise for the entire discussion about language AI in governance. It shifted the conversation from technical capabilities to human-centered design, making the case for why Bhashini integration was not just useful but essential for democratic participation.
Speaker: Shri Alok Prem Nagar
So the idea is not to make it very, very urbanized, you know, very, very kind of elitist idea that, you know, that AI is only for urban, AI is only for industries, AI is only for commercial sector.
This comment challenges the prevailing narrative about AI being primarily an urban, commercial technology. It reframes AI as a democratizing force that should serve rural populations, fundamentally shifting how we think about AI’s role in society.
This perspective broadened the discussion beyond technical implementation to questions of equity and inclusion. It elevated the conversation to address systemic inequalities in technology access and positioned rural AI initiatives as a matter of social justice rather than just efficiency.
Speaker: Shri Amit Kumar
We found out through a survey that what really hurts a panchayat secretary is not to be able to produce the minutes of meeting in time, which are very important, which are the only record of a panchayat’s proceedings.
This insight demonstrates the power of user-centered research in identifying real pain points rather than assumed problems. It shows how understanding actual user needs led to the development of Sabha Saar, moving beyond top-down technology deployment to bottom-up problem solving.
This comment shifted the discussion from technology-first to problem-first thinking. It illustrated how effective governance technology emerges from understanding ground realities, influencing the conversation toward the importance of user research and responsive design in government systems.
Speaker: Shri Alok Prem Nagar
So my point was that you have to be ready with a product that addresses their needs and is friendly. Of course, my need was that I needed the money well accounted for, and their need was a system that could make it very easy for them to do it. So we met halfway.
This comment reveals a sophisticated understanding of stakeholder alignment in government technology. It shows how successful implementation requires balancing accountability needs (government’s perspective) with usability needs (users’ perspective), rather than imposing one-sided solutions.
This insight deepened the discussion about implementation challenges and success factors. It moved the conversation beyond technical features to the politics and psychology of technology adoption, emphasizing the importance of mutual benefit in government-citizen technology relationships.
Speaker: Shri Alok Prem Nagar
So the idea is also to look little bit long term… Problem of management. So that’s where I think we need to plan better for future… there will be a platform approach.
This comment introduces critical thinking about AI governance and scalability challenges. It acknowledges that while starting with individual use cases is necessary, governments need architectural thinking to avoid creating fragmented, unmanageable AI systems.
This shifted the discussion from celebrating current successes to anticipating future challenges. It introduced complexity about AI governance, interoperability, and the need for systematic approaches to AI deployment in government, elevating the conversation to strategic planning level.
Speaker: Shri Amit Kumar
I am not throwing it all open out to AI. I don’t wear T-shirts saying I love AI or something, but I have a problem and it needs fixing and I need to be able to know what aspects of AI can help me fix that in the best possible manner.
This comment provides a refreshingly pragmatic perspective on AI adoption, countering the hype-driven narrative often surrounding AI discussions. It emphasizes problem-solving over technology enthusiasm, which is particularly valuable coming from someone who has successfully implemented AI solutions.
This grounded the entire discussion in practical reality and shifted the tone from technological evangelism to measured, problem-focused implementation. It provided a framework for thinking about AI adoption that other participants and the audience could relate to, making the conversation more accessible and credible.
Speaker: Shri Alok Prem Nagar
Overall Assessment

These key comments fundamentally shaped the discussion by establishing it as a human-centered, equity-focused conversation about AI in governance rather than a purely technical discussion. Nagar’s opening anecdote about language barriers set the stage for understanding AI as a tool for democratic inclusion, while Kumar’s comments about AI not being elitist reinforced this theme. The insights about user research and stakeholder alignment shifted the focus to implementation wisdom rather than just technological capabilities. The discussion evolved from showcasing successes to examining deeper questions about scalability, governance, and the pragmatic approach needed for sustainable AI adoption in government. Together, these comments created a nuanced dialogue that balanced celebration of achievements with honest assessment of challenges, making it both inspiring and practically valuable for others working in digital governance.

Follow-up Questions
How can AI tools be integrated with other ministries delivering last mile services such as Ministry of Agriculture and Farmers Welfare?
This question was raised to understand how lessons from MOPR’s AI journey could be applied to other ministries, but wasn’t fully explored in terms of specific implementation strategies
Speaker: Moderator
What are the specific technical requirements and challenges for expanding Bhashini to support the 11 additional languages mentioned (Assamese, Bodo, Maithili, Santali, etc.)?
While mentioned that states are working on 11 more languages, the technical complexities and resource requirements for this expansion weren’t detailed
Speaker: Shri Alok Prem Nagar
How can the image recognition pilot from Guwahati (bus camera capturing and auto-assigning issues) be scaled and integrated with existing panchayat systems?
This was mentioned as a potential next step for automated issue detection and assignment, but implementation details and scalability challenges weren’t discussed
Speaker: Shri Alok Prem Nagar
What specific mechanisms need to be developed for AI-powered service delivery where people can vocally request services and track their status?
This was identified as the next frontier but lacks detailed technical architecture and implementation roadmap
Speaker: Shri Alok Prem Nagar
How can the dense point cloud information from drone surveys be further utilized beyond solar potential mapping?
While solar potential was successfully extracted, other potential applications of this rich data weren’t explored
Speaker: Shri Alok Prem Nagar
What are the specific training and capacity building requirements for scaling AI adoption across all 2.5 lakh gram panchayats?
Training programs were mentioned as needed but specific curriculum, delivery methods, and resource requirements weren’t detailed
Speaker: Shri Alok Prem Nagar
How can AI tools be integrated with spatial development plans to provide better visualization and citizen engagement?
While visualization was mentioned as successful, the role of AI in creating and presenting these visualizations wasn’t fully explored
Speaker: Shri Alok Prem Nagar
What are the technical specifications and governance framework needed for a platform approach to manage multiple AI use cases across government departments?
The need for a platform approach was identified but specific technical architecture and management frameworks weren’t detailed
Speaker: Shri Amit Kumar
How can the Department of Drinking Water and Sanitation’s Village Water Committees integrate with Bhashini, and what are the technical requirements for this expansion?
Initial interactions were mentioned but specific integration challenges and technical requirements weren’t discussed
Speaker: Shri Alok Prem Nagar
What are the specific accuracy metrics and improvement mechanisms for Sabha Sar across different languages and dialects?
While great accuracy was mentioned, specific metrics and continuous improvement processes weren’t detailed
Speaker: Shri Amit Kumar

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion AI & Cybersecurity _ India AI Impact Summit


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion centered on the launch and development of the Global Network of Centres for Exchange and Cooperation on AI Capacity Building, a UN-supported initiative aimed at democratizing AI access and reducing the global digital divide. The network emerged from recommendations by the UN Secretary General’s high-level advisory body on AI and was initiated through collaboration between Saudi Arabia and Kenya during the General Assembly.


S. Krishnan from India’s Ministry of Electronics and IT emphasized their comprehensive approach to AI education, announcing plans to teach AI across all university courses and to school children starting from third grade. Amit Shukla from India’s Ministry of External Affairs highlighted the global AI capacity divide, noting that only countries with AI capabilities can fully benefit from the technology, and announced that 14 countries have already nominated institutions to join the network.


The panel discussion revealed the network’s rapid growth and collaborative spirit. Dr. Abdurrahman Habib from Saudi Arabia shared impressive results from their Women Elevate program, which has trained 6,000 women from 86 countries in AI skills with an 89% completion rate. Professor Balaraman Ravindran from IIT Madras, representing India’s first center in the network, emphasized that AI literacy should extend beyond technical expertise to enable everyone to use AI effectively in their respective fields.


Seydina Moussa Ndiaye from Senegal, a former UN advisory body member, explained how the network addresses the gap between countries that understand AI trends and those that need capacity building support. The network has established a cooperation framework and is developing blueprints to help countries build their own centers. Looking toward 2030, participants envision the network creating meaningful global dialogue where all countries can contribute equally to AI discussions, ensuring no one is left behind in the AI revolution.


Keypoints

Major Discussion Points:

AI Education and Training Initiatives: Multiple speakers emphasized comprehensive AI education programs, including India’s policy to teach AI from third grade through university level, and Saudi Arabia’s Women Elevate program that has trained over 6,000 women from 86 countries with an 89% completion rate.


Global Network for AI Capacity Building: The central focus was on establishing and expanding the UN Global Network of Centres for Exchange and Cooperation on AI Capacity Building, which currently includes 14 countries (Brazil, China, Ethiopia, Guinea, India, Kazakhstan, Kenya, Rwanda, Saudi Arabia, Senegal, Slovakia, South Africa, Trinidad and Tobago, and Vietnam).


Bridging the Global Digital Divide: Speakers consistently addressed the need to ensure equitable access to AI benefits, particularly for Global South countries that face resource and access constraints, emphasizing that without collaborative efforts, AI could create unprecedented divides between nations.


Institutional Innovation and Governance: Discussion of the need for new institutional frameworks to guide AI development, moving beyond private sector-led initiatives to create collaborative governance structures that enable data sharing, regional compute centers, and shared best practices.


Meaningful Human-AI Coexistence: The conversation concluded with reflections on maintaining human identity, community, agency, and purpose in an increasingly AI-driven world, emphasizing the importance of ensuring technology serves humanity rather than the reverse.


Overall Purpose:

The discussion aimed to launch and promote the UN Global Network of Centres for Exchange and Cooperation on AI Capacity Building, showcasing how member countries are collaborating to democratize AI access, share expertise, and ensure no nation is left behind in the AI revolution. The session served to highlight successful capacity-building initiatives and encourage broader international participation in the network.


Overall Tone:

The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm and pride in their achievements while emphasizing partnership and mutual support. The tone was formal yet warm, with participants celebrating milestones and expressing genuine excitement about the network’s rapid growth. There was a sense of urgency about addressing global AI inequities, but this was balanced with confidence in the collaborative solutions being developed. The closing remarks became more philosophical and reflective, adding depth to the practical discussions that preceded them.


Speakers

Speakers from the provided list:


S. Krishnan – Secretary, Ministry of Electronics and IT (India)


Amit Shukla – Joint Secretary, Cyber Diplomacy Division, Ministry of External Affairs (India)


Mehdi Snene – Senior Advisor to the UN Secretary General’s Tech Envoy


Abdurrahman Habib – Representative from Kingdom of Saudi Arabia, leading one of the UNESCO centers in Saudi Arabia


Balaraman Ravindran – Professor at IIT Madras (India), member of the UN scientific panel


Seydina Moussa Ndiaye – Former UN Secretary General high-level advisory body on AI member, representative from Senegal


Fitsum Assamnew Andargie – Representative from Ethiopia, part of AFRD Labs network


Vilas Dhar – President, Patrick J. McGovern Foundation


Anne Marie Engtoft Meldgaard – Tech Ambassador from Denmark


Eugenio Garcia – Ambassador for Technology and Innovation from the Government of Brazil


Moderator – Session moderator (role/title not specified)


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speakers names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion focused on the Global Network of Centres for Exchange and Cooperation on AI Capacity Building, a UN-supported initiative designed to democratise AI access and address the global digital divide. The network represents a member-state-led approach to international AI governance, with initial momentum from Saudi Arabia and Kenya’s collaborative call during the General Assembly.


Current Network Membership and Institutional Framework

The network has achieved significant early progress, with 14 countries nominating institutions: Brazil, China, Ethiopia, Guinea, India, Kazakhstan, Kenya, Rwanda, Saudi Arabia, Senegal, Slovakia, South Africa, Trinidad and Tobago, and Vietnam. This diverse geographical representation demonstrates global appetite for collaborative AI capacity building.


Seydina Moussa Ndiaye, a former member of the UN high-level advisory body on AI, provided strategic context for the network’s development. The network was conceived alongside two other key initiatives: the UN AI scientific panel and the global AI dialogue. While the panel provides evidence-based analysis and the dialogue facilitates discussion, the network addresses a critical gap by ensuring all countries have sufficient AI understanding to participate meaningfully in global conversations.


The network has moved beyond conceptual discussions to practical implementation. A cooperation framework has been adopted during workshops, with significant progress made at gatherings including one in Dakar where six centres participated. The network has developed an “offer sheet” system where participating centres specify services they can provide, facilitating resource sharing and collaboration.


Demonstrated Success Stories and Practical Implementation

Dr. Abdurrahman Habib from Saudi Arabia shared remarkable results from their Women Elevate programme, which exemplifies successful large-scale AI capacity building. The programme aims to empower 25,000 women globally in AI over three years through online training culminating in Microsoft AI 900 certification. In just one year, they have achieved impressive results: 29,000 women registered, 6,000 completed training from 86 countries, with an extraordinary 89% completion rate for the 26-hour, five-to-six-week programme.


These statistics demonstrate that high-quality, inclusive AI education can be delivered effectively across diverse global contexts. The programme has been successfully adapted for public servants, with Kenya training over 300 women in their foreign affairs ministry, proving the programme’s versatility and practical applicability.


Dr. Habib noted that in the Global South, women often view technology and STEM fields as preferred career paths, creating natural enthusiasm for AI education that can be leveraged for capacity building initiatives.


National AI Education Strategies and Training Approaches

S. Krishnan from India’s Ministry of Electronics and IT outlined a comprehensive approach to AI education serving as a model for inclusive AI literacy. India’s multi-tiered strategy encompasses: working with industry bodies on retraining programmes for existing workers, ensuring AI is taught across all university courses regardless of discipline, and implementing policy to teach AI to school children starting from third grade.


Professor Balaraman Ravindran from IIT Madras provided candid insights into the educational challenges AI presents, admitting: “I don’t even know how to teach anymore.” This reveals the profound disruption AI is causing to traditional educational paradigms, as students increasingly embrace self-learning and resist conventional classroom settings. Ravindran emphasised that AI capacity building should focus not solely on creating researchers, but on enabling people to “use AI to do whatever you want to do better,” since every field will be influenced by this technology.


India’s international capacity building efforts include their ITEC programme, which has provided training to officials from 160 countries since 1964, with plans to expand AI-specific courses significantly.


Global AI Capacity Divide and International Cooperation

Amit Shukla from India’s Ministry of External Affairs highlighted a critical paradox: while AI enables human welfare and progress, “only countries with AI capabilities can reap actual AI benefits to their fullest potential.” Without collective action to ensure equitable sharing of AI benefits, this technology could create “the widest unfathomable divide among countries.”


Countries from the Global South face particular challenges including resource constraints and limited access to AI infrastructure, inhibiting their ability to harness AI for economic development. This reality necessitates collaborative international efforts to bridge the emerging AI capacity divide through initiatives like this network.


Institutional Innovation and Governance Framework

Vilas Dhar from the Patrick J. McGovern Foundation provided analysis of broader institutional challenges facing AI governance, asking: while AI technology innovation proceeds rapidly, “where is the innovation in the institutions that will guide what the future of AI looks like?” He noted that no country is so far ahead in institutional AI work that others cannot catch up, but neither is any country so far behind that they are out of the race entirely.


Dhar emphasised that effective AI governance requires moving beyond frameworks to develop “muscle memory” through practice and collaboration. The network provides mechanisms for countries to build practical experience in sharing data, collaborating on compute resources, and developing joint approaches to AI challenges that transcend national boundaries.


Anne Marie Engtoft Meldgaard, Denmark’s Tech Ambassador, noted that in the European Union, similar institutional development would typically take much longer, highlighting the network’s remarkable efficiency in moving from concept to implementation.


Future Vision and Development Goals

Dr. Fitsum Assamnew Andargie from Ethiopia, representing the AFRD Labs network, envisioned the network contributing to “distribution of capacity” encompassing both computational power and human expertise, ensuring “no one left behind” with people capable of conducting AI research and using AI to improve livelihoods.


Professor Ravindran offered an ambitious vision: the network should contribute to countries advancing their AI readiness so significantly that the UN would need to redesign its categorisation systems for AI preparedness, elevating all countries to the highest level rather than having them distributed across multiple readiness levels.


Dr. Habib projected exponential growth for the network, emphasising its primary value in creating a platform for sharing experiences and programmes that previously did not exist, contributing to meaningful global AI dialogue where all countries participate as equals.


Human-Centred AI Development and Meaningful Coexistence

Ambassador Meldgaard articulated a framework for “meaningful coexistence with technology” requiring four essential elements: identity (maintaining humanity in a technological world), community (strengthening human connections despite digitisation), agency (ensuring meaningful control over AI’s role in people’s lives), and purpose (collectively determining what we want from AI technology).


Her vivid metaphor crystallised concerns about current AI development: “I would love for the AI to empty the dishwasher whilst I write poetry and play with my kids. But right now, we’re in a trajectory where I am emptying the dishwasher whilst the AI is playing with my kids and writing poetry.” This observation challenges assumptions about AI’s beneficial development and underscores the importance of capacity building that includes critical thinking about AI’s role, not merely technical skills.


International Support and Commitments

The session demonstrated strong international commitment to the network’s success. Ambassador Eugenio Garcia from Brazil announced his country’s participation through two specific universities—the Federal University of Pernambuco and the Federal University of Rio Grande do Sul—emphasising that the network complements the AI track of the Global Digital Compact and strengthens multilateralism.


Next Steps and Strategic Implementation

The network is preparing for its next meeting in Riyadh before the July summit, with plans to continue expanding membership and programmes. Future developments include creating blueprints to help countries without existing centres establish their own AI capacity building institutions.


Conclusion

The discussion revealed remarkable consensus on the urgent need for global AI capacity building through international cooperation, comprehensive education, and human-centred approaches. The Global Network of Centres for Exchange and Cooperation on AI Capacity Building represents a collaborative approach to ensuring AI development serves humanity’s collective interests rather than exacerbating existing inequalities.


The network’s early achievements, demonstrated through concrete successes like the Women Elevate programme and rapid institutional development across 14 countries, provide evidence that ambitious global AI capacity building is practically achievable. With strong international support and clear next steps including the upcoming Riyadh meeting, the network is positioned to make significant contributions toward ensuring the AI revolution benefits all of humanity rather than creating unprecedented global divides.


Session transcript: Complete transcript of the session
S. Krishnan

With industry bodies, we are working on retraining. Through the higher education department, we are looking at making sure that AI is taught across all courses in all universities and all institutions, so that everyone, irrespective of which branch they study, is aware of how AI can make a difference to them. And our school education department has announced as a matter of policy that AI will be taught to school children right from class three, from third grade. So in that sense, we are looking to make AI truly inclusive, to train the next generation to adapt to AI, and to ensure that those who have already joined the workstream are also retrained for this purpose. I'm once again delighted that this event is taking place, and it will generate more commitments to further strengthen this global network of institutions.

Thank you very much.

Moderator

Thank you, Mr. Krishnan, for these insightful and delightful remarks, especially around democratizing access to AI resources as well as keeping humans at the center. Now I would like to call upon Shri Amit Shukla, Joint Secretary, Cyber Diplomacy Division, Ministry of External Affairs. Can we have a round of applause for Mr. Shukla, please?

Amit Shukla

Shri S. Krishnan, Secretary, Ministry of Electronics and IT, my dear friend Professor Ravindran, Excellencies, distinguished guests, ladies and gentlemen. Artificial intelligence has emerged today as an enabler for the welfare and progress of humanity. Whenever AI is deployed with purpose, it can catalyze economic growth, social empowerment, and governance for all. Yet only countries with AI capabilities can reap actual AI benefits to their fullest potential. We must collectively address this anomaly and ensure that the benefits of AI are equitably shared. Else, this very revolutionary technology could only bring the widest, unfathomable divide among countries. Countries, especially from the Global South, face resource and access constraints. This inhibits their pursuit of harnessing AI for economic and development opportunities.

A collaborative international effort becomes highly relevant to bridge this emerging AI capacity divide. India, with this conviction, has been a strong proponent of international AI capacity building cooperation, especially for the Global South. Our long-standing ITEC program is testimony to this belief. Under the ITEC program, we have imparted training to thousands of officials from 160 countries since 1964. We have deployed our vast and rich network of institutions and training facilities for this purpose. Annually, around 10,000 fully funded in-person training opportunities are offered across nearly 400 courses at 100 eminent institutes in India. Some of these training courses are AI courses, and we intend to expand this further. In this spirit, we stand with the initiatives of the United Nations and welcome the establishment of the Global Network of Centres for Exchange and Cooperation on AI Capacity Building.

The network would bring unique expertise and perspectives from different regions of the world. This diversity would only enrich the purpose of the network in its assessment of local AI capacity needs. The network must truly facilitate sharing of expertise, training, and use cases, and developing infrastructure for countries. We have developed our expertise in successful, innovative AI technologies. Our achievements in integrating and adopting DPI solutions into AI to leverage technology for social and economic progress could add value to the network. The AI capacity building models under the India AI Mission would be relevant for the network. I congratulate all the participating countries on the launch of the framework for the network. As we stand today, we have 14 countries already nominating institutions.

These are Brazil, China, Ethiopia, Guinea, India, Kazakhstan, Kenya, Rwanda, Saudi Arabia, Senegal, Slovakia, South Africa, Trinidad and Tobago, and Vietnam. It is a matter of satisfaction that IIT Madras from India took the initial steps in this endeavor. Let today's steps of the network build tomorrow's bigger strides. Thank you.

Moderator

Sorry, could I make a quick announcement to have all the panelists and the speakers on the stage for a quick photo? Mr. Shukla? We will have a quick photo opportunity with all of the featured speakers for this session, and we will proceed with the panel right after. Thank you for your patience. Thank you, Secretary S. Krishnan and Joint Secretary Shri Amit Shukla. We will now proceed with the panel discussion. I would now like to invite Dr. Mehdi Snene, Senior Advisor to the UN Secretary General's Tech Envoy, to please introduce the panelists and moderate the panel. Thank you.

Mehdi Snene

He's coming back. Thank you so much, your excellencies, for setting the discussion regarding the Global Network of Centres for Exchange and Cooperation on AI Capacity Building. It's truly my honor today to welcome these distinguished panelists to talk about the network: to explain how it started, where we are heading, and what the biggest plans we have for the network are. I'm Mehdi from the UN, and today I'm happy to have Seydina Moussa from the centre of Senegal, Abdurrahman Habib from the Kingdom of Saudi Arabia, Fitsum from Ethiopia, and Dr. Ravi from IIT Madras. I'll start with a chronological order of how we set up the network. It started with an initial call from the Kingdom of Saudi Arabia and Kenya during the General Assembly, calling for member states to join their effort to build a global network on AI capacity building that could truly leave no one behind, in particular for the needs of building national AI strategies and building local and sovereign national AI capacities.

Dr. Abdurrahman, I'll start with you, following the chronology of the genesis of the network. I know that there are a lot of initiatives related to AI coming from the Kingdom of Saudi Arabia. You are leading one of the centers that has already been established with UNESCO, and I see that there are a lot of UN agencies also collaborating and cooperating on building this kind of network beyond the actual one. So I would like to start by asking: as the lead of one of the UNESCO centers in Saudi Arabia, how do you see the cooperation among the different centres, but also among the different networks held by different international organizations today?

Abdurrahman Habib

Thank you very much. Thank you for having us. It's a pleasure to be here and to see our friends. I'm so excited and happy to see the dream coming true. A couple of years ago, we started multiple meetings saying we want to work on capacity. We think capacity building is one of the most critical parts, and at the same time, it needs a lot of investment, and we need to come together to build it together. Long story short, we sat with multiple countries. I'm very proud that we and Kenya, especially Ambassador Philip, he's not here, but we managed to put together the first meeting in order for us to talk about it.

It was very challenging at the beginning. That wasn't part of the plan, wasn't part of an official gathering, but we believe that capacity building needs to be a network. We need to work together, not scattered, and we need to support each other in programs. In Saudi, one of our strengths in the past couple of years has actually been capacity building, and that's why we tried to show what's been done in Saudi, especially for the Global South, for all of us today at the table, I believe. All of us are very proud of the work that we're doing. We have a big, young population that is eager to learn. I'll just give an example.

When we started the Women Elevate program at our UNESCO center, it showed us how eager those students are. The goal of Women Elevate is to empower ladies globally in AI by offering a training program for 25,000 ladies over three years. In just the past year, we managed to finish 6,000. The number is not important; what's actually important is the success rate of those students in the program. This is a fully online program. We provide mentorship, of course, and we provide support, and it finishes with a Microsoft AI-900 certificate. This is 26 hours of training, about five to six weeks. More than 89% of the students are finishing the courses and getting the certificate.

Now 6,000 of them have done the program, and the majority of them got the certificate. We're talking about more than 86 countries this program covered. And we believe such programs, and many other programs globally, will be able to make a dent and change the future for so many of our citizens, especially for women, and I mean it, for the Global South. In the Global South, we look at technology differently than the northern part. Many of our colleagues and sisters look at IT and STEM overall as the go-to major and the go-to place to learn and get equipped in technology. Therefore, you will see that we have 29,000 ladies registered in the program. Can you imagine that 29,000 ladies, just since June, want to continue and learn in this program? We will hopefully be able to cater to this demand, finish this target, and move to new targets as we go.

Not only that, but we also twisted the program a bit by offering it to public servants. For now, unfortunately, we are only offering it to public servant ladies. So, for example, in Kenya, Philip managed to get the majority of his team and the working groups trained; more than 300 ladies are now already trained in foreign affairs in Kenya. That’s what we want to see in delegations and many other programs, and that’s what we are hoping to achieve in the next couple of years. So I’m very excited and happy to see that dream come true, and we are in a network today where we will share programs, and hopefully more and more successes. We will share this story as we go with our colleagues.

Thank you.

Mehdi Snene

Dr. Abdurrahman, thank you so much. Those are impressive numbers you’ve articulated there, and I’m happy to count on your support. As I said, this is a member-state-led initiative, and it’s proposing centers, as our officials from India already expressed. One of the first centers to join, thanks to Professor Ravi, a colleague now on the scientific panel, is IIT Madras. You took the initiative of starting that, so you probably really see the outcome and the value of it. Firstly, as a professor at IIT Madras in India, and secondly, as a scientist joining the UN scientific panel, what do you really expect from the network? How do you see the value of that network?

Balaraman Ravindran

Great. So first off, I’m super thrilled that we are here now, that we have actually gotten this moving. For India, we are a country in multiple parts: some parts of the country have a lot of talent, and in other parts we really have to start building our own networks for doing this kind of capacity building. So we know the difficulty and the value of making sure that the entire population is skilled, at least AI literate, and has the capacity to contribute meaningfully, both to an AI-enriched economy and to AI development going forward. And I believe this is a conversation that cannot be had within the country alone, so we really need to get everybody on board. As an academic, taking all my other hats off and just putting my teacher hat on, what I’m looking at is this: I don’t even know how to teach anymore.

So that’s the truth, right? The skilling, the learning, the mechanisms, the facilities that are available, and even the training that the children, I teach at a university, I shouldn’t call them children anymore, but anyway, that the students are going through when they come to us, it’s very different. There’s a lot more self-learning. Students are more comfortable doing things on their own, and in fact, trying to force-fit them into a classroom setting is always challenging, right? But what I’m also seeing is that everybody, everybody, wants to know AI. Everybody wants to use AI. And I think that’s correct. Not because I want more people to do AI research; I’m not looking for more grad students or research assistants, but because every walk of life is going to get influenced by AI.

So when we talk about capacity building in AI, it is not just capacity to do AI better, but capacity to use AI to do whatever you want to do better. And that, I think, is a global imperative. Everybody should know how to use this technology so that as a planet, as humankind, we are able to jointly elevate our worth. And so, as Professor Bengio was saying in the morning, we want everybody at the table, and nobody as the dinner. That was a very provocative statement; it leaves a powerful image in your mind. But I think that’s important, and what we are doing here is great for that. And now, putting on my panelist hat. Do I have a couple more minutes for the panelist hat?

Do I have a couple more minutes for the panelist hat? Okay, I’ll take that as a yes. I was not ready for that question; I’m not supposed to answer the questions. Sure. So from the viewpoint of the scientific panel: the whole idea behind the scientific panel is to provide an evidence-driven, science-based approach to the state of AI, the impact of AI, and the potential progress of AI in the coming times. In that sense, unless we have meaningful engagement with the global majority, with everyone in the globe, it’s going to be futile to say that the panel speaks for the world at large. And for us to have that conversation, we need to make sure there is a sufficient amount of…

expertise, sufficient capacity around the globe, to engage in that conversation. So I think that is important. I’m pretty sure the panel had a tough time finding enough representation from the Global South. Thank you so much. That you can answer. We need to get that. Yeah, true.

Mehdi Snene

Thank you so much, Professor Ravi, and thanks for your kind words. So we started with Saudi Arabia and then India. Saudi Arabia offered the first center, and then Senegal offered the second center and hosted the second meeting of the network, which happened at the end of January in Dakar, and our host from then is with us today. So, Seydina, you are a former member of the UN Secretary-General’s High-Level Advisory Body on AI, and among its recommendations was this network of capacity building. I recognize some other HLAB members sitting in the room, so they will be watching you closely. I’ll give you my microphone, no worries. I’ll give you mine.

So when you made this recommendation, you had the best view on what to expect from the network. We’ll keep it short, because we are running out of time. But please give us more clarification on the initial idea, the current implementation, and where we are heading.

Seydina Moussa Ndiaye

Thank you, Mehdi. I’m very excited to be here. As you say, the network of AI capacity building centers was one of the recommendations of the HLAB. The first two were the panel and the global dialogue. And as you say, the idea of the panel was to give policymakers evidence on the opportunities and challenges of AI. And the dialogue was to bring all countries together to have this dialogue around AI. But as you know, when we have all countries, there is this gap between countries. There are some countries who understand what’s going on, and others who are here but don’t understand all the trends, all the risks, all the challenges, all the opportunities of AI.

So that’s why the network of AI capacity building centers was also proposed: to give countries the opportunity to have more understanding of AI and to build their own ecosystems. And with the network now being a reality, I think that what we have done since then is to adopt our cooperation framework. We began the work here in India with IIT Madras, and the cooperation framework was then adopted during the workshop in Dakar, where we had, I think, six centers which adopted it. And we talked about what could be the way of doing things within the network. We worked on an offer sheet so that each center that came into the network can offer some services to the network.

And we are still working on stabilizing the offer sheet. The next step will be to have a blueprint, because it’s important to also help countries which don’t have a center yet to build one. So we will have a blueprint on how to build a center; I think we asked Audet to do the first draft. And we worked on a couple of activities we can do. One of the main projects is capacity building, and I think we will work on it with Abib and so on. And we try to have all the big projects be multi-country projects, so we can work together and help each other. And the next step will be to have perhaps a third meeting.

Habib was talking about having it in Riyadh, I think. So perhaps it will be in Riyadh before the summit in July.

Mehdi Snene

Excellent. This is excellent news. So, our centers, get prepared to come to Riyadh. An exciting city; I’ve been there recently. Good. So the Kingdom of Saudi Arabia started the initiative, India hosted the first center, and Africa was strongly represented by Dakar as the second host. Among the first cohort of centers that joined is Ethiopia; Dr. Fitsum joined us at IIT Madras and then in Dakar. With that, we’re going to wrap up the session with strong enthusiasm regarding the centers. As a center that joined the initiative rather than building it, you have surely seen something within it that attracted you. I’m sorry we are running out of time, so maybe we have two minutes, but I want to hear from one of the first centers to join the cohort.

Why did you join the network?

Fitsum Assamnew Andargie

Thank you, Mehdi. I’m very happy to be here and very happy to be part of the network. So I’ll tell you how we got into this. I’m part of a network of African labs supported by IDRC called AFRD Labs. And we saw that there is a need for collaboration across Africa to develop our AI capacity. We were introduced to this new program, and we thought it would actually help us in creating that network. In fact, joining the network would help us support one another in developing our capacity and lean on our neighbors so as not to be left behind. For example, in Ethiopia, there is a huge investment in AI through the establishment of the AI Institute.

And it was responsible for developing policy, developing strategy, and also supporting capacity building. And the university itself started thinking about AI and started its own policy on the way education is delivered. An AI course was developed, and when we looked at this, we saw we are still left behind; we need to become more competitive. And for that, we need capacity building. So this network provides us an opportunity. Not only that, we can also help others, because we understand the local context and the problems we faced when trying to establish our own centers. Our discussions actually helped us understand that, oh, okay, we are in it together, so we can help each other get there.

So that’s why we were very enthusiastic, and the government was very enthusiastic about saying, okay, we should join this network. Thank you.

Mehdi Snene

Excellent. Thank you so much. So we heard a lot from the principal investigators, the designers, the participants, all the enthusiastic centers about the network. Now, in a very short answer, I’ll come back to all of you. How do you see the network in the next five years? We have at the UN the 2030 SDG goals. In 30 seconds, if you can do it: how do you see the network in 2030 contributing to that, or where do you see the network in 2030? Dr. Habib, please.

Abdurrahman Habib

Okay. Thanks, Mehdi. In 2030, I think that the dialogue is here. If the network works well, we will have a meaningful dialogue; it’s not only some countries who will lead the discussion, but all countries in the world. I believe that the network will grow exponentially. We’re a small number now, and we already grew exponentially in the past couple of months. This will continue as a trajectory for quite some time. But what’s more important is that the platform is there now, so we can share experiences and share programs in a way that we couldn’t before. And by doing so, I believe our beneficiaries, whoever they are, will receive more and more training, and more capacity will be built through the program.

Thank you.

Fitsum Assamnew Andargie

Thank you. In five years, where I see the network is that it could achieve a distribution of capacity. By that I mean not only compute power, but human power as well. We will have no one left behind, which means you have people who can do research and generate new knowledge, and people who can use AI and improve their livelihoods. That’s where I see the effect of this going for all the countries involved. Thank you.

Mehdi Snene

Dr. Ravi.

Balaraman Ravindran

The UN has the categorization of countries as to how ready they are with regard to AI. So five years from now, I wish the network will have contributed to such an extent that the UN would have to redo the categorization, so that they have to take the topmost level and start splitting it into four, as opposed to having four levels of AI readiness. So everybody is at the top level as we imagine it now, and then we’ll go on from there.

Mehdi Snene

Thank you so much, Ravi. The floor is yours, Chair MC.

Moderator

Thanks to all the panelists, and thank you for joining us for this short discussion. Thank you, panelists, for that insightful discussion. Before we proceed with the closing remarks, I’d like to remind the audience that we will soon have with us Shri A. Revanth Reddy, the Honorable Chief Minister of Telangana, a state that has emerged as one of the leaders in industrial innovation and technology-led governance. He will be presenting a keynote address on AI and cybersecurity, Harnessing AI Power in the State’s Growth. Those who would like to stay back for that session, please remain seated. Those who would like to leave after this session, please use this door. Thank you so much. That was a really insightful panel which spoke about the needs of the Global South, and capacity building for women and youth in particular.

And now, moving on, we would like to invite Mr. Vilas Dhar, President of the Patrick J. McGovern Foundation, for his keynote address. May I please request Mr. Dhar to come on stage?

Vilas Dhar

Thank you so much, and good afternoon, everyone. What an exciting conversation, one that wraps together so much of what we’ve heard over the summit. I want to acknowledge Your Excellencies, our friends here in the room, and I want to take these few minutes to share with you three ideas that directly connect to the network we’re building. The first: we’re in a time when innovation in technology seems like it’s moving so quickly, but I have to ask, where is the innovation in the institutions that will guide what the future of AI looks like? And I think there is a matter of timing that’s quite interesting. In many ways it feels like no country is as far ahead on this institutional work as they might hope, but neither is any country so far behind that they feel like they’re totally out of the race.

This network gives us the ability to build the institutions that will guide what the AI future looks like. And in that, I think, is the second opportunity. When we begin to think about what it will look like to build collaboration across countries, across sectors, across topics, I think it’s fair to say that we will not look to the private sector to define that conversation for us. It will require a different model, one that brings governments in to set policy that allows us to collaborate on the sharing of data; one built on the idea that compute, even as much as we want to talk about it being sovereign, will have regional centers of excellence, and that we need to build ways to collaborate.

And I think that’s why we share those resources, why talent flows in this modern world, and why we need the institutions that will let us share our best practices. And third, and maybe most importantly: at a time when AI governance is the topic of the moment and everybody has a new framework, a framework that’s grounded in deep process and practice but still exists only as a framework, we need the institutions that will turn frameworks into practice, that will build the muscle memory of collaboration, that will actually tell us what it looks like to sit down and negotiate the complexity of ensuring common cyber defense, of sharing data, of building algorithms around agricultural practices that transcend geographies and local weather patterns, that allow us to abstract the underlying knowledge that drives these algorithmic designs and make sure that we can apply it in each place as needed.

That governance is a matter of muscle memory. It’s a matter of practice, and it’s a matter of choice. Now, these are the three observations that guided why we came to the original idea of creating this network to begin with. I want to acknowledge my colleagues here in the front row from the UN Secretary-General’s High-Level Advisory Body, a group that came together with scientific expertise and policy expertise from around the world to set forward a set of recommendations that didn’t just focus on capacity building, but also on the frameworks of global governance at scale. And I want to acknowledge the countries that led on the Global Digital Compact, the first major new multilateral institutional framework for how we might think about issues of interconnectedness in a digital world.

And I want to acknowledge the countries that came forward to really put this initiative together, starting first, of course, with our dear friends from Saudi Arabia and from Kenya, with the incredible work of India here, and Senegal, and the work that will continue. But I want to acknowledge that even when it often feels like this work happens in abstraction, in international agreements, in national coordination, at the heart of where this work happens is the digital world. And I want to acknowledge the people: the scientists, the civil society advocates, the private sector entrepreneurs who are building this at scale. And so let me conclude on this point: even as we come to the end of this incredible summit, we’ve heard proclamations from the stage and, maybe more quietly, behind closed doors, the work that happens when people come together to ask a simple question: how can I help?

How can I be involved? That we ensure that we open the doors of transparency, that we allow for participatory mechanisms, that we hold to not just our values around what technology should look like, but what our society should look like as it enables these technologies. That we continue to enforce a basic adherence to questions like: are we ensuring diversity in participation? Are we ensuring that the next time we hold a conversation like this, we’ll see an equal number of men and women leading centers around the world on AI? That the students represented at institutions like this show a diversity of thought? That we’re investing in the rights and norms and values and principles that should guide international collaboration?

And that centers like the ones represented here today will be the vanguard of a global network that sits above and beyond where private sector innovation and frontier models sit, and instead innovates towards the kind of society that we all aspire to. One where these tools are used to enable our common purpose. One where India leads, but so too does Senegal, and Kenya. So does Trinidad. So does Chad. So does the United States. And so does Saudi Arabia. A world where we come together to define what our common vision might look like when AI enables our very best. I want to thank you all. I want to thank the incredible center chairs that we have.

And I want to see us all come together in this work. Thank you.

Moderator

Thank you, Mr. Dhar, for those powerful reflections, particularly your emphasis on the need for diversity in participation. I’m now very excited to announce our final speaker, Her Excellency Ms. Anne Marie Engtoft Meldgaard, the Tech Ambassador from Denmark. I would also like to make a short announcement that there will be a short intervention after this by the Ambassador for Technology and Innovation from the Government of Brazil, Mr. Eugenio Garcia. But for now, it is my honor to welcome Ms. Meldgaard.

Anne Marie Engtoft Meldgaard

Good afternoon, everyone. Vilas, you’re such a hard act to follow; I love this idea of muscle memory. Let me congratulate the four gentlemen who were on the stage before. I am so impressed, and I’m almost a little scared at the scale of the progress that you have already created with this network in such a short amount of time. In my home region, in the European Union and in Europe, it would take us a little longer: first we find the format and the framework, then we make it into law, and maybe in a few years we would actually be able to do what you’ve been doing in such a short amount of time. Congratulations. The global digital divide, which we’ve been speaking about at length this last week, is still a huge challenge to the global dissemination, to a true democratization of this technology, to meaningful access around AI. When 34 countries are the only ones that have the world’s compute, it becomes really, really challenging.

But what I think this network is doing is shining a light that goes beyond the traditional divides that we see in infrastructure. It is about upskilling and reskilling. And I actually believe that we have more in common between the Global North and the Global South, and that we can learn a lot more from each other, when it comes to upskilling and reskilling. That’s why this network is such an important and, I think, landmark piece of the AI puzzle to be solved. I want to end, and I want to make this short, on why I think this is important. A dear friend of mine has a framework for talking about meaningful coexistence with technology.

And it requires four things, four ingredients. First of all, identity: how to remain human in a technological world. It seems like a stupid, obvious question, right? But I think many of us are feeling the sense that I’m losing a little bit of me being a human being. My identity as an individual, as a Dane, as a woman, whatever your identity might be, in a world where technology is taking over: how to make that persistent, so that we keep that sense of identity. That is part of being skilled to take the right decisions. The second one is around community.

In a time of increasing technology, we need more community, not less. This gathering could have been a Zoom meeting, but nevertheless, thousands of us traveled from all over the world, spending time in here with too much air conditioning and out there with too much traffic. Why? Because of the human connection. Because of the impromptu meetings, the inspiring speeches, but also the people you meet when you’re in the coffee line. Those are inspiring, and that community is being built, and that’s why these AI summits work. That’s why the communities that we’re part of cannot solely be in a digital world; they need to be present. Then there’s agency.

In a more agentic world, we need more agency, not less. I think for many of the people that you meet, maybe your families, your communities, the citizens that you represent if you’re a lawmaker and policymaker, the feeling of agency, of actually having a say in how this is unfolding, is minimized. And this is another place where reskilling and skilling come in: having the right tools to be part of that. And then finally, purpose. How often do we ask ourselves, what is the purpose of this technology? There’s a sense that I would love for the AI to empty the dishwasher while I write poetry and play with my kids. But right now, we’re on a trajectory where I am emptying the dishwasher while the AI is playing with my kids and writing poetry.

If we do not insist on asking the questions around what the purpose of that technology is, and if we do not skill our citizens, ourselves, in being able to ask what we collectively want, what we want out of this technology, we’re going to get technology that we serve, rather than the other way around. And so, congratulations on this incredible network. I hope to be a strong partner of it, because right now you are shining a light on a necessary piece of a more meaningful coexistence with technology. Thank you.

Moderator

Thank you, Excellency, and it’s always a pleasure to see a woman in the room speaking on this subject. I’d now like to request Ambassador Garcia to quickly make his intervention. Thank you.

Eugenio Garcia

Thank you. I’ll be very brief, since I was not in the program, but just to say that Brazil fully supports this global network of the United Nations on AI capacity building. As is very well known, and as was recalled yesterday, President Lula of Brazil mentioned specifically in his statement that the role of the United Nations is key for international governance of AI, and that we need to come to the defense of the multilateral system. It is important that we do this together, so we’ll be working on it. We have two institutions, two universities from Brazil, that are already joining this network: one from the northeast of Brazil, the Federal University of Pernambuco, and one from the south of Brazil, the Federal University of Rio Grande do Sul.

So these two institutions are already collaborating with the network. Of course, maybe in the future others could also join, but just to say that this network will complement very well the AI track of the Global Digital Compact, both the scientific panel and the global dialogue. And if we can strengthen multilateralism, I think that’s the way to go, and you can count on our support. Thank you so much.

Moderator

Thank you, Ambassador. In the interest of time, I’d just like to thank the speakers, the panelists, and the audience. I hope you enjoyed this insightful session, and we look forward to more news on this network. Thank you, everyone. We now move on to the next session. Thank you, speakers. May I remind the audience that we now have with us Shri A. Revanth Reddy, the Honorable Chief Minister of Telangana, for a keynote address on AI and cybersecurity, Harnessing AI Power in the State’s Growth. We would encourage the audience to please stay back for the session. Those who choose to leave may please do so through the door on my left.

Thank you very much.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S. Krishnan
2 arguments · 164 words per minute · 144 words · 52 seconds
Argument 1
AI should be taught across all university courses and to school children from third grade to ensure inclusive AI literacy
EXPLANATION
The speaker advocates for comprehensive AI education integration across all educational levels. This includes making AI part of every university course regardless of the field of study, and introducing AI education to children as early as third grade to ensure widespread AI literacy.
EVIDENCE
The higher education department is working to ensure AI is taught across all courses in all universities and institutions. The school education department has announced as policy that AI would be taught to school children from class three/third grade.
MAJOR DISCUSSION POINT
Comprehensive AI education integration
AGREED WITH
Balaraman Ravindran, Abdurrahman Habib
DISAGREED WITH
Balaraman Ravindran
Argument 2
Industry bodies are working on retraining programs to help existing workforce adapt to AI
EXPLANATION
The speaker emphasizes that beyond educating new generations, there are active efforts to retrain current workers to adapt to AI technologies. This ensures that those already in the workforce are not left behind in the AI transition.
EVIDENCE
Industry bodies are working on retraining through various departments to ensure those who have already joined the work stream are retrained for AI adaptation.
MAJOR DISCUSSION POINT
Workforce retraining for AI adaptation
Abdurrahman Habib
3 arguments · 152 words per minute · 817 words · 321 seconds
Argument 1
Women Elevate program successfully trained 6,000 women globally with 89% completion rate, demonstrating high demand for AI education
EXPLANATION
The speaker presents concrete evidence of successful AI capacity building through a specific program targeting women globally. The high completion rate and scale demonstrate both the effectiveness of the program and the strong demand for AI education among women worldwide.
EVIDENCE
Women Elevate program aimed to train 25,000 women over three years, completed training for 6,000 women in the past year with 89% completion rate across 86 countries. The program includes 26 hours of training over 5-6 weeks ending with Microsoft AI 900 certification. 29,000 women have registered for the program.
MAJOR DISCUSSION POINT
Successful AI education programs for women
AGREED WITH
S. Krishnan, Balaraman Ravindran
DISAGREED WITH
Balaraman Ravindran
Argument 2
Saudi Arabia and Kenya initiated the network concept during the General Assembly to build global cooperation on AI capacity building
EXPLANATION
The speaker explains the origins of the global AI capacity building network, crediting Saudi Arabia and Kenya with initiating this collaborative effort. The initiative emerged from recognition that capacity building requires collective investment and shared resources rather than scattered individual efforts.
EVIDENCE
Multiple meetings were held with various countries, with Saudi Arabia and Kenya (specifically Ambassador Philip) organizing the first meeting. The speaker notes it was challenging initially as it wasn’t part of official gatherings, but they believed capacity building needed to be a network effort.
MAJOR DISCUSSION POINT
Genesis of global AI capacity building network
AGREED WITH
Amit Shukla, Seydina Moussa Ndiaye, Mehdi Snene, Eugenio Garcia
Argument 3
The network should contribute to meaningful global AI dialogue where all countries participate, not just some leading the discussion
EXPLANATION
The speaker envisions the network’s future impact as democratizing AI governance discussions globally. Rather than having only certain countries dominate AI policy conversations, the network should enable all countries to participate meaningfully in shaping AI’s future.
EVIDENCE
The speaker projects that by 2030, if the network works well, there will be meaningful dialogue where it’s not only some countries leading the discussion, but all countries in the world participating.
MAJOR DISCUSSION POINT
Democratizing global AI governance
Balaraman Ravindran
4 arguments · 145 words per minute · 725 words · 297 seconds
Argument 1
Traditional classroom teaching methods need to adapt as students are more comfortable with self-learning in the AI era
EXPLANATION
The speaker, as an academic, observes that current students have different learning preferences and capabilities compared to previous generations. Students are more comfortable with self-directed learning, making traditional classroom-based teaching methods less effective and requiring adaptation.
EVIDENCE
The speaker notes there’s much more self-learning happening, students are more comfortable doing things on their own, and trying to force-fit them into classroom settings is challenging.
MAJOR DISCUSSION POINT
Evolution of teaching methods in AI era
DISAGREED WITH
S. Krishnan
Argument 2
Every walk of life will be influenced by AI, so capacity building should focus on using AI to do any job better, not just AI research
EXPLANATION
The speaker argues for a broad approach to AI capacity building that goes beyond training AI researchers or specialists. Since AI will impact all professions and activities, the focus should be on teaching people how to use AI tools to enhance their existing work and capabilities.
EVIDENCE
The speaker states that everybody wants to know and use AI, and emphasizes that capacity building in AI is not just about doing AI better, but using AI to do whatever you want to do better, which is a global imperative.
MAJOR DISCUSSION POINT
Broad-based AI capacity building approach
AGREED WITH
S. Krishnan, Abdurrahman Habib
DISAGREED WITH
Abdurrahman Habib
Argument 3
Global representation is essential for the UN scientific panel to meaningfully engage with worldwide AI expertise
EXPLANATION
The speaker explains that the UN scientific panel on AI needs diverse global participation to provide credible, evidence-based guidance on AI’s impact and progress. Without sufficient representation from the global south and other regions, the panel cannot claim to speak for the world.
EVIDENCE
The speaker notes that the scientific panel aims to provide evidence-driven, science-based approaches to AI’s state and impact, but unless there’s meaningful engagement with the global majority, it would be futile to claim the panel speaks for the world. The panel had difficulty finding enough representation from the global south.
MAJOR DISCUSSION POINT
Need for global representation in AI governance
Argument 4
The network should help countries advance their AI readiness levels so significantly that UN categorization systems need updating
EXPLANATION
The speaker sets an ambitious vision for the network’s impact over five years. The goal is to elevate all countries’ AI capabilities to such an extent that current international classification systems for AI readiness become obsolete and need to be redesigned with higher standards.
EVIDENCE
The speaker references the UN’s categorization of countries by AI readiness, and envisions that in five years the network would contribute to such progress that the UN would need to redo the categorization: instead of four levels of readiness, the current top level alone would need to be split into four parts.
MAJOR DISCUSSION POINT
Ambitious goals for global AI capacity advancement
Amit Shukla
2 arguments · 140 words per minute · 440 words · 188 seconds
Argument 1
The network aims to address the AI capacity divide and ensure equitable sharing of AI benefits globally
EXPLANATION
The speaker identifies a fundamental problem where only countries with existing AI capabilities can fully benefit from AI technology, creating an unfair divide. The network is positioned as a solution to ensure AI benefits are shared more equitably across all nations, preventing AI from widening global inequalities.
EVIDENCE
The speaker notes that only countries with AI capabilities can reap actual AI benefits to their fullest potential, and warns that without addressing this, AI could create the widest unfathomable divide among countries. Countries from the global south face resource and access constraints that inhibit their AI development.
MAJOR DISCUSSION POINT
Addressing global AI inequality
AGREED WITH
Fitsum Assamnew Andargie, Anne Marie Engtoft Meldgaard, Mehdi Snene
Argument 2
India’s ITEC program has trained thousands of officials from 160 countries since 1964, offering 10,000 annual training opportunities
EXPLANATION
The speaker presents India’s long-standing commitment to international capacity building through the ITEC program as evidence of their experience and capability in this area. This program demonstrates India’s established infrastructure and track record for training international participants.
EVIDENCE
The ITEC program has trained thousands of officials from 160 countries since 1964, offering annually around 10,000 fully funded in-person training opportunities across 400 courses at 100 eminent institutes in India. Some courses are already AI-focused with plans to expand further.
MAJOR DISCUSSION POINT
India’s established capacity building infrastructure
Seydina Moussa Ndiaye
2 arguments · 95 words per minute · 407 words · 255 seconds
Argument 1
The network was recommended by the UN High-Level Advisory Body to help countries understand AI trends, risks, and opportunities
EXPLANATION
The speaker explains the institutional origins of the network within UN governance structures. The network was specifically designed to address the knowledge gap that prevents some countries from meaningfully participating in global AI discussions and decision-making processes.
EVIDENCE
The network was one of the recommendations of the UN High-Level Advisory Body on AI (HLAB), alongside the scientific panel and the global dialogue. It was proposed to help countries that don’t understand AI trends, risks, challenges, and opportunities to build their own ecosystems.
MAJOR DISCUSSION POINT
UN institutional framework for AI governance
AGREED WITH
Amit Shukla, Abdurrahman Habib, Mehdi Snene, Eugenio Garcia
Argument 2
A cooperation framework has been adopted with centers offering services through an offer sheet system
EXPLANATION
The speaker describes the practical operational structure that has been developed for the network. This includes formal frameworks for cooperation and systematic ways for member centers to contribute their capabilities and resources to the collective effort.
EVIDENCE
The cooperation framework was adopted during a workshop in Dakar with six centers participating. They worked on an offer sheet system where each center can offer services to the network, and are developing a blueprint to help countries without centers to build them.
MAJOR DISCUSSION POINT
Operational structure of the network
Fitsum Assamnew Andargie
2 arguments · 101 words per minute · 365 words · 214 seconds
Argument 1
The network provides opportunity for countries to help each other while understanding local contexts and shared challenges
EXPLANATION
The speaker emphasizes the mutual benefit aspect of the network, where countries can both receive help and provide assistance based on their experiences. The shared understanding of local contexts and common challenges makes the collaboration more effective than external assistance alone.
EVIDENCE
The speaker mentions being part of AFRD Labs network in Africa and seeing the need for collaboration across Africa. Ethiopia has made huge investments in AI through establishing an AI Institute, and they can help others because they understand the local context and problems faced when establishing centers.
MAJOR DISCUSSION POINT
Mutual assistance and local context understanding
Argument 2
By 2030, the network should contribute to redistributing both compute power and human capacity globally
EXPLANATION
The speaker envisions the network’s long-term impact as fundamentally changing the global distribution of AI capabilities. This includes not just technical infrastructure (compute power) but also human expertise, ensuring no region or country is left behind in AI development.
EVIDENCE
The speaker projects that in five years, the network will achieve distribution of capacity including both compute power and human power, with people able to do research, generate new knowledge, and use AI to develop their livelihoods.
MAJOR DISCUSSION POINT
Global redistribution of AI capabilities
AGREED WITH
Amit Shukla, Anne Marie Engtoft Meldgaard, Mehdi Snene
Vilas Dhar
3 arguments · 203 words per minute · 999 words · 295 seconds
Argument 1
Innovation in institutions that guide AI’s future is needed, not just technological innovation
EXPLANATION
The speaker argues that while technological advancement in AI is rapid, there’s a critical gap in developing the institutional frameworks needed to govern and guide AI development. The focus should shift to creating innovative governance structures and collaborative mechanisms.
EVIDENCE
The speaker notes that while innovation in technology seems to be moving quickly, there’s a question about where the innovation is in institutions that will guide AI’s future. No country is far ahead on institutional work, but neither is any country so far behind that they’re out of the race.
MAJOR DISCUSSION POINT
Need for institutional innovation in AI governance
Argument 2
AI governance requires muscle memory, practice, and choice – turning frameworks into actual practice
EXPLANATION
The speaker emphasizes that effective AI governance cannot be achieved through theoretical frameworks alone. It requires practical experience, repeated collaboration, and the development of institutional habits that translate policy frameworks into real-world implementation.
EVIDENCE
The speaker mentions that at a time when everyone has new AI governance frameworks that exist only as frameworks, there’s a need for institutions that turn frameworks into practice, build muscle memory of collaboration, and handle complex negotiations around cyber defense, data sharing, and algorithm development.
MAJOR DISCUSSION POINT
Practical implementation of AI governance
Argument 3
The network should ensure diversity in participation, including equal gender representation in AI center leadership
EXPLANATION
The speaker calls for the network to actively promote inclusive participation across multiple dimensions. This includes ensuring that future AI governance structures reflect diverse perspectives and that women have equal representation in leadership positions within the network.
EVIDENCE
The speaker asks whether the network ensures diversity in participation and specifically questions if the next conversation will see equal numbers of men and women leading AI centers around the world, along with diversity of thought in student representation.
MAJOR DISCUSSION POINT
Ensuring diversity and inclusion in AI governance
AGREED WITH
Moderator, Anne Marie Engtoft Meldgaard
Eugenio Garcia
2 arguments · 120 words per minute · 207 words · 102 seconds
Argument 1
Brazil supports the network with two universities already joining: Federal University of Pernambuco and Federal University of Rio Grande do Sul
EXPLANATION
The speaker announces Brazil’s concrete commitment to the network by contributing institutional participants. This demonstrates Brazil’s support for multilateral AI governance and capacity building efforts through academic institutions from different regions of the country.
EVIDENCE
Two Brazilian institutions are already joining the network: Federal University of Pernambuco from northeast Brazil and Federal University of Rio Grande do Sul from southern Brazil, with possibility of others joining in the future.
MAJOR DISCUSSION POINT
Brazil’s institutional commitment to the network
AGREED WITH
Amit Shukla, Abdurrahman Habib, Seydina Moussa Ndiaye, Mehdi Snene
Argument 2
The network complements the AI track of the Global Digital Compact and strengthens multilateralism
EXPLANATION
The speaker positions the network within broader international governance frameworks, specifically linking it to the Global Digital Compact. This demonstrates how the network fits into larger multilateral efforts to govern digital technologies and AI at the global level.
EVIDENCE
The speaker mentions that President Lula specifically stated that the UN’s role is key for international AI governance and the need to defend the multilateral system. The network complements the AI track of the Global Digital Compact, including the scientific panel and global dialogue.
MAJOR DISCUSSION POINT
Integration with multilateral governance frameworks
Anne Marie Engtoft Meldgaard
2 arguments · 195 words per minute · 828 words · 254 seconds
Argument 1
Denmark recognizes the network’s importance in addressing global digital divide beyond traditional infrastructure divides
EXPLANATION
The speaker acknowledges that while infrastructure-based digital divides remain significant (with only 34 countries having substantial compute resources), the network addresses a different but equally important dimension through skills and capacity building. This approach can bridge gaps that pure infrastructure investment cannot address.
EVIDENCE
The speaker notes that when 34 countries are the only ones with the world’s compute, it becomes challenging, but the network shines a light beyond traditional infrastructure divides through upskilling and reskilling, where global north and south have more in common and can learn from each other.
MAJOR DISCUSSION POINT
Beyond infrastructure-focused digital divide solutions
AGREED WITH
Amit Shukla, Fitsum Assamnew Andargie, Mehdi Snene
Argument 2
Meaningful coexistence with AI technology requires maintaining human identity, community, agency, and purpose
EXPLANATION
The speaker presents a comprehensive framework for human-AI interaction that goes beyond technical skills to encompass fundamental human needs. This framework emphasizes that successful AI integration requires preserving essential human elements while adapting to technological change.
EVIDENCE
The speaker outlines four requirements: identity (remaining human in a technological world), community (needing more human connection as technology increases), agency (having more control in an agentic tech world), and purpose (asking what we want from technology rather than serving it).
MAJOR DISCUSSION POINT
Human-centered approach to AI integration
AGREED WITH
Moderator, Vilas Dhar
Moderator
2 arguments · 130 words per minute · 579 words · 266 seconds
Argument 1
The session emphasizes democratizing access to AI resources while keeping humans at the center
EXPLANATION
The moderator highlights key themes from the discussion, specifically noting the importance of making AI accessible to all while maintaining human-centered approaches. This reflects the overall direction of the capacity building network discussions.
EVIDENCE
Moderator’s summary remarks about ‘democratizing access to AI resources as well as keeping humans at the center’
MAJOR DISCUSSION POINT
Human-centered AI democratization
AGREED WITH
Anne Marie Engtoft Meldgaard, Vilas Dhar
Argument 2
The panel discussion addressed needs of the Global South, with particular focus on capacity building for women and youth
EXPLANATION
The moderator summarizes the key outcomes of the panel, emphasizing how the discussion specifically addressed capacity building needs in developing countries. The focus on women and youth highlights the importance of inclusive approaches to AI education and training.
EVIDENCE
Moderator’s closing remarks noting the panel ‘spoke about the needs of the Global South, capacity building for women and youth particularly’
MAJOR DISCUSSION POINT
Inclusive capacity building for underrepresented groups
Mehdi Snene
3 arguments · 145 words per minute · 843 words · 348 seconds
Argument 1
The Global Network for Centers of Exchange and Cooperation on AI Capacity Building was initiated by Saudi Arabia and Kenya during the General Assembly
EXPLANATION
Mehdi Snene explains the chronological origins of the network, crediting Saudi Arabia and Kenya with the initial call for member states to join efforts in building global AI capacity. This initiative specifically aims to ensure no one is left behind in AI development, particularly focusing on building national AI strategies and sovereign capabilities.
EVIDENCE
The initial call came from the Kingdom of Saudi Arabia and Kenya during the General Assembly, calling for member states to join efforts to build a global network on AI capacity building that could ‘leave no one behind’
MAJOR DISCUSSION POINT
Genesis and leadership of global AI capacity building network
AGREED WITH
Amit Shukla, Abdurrahman Habib, Seydina Moussa Ndiaye, Eugenio Garcia
Argument 2
The network has grown to include 14 countries with nominated institutions across diverse regions
EXPLANATION
Mehdi Snene presents the concrete progress of the network, showing its global reach and diverse participation. The list demonstrates representation from multiple continents and development levels, indicating successful international cooperation in AI capacity building.
EVIDENCE
14 countries have nominated institutions: Brazil, China, Ethiopia, Guinea, India, Kazakhstan, Kenya, Rwanda, Saudi Arabia, Senegal, Slovakia, South Africa, Trinidad and Tobago, and Vietnam
MAJOR DISCUSSION POINT
Global participation and institutional commitment
Argument 3
The network aims to address the gap between countries that understand AI trends and those that need capacity building support
EXPLANATION
Mehdi Snene identifies a fundamental challenge in global AI governance where some countries have sufficient understanding to participate meaningfully in AI discussions while others lack the necessary knowledge base. The network is designed to bridge this knowledge and capacity gap.
EVIDENCE
Reference to gap between countries that ‘understand what’s going on and others who are here but don’t understand all the trends, all the risks, all the challenges, all the opportunities of AI’
MAJOR DISCUSSION POINT
Addressing AI knowledge and capacity gaps globally
AGREED WITH
Amit Shukla, Fitsum Assamnew Andargie, Anne Marie Engtoft Meldgaard
Agreements
Agreement Points
Comprehensive AI education integration across all levels
Speakers: S. Krishnan, Balaraman Ravindran, Abdurrahman Habib
AI should be taught across all university courses and to school children from third grade to ensure inclusive AI literacy
Every walk of life will be influenced by AI, so capacity building should focus on using AI to do any job better, not just AI research
Women Elevate program successfully trained 6,000 women globally with 89% completion rate, demonstrating high demand for AI education
All speakers agree that AI education should be comprehensive, inclusive, and accessible to everyone regardless of their field or background, with evidence showing strong demand and success in such programs
Global cooperation and network approach to AI capacity building
Speakers: Amit Shukla, Abdurrahman Habib, Seydina Moussa Ndiaye, Mehdi Snene, Eugenio Garcia
The network aims to address the AI capacity divide and ensure equitable sharing of AI benefits globally
Saudi Arabia and Kenya initiated the network concept during the General Assembly to build global cooperation on AI capacity building
The network was recommended by the UN High-Level Advisory Body to help countries understand AI trends, risks, and opportunities
The Global Network for Centers of Exchange and Cooperation on AI Capacity Building was initiated by Saudi Arabia and Kenya during the General Assembly
Brazil supports the network with two universities already joining: Federal University of Pernambuco and Federal University of Rio Grande do Sul
Multiple speakers emphasize the critical need for international cooperation through the network to address AI capacity gaps and ensure equitable global AI development
Addressing global AI inequality and digital divides
Speakers: Amit Shukla, Fitsum Assamnew Andargie, Anne Marie Engtoft Meldgaard, Mehdi Snene
The network aims to address the AI capacity divide and ensure equitable sharing of AI benefits globally
By 2030, the network should contribute to redistributing both compute power and human capacity globally
Denmark recognizes the network’s importance in addressing global digital divide beyond traditional infrastructure divides
The network aims to address the gap between countries that understand AI trends and those that need capacity building support
Speakers consistently recognize the urgent need to bridge AI capacity gaps between developed and developing nations to prevent further global inequality
Human-centered approach to AI development
Speakers: Moderator, Anne Marie Engtoft Meldgaard, Vilas Dhar
The session emphasizes democratizing access to AI resources while keeping humans at the center
Meaningful coexistence with AI technology requires maintaining human identity, community, agency, and purpose
The network should ensure diversity in participation, including equal gender representation in AI center leadership
Speakers agree that AI development must prioritize human values, maintain human agency, and ensure inclusive participation while democratizing access
Similar Viewpoints
Both speakers envision the network fundamentally transforming global AI governance by elevating all countries’ capabilities and enabling universal participation in AI policy discussions
Speakers: Balaraman Ravindran, Abdurrahman Habib
The network should help countries advance their AI readiness levels so significantly that UN categorization systems need updating
The network should contribute to meaningful global AI dialogue where all countries participate, not just some leading the discussion
Both speakers emphasize the mutual benefit and collaborative nature of the network, where countries can both contribute and receive support based on shared experiences and local understanding
Speakers: Fitsum Assamnew Andargie, Abdurrahman Habib
The network provides opportunity for countries to help each other while understanding local contexts and shared challenges
Saudi Arabia and Kenya initiated the network concept during the General Assembly to build global cooperation on AI capacity building
Both Indian officials highlight their country’s established infrastructure and commitment to capacity building, both domestically and internationally
Speakers: S. Krishnan, Amit Shukla
Industry bodies are working on retraining programs to help existing workforce adapt to AI
India’s ITEC program has trained thousands of officials from 160 countries since 1964, offering 10,000 annual training opportunities
Unexpected Consensus
Need for institutional innovation over technological innovation
Speakers: Vilas Dhar, Balaraman Ravindran
Innovation in institutions that guide AI’s future is needed, not just technological innovation
Traditional classroom teaching methods need to adapt as students are more comfortable with self-learning in the AI era
Unexpected consensus that the real challenge isn’t technological advancement but rather developing new institutional frameworks and teaching methods to govern and implement AI effectively
Practical implementation focus over theoretical frameworks
Speakers: Vilas Dhar, Seydina Moussa Ndiaye
AI governance requires muscle memory, practice, and choice – turning frameworks into actual practice
A cooperation framework has been adopted with centers offering services through an offer sheet system
Unexpected alignment on the need to move beyond theoretical AI governance frameworks to practical, operational systems that can be implemented and practiced
Gender equality as central to AI capacity building
Speakers: Abdurrahman Habib, Vilas Dhar, Moderator
Women Elevate program successfully trained 6,000 women globally with 89% completion rate, demonstrating high demand for AI education
The network should ensure diversity in participation, including equal gender representation in AI center leadership
The panel discussion addressed needs of the Global South, with particular focus on capacity building for women and youth
Unexpected strong consensus across different speakers on prioritizing women’s participation in AI, going beyond general inclusion to specific programs and leadership representation
Overall Assessment

The discussion reveals remarkable consensus on the need for global AI capacity building through international cooperation, comprehensive education, and human-centered approaches. All speakers agree on addressing AI inequality, the importance of the network initiative, and the need for inclusive participation.

Very high level of consensus with no significant disagreements identified. The implications are positive for the network’s future, suggesting strong international support and shared vision for equitable AI development globally. This consensus provides a solid foundation for implementing the Global Network for Centers of Exchange and Cooperation on AI Capacity Building.

Differences
Different Viewpoints
Approach to AI education delivery methods
Speakers: Balaraman Ravindran, S. Krishnan
Traditional classroom teaching methods need to adapt as students are more comfortable with self-learning in the AI era
AI should be taught across all university courses and to school children from third grade to ensure inclusive AI literacy
While both support comprehensive AI education, Ravindran emphasizes adapting teaching methods away from traditional classrooms toward self-learning, while Krishnan focuses on systematic integration across all formal educational institutions and levels
Scope and focus of AI capacity building
Speakers: Balaraman Ravindran, Abdurrahman Habib
Every walk of life will be influenced by AI, so capacity building should focus on using AI to do any job better, not just AI research
Women Elevate program successfully trained 6,000 women globally with 89% completion rate, demonstrating high demand for AI education
Ravindran advocates for broad-based AI literacy across all professions and activities, while Habib demonstrates success with targeted, specialized training programs for specific demographics
Unexpected Differences
Timeline and pace expectations for institutional development
Speakers: Anne Marie Engtoft Meldgaard, Abdurrahman Habib
Denmark recognizes the network’s importance in addressing global digital divide beyond traditional infrastructure divides
The network should contribute to meaningful global AI dialogue where all countries participate, not just some leading the discussion
Overall Assessment

The discussion shows remarkably high consensus on goals with minor disagreements on implementation approaches. Main areas of difference include educational delivery methods, scope of capacity building programs, and expectations for institutional development timelines

Low level of disagreement with strong overall alignment on the network’s importance and objectives. Disagreements are primarily tactical rather than strategic, focusing on how to achieve shared goals rather than questioning the goals themselves. This high level of consensus suggests strong potential for successful collaboration and implementation of the global AI capacity building network

Takeaways
Key takeaways
AI education must be democratized and made inclusive, starting from elementary school (grade 3) through university level across all disciplines, not just technical fields
The Global Network of Centers for Exchange and Cooperation on AI Capacity Building has successfully launched with 14 countries participating and a cooperation framework adopted
Capacity building should focus on enabling people to use AI to improve their existing work rather than just creating AI researchers
International cooperation is essential to bridge the AI capacity divide between Global North and South, with successful models like India’s ITEC program and Saudi Arabia’s Women Elevate program demonstrating effectiveness
AI governance requires institutional innovation and ‘muscle memory’ through practice, not just frameworks – turning policy into actionable collaboration
The network aims to ensure no country is left behind in AI development, with particular emphasis on empowering women and youth in the Global South
Meaningful coexistence with AI technology requires maintaining human identity, community, agency, and purpose while building technical capacity
Resolutions and action items
A third network meeting will be held in Riyadh, Saudi Arabia before the July summit
Development of a blueprint to help countries without centers establish their own AI capacity building centers
Expansion of AI training courses under India’s ITEC program
Stabilization of the ‘offer sheet’ system where each center can offer services to the network
Implementation of multi-country collaborative projects for capacity building
Brazil’s commitment to participate through two universities: Federal University of Pernambuco and Federal University of Rio Grande do Sul
Continuation of the Women Elevate program targeting 25,000 women globally over three years
Unresolved issues
How to ensure equal gender representation in AI center leadership positions
Specific mechanisms for sharing compute resources and data across the network
Detailed implementation strategies for translating AI governance frameworks into practice
How to measure and track progress toward the 2030 goal of redistributing AI capacity globally
Funding mechanisms and sustainability models for the network’s long-term operations
Integration and coordination between different UN agency networks (UNESCO centers, this network, etc.)
Specific curriculum standards and certification processes for AI education across different countries
Suggested compromises
Recognition that different regions may need different approaches – the network should accommodate local contexts while maintaining global coordination
Balancing sovereign AI development with regional collaboration and resource sharing
Adapting traditional educational methods to accommodate students’ preference for self-learning while maintaining structured guidance
Combining online training programs with mentorship and support systems to achieve high completion rates
Focusing on practical AI application skills rather than just theoretical knowledge to meet diverse country needs
Thought Provoking Comments
Yet, only countries with AI capabilities can reap actual AI benefits to their fullest potential. We must collectively address this anomaly and ensure that the benefits of AI is equitably shared. Else, this very revolutionary technology could only bring the widest unfathomable divide among countries.
This comment reframes AI not just as a technological advancement but as a potential source of global inequality. It introduces the critical concept that AI could exacerbate existing divides rather than bridge them, challenging the often optimistic narrative around AI democratization.
This observation set the foundational premise for the entire discussion about the Global Network, establishing the urgency and moral imperative behind capacity building initiatives. It shifted the conversation from celebrating AI achievements to addressing AI equity gaps.
Speaker: Amit Shukla
I don’t even know how to teach anymore… The skilling, the learning, the mechanisms, the facilities that are available, and even the training that the children who are… students are going through when they come to us, right? It’s very different.
This vulnerable admission from an experienced academic reveals the profound disruption AI is causing to traditional educational paradigms. It acknowledges that even educators are struggling to adapt, highlighting the depth of transformation needed.
This comment humanized the capacity building challenge and validated the struggles many educators face. It shifted the discussion from abstract policy frameworks to the practical, human reality of educational transformation in the AI era.
Speaker: Balaraman Ravindran
When we talk about capacity building in AI… It is not just capacity to do AI better, but capacity to use AI to do whatever you want to do better.
This distinction fundamentally redefines what AI capacity building means, moving beyond technical AI skills to AI literacy across all domains. It broadens the scope from creating AI specialists to empowering all citizens.
This redefinition expanded the conversation’s scope and helped justify why AI education should be universal rather than specialized, influencing how other speakers framed their capacity building initiatives.
Speaker: Balaraman Ravindran
I have to ask where is the innovation in the institutions that will guide what the future of AI looks like? And I think there is a matter of timing that’s quite interesting. In many ways it feels like no country is as far ahead on this institutional work as they might hope but neither is any country so far behind that they feel like they’re totally out of the race.
This observation challenges the focus on technological innovation by highlighting the lag in institutional innovation. The timing insight suggests a unique window of opportunity where global collaboration is still possible before institutional gaps become insurmountable.
This comment shifted the discussion from technical capacity to institutional capacity, emphasizing that the real challenge isn’t just teaching AI but building the governance structures to guide AI development. It provided strategic justification for why the network initiative is timely and necessary.
Speaker: Vilas Dhar
I would love for the AI to empty the dishwasher while I write poetry and I play with my kids. But right now, we’re in a trajectory where I am emptying the dishwasher while the AI is playing with my kids and writing poetry.
This vivid metaphor crystallizes the concern about AI development priorities and human agency. It illustrates how current AI development may be automating human creativity and connection rather than mundane tasks, challenging assumptions about AI’s beneficial trajectory.
This powerful imagery provided a memorable framework for discussing AI’s purpose and direction. It reinforced the need for capacity building that includes critical thinking about AI’s role, not just technical skills, and connected to broader themes about maintaining human agency and purpose.
Speaker: Anne Marie Engtoft Meldgaard
More than 89% of the students are finishing the courses and getting the certificate… We’re talking about more than 86 countries this program covered… 29 thousand ladies registered in the program.
These concrete success metrics demonstrate that large-scale, inclusive AI education is not just theoretically possible but practically achievable. The high completion rates challenge assumptions about online learning effectiveness in diverse global contexts.
These numbers provided tangible evidence that the network’s ambitious goals are realistic, shifting the conversation from whether such initiatives can work to how they can be scaled and replicated across the network.
Speaker: Abdurrahman Habib
Overall Assessment

These key comments collectively transformed the discussion from a typical policy announcement into a nuanced exploration of AI’s societal implications. The conversation evolved through several phases: first establishing the equity imperative (Shukla), then acknowledging educational disruption (Ravindran), expanding the definition of AI literacy, highlighting institutional innovation gaps (Dhar), and finally questioning AI’s fundamental purpose and direction (Meldgaard). The concrete success stories (Habib) provided proof of concept that grounded the more philosophical discussions in practical reality. Together, these insights elevated the network launch from a bureaucratic initiative to a critical intervention in shaping AI’s global trajectory, emphasizing that capacity building must address not just technical skills but also institutional frameworks, educational paradigms, and fundamental questions about human agency in an AI-driven world.

Follow-up Questions
How to effectively measure and track the success rate and impact of AI capacity building programs across different countries and contexts?
While sharing impressive statistics about the Women Elevate program (89% completion rate, 6,000 participants from 86 countries), there’s an implicit need to develop standardized metrics for measuring success across the global network
Speaker: Abdurrahman Habib
How to adapt teaching methodologies and educational frameworks for AI in the context of changing learning patterns and student preferences?
Professor Ravindran explicitly stated ‘I don’t even know how to teach anymore’ and discussed the challenge of students being more comfortable with self-learning versus traditional classroom settings, indicating a need for research into new pedagogical approaches
Speaker: Balaraman Ravindran
How to ensure meaningful representation from the Global South in international AI governance bodies and scientific panels?
Ravindran noted that ‘the panel had a tough time finding enough representation from the global south,’ highlighting the need to research and address barriers to participation in global AI governance
Speaker: Balaraman Ravindran
How to develop and implement a standardized blueprint for establishing AI capacity building centers in countries that don’t have them yet?
Ndiaye mentioned that ‘the next step will be to have a blueprint because it’s important to help also countries which haven’t a center yet to build a center,’ indicating ongoing work needed in this area
Speaker: Seydina Moussa Ndiaye
How to effectively share and distribute not just computational power but human capacity and expertise across the global network?
Andargie emphasized the need for ‘distribution of capacity’ including both ‘compute power’ and ‘human power,’ suggesting research into mechanisms for knowledge and expertise sharing
Speaker: Fitsum Assamnew Andargie
How to redesign UN categorization systems for AI readiness as countries advance through capacity building initiatives?
Ravindran expressed hope that the network would contribute to countries advancing so much that ‘the UN would have to redo the categorization’ of AI readiness levels, implying need for dynamic assessment frameworks
Speaker: Balaraman Ravindran
How to ensure gender parity and diversity in AI center leadership and participation globally?
Dhar challenged the audience asking ‘are we ensuring that the next time we hold a conversation like this, we’ll see an equal number of men and women leading centers around the world on AI?’ indicating need for research into barriers and solutions for diversity
Speaker: Vilas Dhar
How to develop frameworks for meaningful coexistence with AI technology that preserve human identity, community, agency, and purpose?
Meldgaard outlined four key ingredients (identity, community, agency, purpose) for meaningful coexistence with technology and questioned how to maintain human elements in an increasingly technological world
Speaker: Anne Marie Engtoft Meldgaard

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Regulating Open Data: Principles, Challenges and Opportunities


Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions


The Innovation Beneath AI: The US-India Partnership powering the AI Era


Session at a glance: Summary, keypoints, and speakers overview

Summary

This panel discussion focused on the infrastructure requirements needed to support AI at scale, examining the physical foundations beneath AI development rather than just the models themselves. The conversation was moderated by Ujjwal Kumar and featured experts from venture capital, entrepreneurship, government, and technology sectors discussing the massive infrastructure buildout required for AI deployment.


The panelists emphasized that AI’s scaling depends heavily on physical infrastructure including energy systems, semiconductors, critical minerals, and data centers. Tuan Ho from X Fund highlighted the strategic vulnerability created by supply chain dependencies, particularly in rare earth magnets essential for everything from hard drives to chip manufacturing. He noted that over 90% of rare earth magnets currently come through China, creating significant risks for AI infrastructure development.


Jeff Binder discussed how AI tools are creating unprecedented opportunities for entrepreneurs, allowing them to bring products to market with significantly less capital than previously required. However, he warned of potential overbuilding in the infrastructure space, similar to the fiber buildout during the early internet boom. Prince Dhawan from REC Limited explained India’s innovative approach through the India Energy Stack, which enables programmable power and peer-to-peer energy trading, allowing data centers to source power dynamically from distributed solar installations.


Vrushali Gaud from Google outlined the company’s $15 billion commitment to India, including new subsea cables and AI hubs, emphasizing India’s potential for leapfrogging traditional infrastructure limitations. Dr. Tobias Helbig from NXP Semiconductors provided a contrasting perspective, suggesting that the current focus on centralized data centers might be analogous to IBM’s prediction of only five computers worldwide, with the real future lying in billions of edge devices requiring minimal power.


The discussion concluded with agreement that while current infrastructure investments are necessary, the ultimate transformation will likely involve decentralized, edge-based AI systems that operate more efficiently and sustainably than today’s power-hungry data centers.


Keypoints

Major Discussion Points:

AI Infrastructure Requirements and Scale: The panel extensively discussed how AI deployment at scale requires massive infrastructure buildout including energy systems, semiconductors, critical minerals, and data centers. This represents what Jensen Huang called “the largest infrastructure build-out in human history.”


US-India Strategic Partnership in Critical Supply Chains: Significant focus on the US-India critical minerals corridor, rare earth supply chains, and Google’s $15 billion commitment to India including subsea cables and AI hubs. The discussion emphasized reducing dependence on China for critical materials like rare earth magnets.


Energy Grid Transformation and Clean Power: Detailed exploration of how AI’s massive energy demands require “programmable power” and intelligent grids. India’s Energy Stack was highlighted as enabling peer-to-peer energy trading and coordination at scale to support data centers through distributed renewable sources.


Evolution from Centralized to Edge AI Computing: The panel discussed a fundamental shift from power-hungry centralized data centers to efficient edge devices, comparing it to the evolution from IBM’s “five computers” to billions of personal devices. This transition could dramatically reduce power requirements while bringing AI closer to users.


Investment Opportunities and Risks in AI Infrastructure: Discussion of where venture capital should focus, with emphasis on durable infrastructure investments over volatile GPU/model investments. The panel noted potential overbuilding risks similar to the dot-com era, but with better measurability of success/failure.


Overall Purpose:

The discussion aimed to shift focus from AI models and applications to the foundational infrastructure required to support AI at scale, exploring investment opportunities, policy frameworks, and technological innovations needed across the US-India partnership.


Overall Tone:

The tone was consistently optimistic and forward-looking throughout, with panelists expressing excitement about opportunities while acknowledging real challenges. There was strong consensus on the transformative potential of AI infrastructure, though some cautionary notes were raised about potential overbuilding and obsolescence risks. The discussion maintained a collaborative, solution-oriented atmosphere with panelists building on each other’s insights rather than disagreeing.


Speakers

Speakers from the provided list:


Ujjwal Kumar – Founder and CEO of Quantum Alliance, Co-founder of Cognosy AI (Moderator)


Tuan Ho – Unicorn founder turned venture capitalist, Partner at X Fund


Jeff Binder – Serial entrepreneur with multiple Fortune 500 exits, Harvard Venture Partners


Vrushali Gaud – Global Director of Climate Operations at Google


Prince Dhawan – IAS officer, Executive Director at REC Limited under the Ministry of Power


Tobias Helbig – VP of Innovation at NXP Semiconductors


Participant – (Role/title not specified – appears to be event organizer/host introducing the panel)


Additional speakers:


None identified beyond the provided speaker names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive panel discussion examined the critical infrastructure requirements needed to support artificial intelligence deployment at scale, moving beyond the typical focus on AI models to explore the foundational physical systems that will enable the technology’s widespread adoption. Moderated by Ujjwal Kumar, founder and CEO of Quantum Alliance and co-founder of Cognosy AI, the conversation brought together experts from venture capital, entrepreneurship, government policy, corporate strategy, and semiconductor innovation to address what Jensen Huang has called “the largest infrastructure build-out in human history.”


The Infrastructure Imperative

The panel opened with Kumar’s observation that whilst AI models receive significant attention, the underlying infrastructure requirements represent where the real opportunities and challenges lie. This infrastructure spans critical minerals, energy systems, semiconductors, data centres, and grid modernisation—all of which require unprecedented coordination and investment. Recent developments underscore this urgency, including Google’s $15 billion commitment to India featuring a gigawatt-scale AI hub in Vizag and new subsea cables connecting through Africa and Singapore-Australia routes.


Critical Supply Chain Vulnerabilities and Investment Opportunities

Tuan Ho, a unicorn founder turned venture capitalist at X Fund, provided crucial insights into the strategic vulnerabilities created by current supply chain dependencies, particularly in rare earth magnets. His firm’s investment in Vulcan Elements—now backed by $1.4 billion in funding—illustrates the scale of intervention required to address America’s rare earth magnet supply chain, where over 90% currently flows through China. This dependency creates profound risks because magnets are essential for virtually all moving components: hard drives, motors, and critically, the manufacturing of semiconductors themselves.


Ho emphasised that this represents a fundamental shift in venture capital focus, with investors now examining decades-old industries that have seen minimal innovation. Power grids that haven’t been upgraded for nearly a century, refining capacity limitations, and sustainable data centre operations all present significant opportunities. The convergence of AI infrastructure needs with geopolitical supply chain concerns creates what he described as “huge opportunity for us as investors” in building companies that address these foundational requirements.


Importantly, Ho noted that infrastructure businesses offer “more durability and clarity” regarding problem definition compared to pure AI model companies, as the underlying problems—power distribution, cooling, materials processing—remain consistent even as specific technologies evolve.


Entrepreneurial Leverage and Market Dynamics

Jeff Binder, a serial entrepreneur with multiple Fortune 500 exits now at Harvard Venture Partners, provided insights into how AI tools are transforming entrepreneurship itself. He argued that current AI capabilities give entrepreneurs “massive leverage” they previously lacked, particularly in cross-border collaboration between the US and India. AI tools are eliminating traditional barriers in front-end development, enabling entrepreneurs to deliver products with potentially “a tenth the capital” previously required.


However, Binder introduced a crucial contrarian perspective, projecting that within two years, the industry might face significant overcapacity and ROI challenges despite current concerns about power and compute shortages. This potential overbuild, paradoxically, could benefit entrepreneurs by making infrastructure resources extremely inexpensive. Importantly, he distinguished the current AI boom from the dot-com era by highlighting that “almost every aspect of artificial intelligence deployment from the foundational aspects all the way to the top of the stack are measurable,” making success and failure clearer and faster to determine.


India’s Strategic Energy Innovation

Prince Dhawan from REC Limited provided perhaps the most technically sophisticated analysis of AI’s energy requirements, introducing the concept that “AI essentially will not scale unless your power is programmable.” His central thesis identified intelligent grids, rather than chips or compute capacity, as the primary constraint for AI scaling.


Dhawan detailed India’s groundbreaking approach through the India Energy Stack, which creates interoperable digital rails enabling coordination at scale. The system allows data centres to source power dynamically from distributed sources, fundamentally changing the economics of AI infrastructure. As he explained, “individual retail households can essentially monetize their rooftop solar power by supplying to such data centers,” creating new livelihood opportunities through peer-to-peer energy trading.


The technical innovation lies in the stack’s ability to handle “standard rules for measurement, identification, and settlement all in near real-time,” enabling what Dhawan termed “intelligent electrons.” This addresses the fundamental challenge that whilst AI evolves in quarters, traditional grid infrastructure evolves over decades. Dhawan also highlighted Reliance’s commitment of “trillion dollars in the next seven years” as indicative of the scale of investment flowing into Indian infrastructure.
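The measurement-and-settlement loop described above can be made concrete with a small sketch. This is a hypothetical illustration, not the actual India Energy Stack API: all names (`Offer`, `settle`, the household identifiers and prices) are invented for the example. It shows one plausible shape of the core operation, matching a data centre’s metered demand against identified rooftop-solar offers and settling each seller, cheapest first:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    seller: str    # identified producer (e.g. a rooftop-solar household)
    kwh: float     # metered surplus energy on offer
    price: float   # asking price per kWh

def settle(demand_kwh: float, offers: list[Offer]):
    """Match a buyer's demand against offers in ascending price order.

    Returns (settlements, total_cost, unmet_kwh), where each settlement
    is a (seller, kwh_taken, amount_owed) tuple. In a real system this
    matching would run continuously against near-real-time meter data.
    """
    settlements = []
    remaining = demand_kwh
    total_cost = 0.0
    for offer in sorted(offers, key=lambda o: o.price):  # cheapest first
        if remaining <= 0:
            break
        taken = min(offer.kwh, remaining)       # take only what is needed
        settlements.append((offer.seller, taken, taken * offer.price))
        total_cost += taken * offer.price
        remaining -= taken
    return settlements, total_cost, remaining

# A data centre needs 12 kWh; three households offer metered surplus.
offers = [
    Offer("household_A", kwh=5.0, price=6.0),
    Offer("household_B", kwh=3.0, price=5.5),
    Offer("household_C", kwh=10.0, price=7.0),
]
settlements, cost, unmet = settle(12.0, offers)
```

Here the cheapest offer (household_B) is exhausted first, then household_A, with household_C covering the remainder; each household receives a settlement proportional to the energy it actually supplied. The real stack additionally handles identification and standardised measurement, which this sketch simply assumes.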


Google’s Full-Stack Infrastructure Strategy and Climate Innovation

Vrushali Gaud from Google provided insights into the company’s comprehensive approach to AI infrastructure, emphasising innovation across the entire technology stack. Google’s $15 billion commitment to India encompasses not only data centres but also networking infrastructure, including subsea cables creating global connectivity through multiple routes.


Gaud explained Google’s rationale for heavy investment in India beyond the obvious market size. India’s young, tech-eager population has demonstrated remarkable ability to leapfrog traditional development stages, as evidenced by rapid adoption of digital payments through UPI and GPay. This leapfrogging potential extends to clean energy infrastructure, where India represents one of the few places where “the math on clean energy just works.”


A significant announcement was Google’s Climate Technology Center, developed in partnership with the Office of Principal Scientific Advisory for the Government of India. This initiative focuses on three key areas: green skilling for decarbonisation careers, low-carbon materials for construction (including data centres), and sustainable aviation fuel development. Importantly, the center targets Tier 2 and Tier 3 cities and involves a “wider spread of universities” to democratise innovation opportunities and ensure contextual relevance to local conditions.


The Edge Computing Revolution

Dr. Tobias Helbig from NXP Semiconductors provided the most provocative perspective, using a historical analogy to challenge current assumptions about AI infrastructure. He recalled that in 1942, IBM’s head predicted a world market for “about five computers”—technically correct for that era’s computers, but completely missing the evolution toward billions of personal devices.


Helbig suggested that current discussions about massive, power-hungry data centres might represent a similar blind spot. He pointed to efficiency contrasts between human brains (20 watts) and flies (below 1 milliwatt for remarkable intelligence) to argue that dramatic improvements are possible. His company already demonstrates this potential: “we today have products where on whatever 10 watts or so you can run very meaningful LLMs,” enabling AI deployment at the edge rather than in centralised facilities.


This vision encompasses AI evolution “moving from, hey, I can perceive something, is it a dog, a cat, to I can think, generative AI, I can create something out of those models to the point that I can create agents” that act independently in the real world. His marathon watch, which runs for 12 days on a single charge whilst providing significant intelligence, exemplifies this future direction.


Helbig also offered perspective on innovation cycles, noting “we have a tendency to overestimate the next two years and impact and underestimate what’s happening in 10 years,” suggesting that edge computing advances may be more transformative than currently anticipated.


Investment Risk Assessment and Financial Innovation

The discussion revealed sophisticated understanding of investment risks across different infrastructure layers. Prince Dhawan noted that financial institutions already recognise obsolescence risks by refusing debt financing for GPUs whilst providing it for basic infrastructure like buildings and power systems, validating concerns about the durability of different infrastructure components.


However, Jeff Binder warned that hardware breakthroughs could potentially make entire data centres “almost instantly, at least from a financing perspective, obsolete” if new chip designs achieve dramatic efficiency improvements. This creates a complex risk landscape where investors must balance infrastructure durability against rapid technological advancement.


Unprecedented Government Financing

The panel highlighted unprecedented levels of government investment in AI infrastructure globally. Tuan Ho noted that having tech conferences where prime ministers and heads of state announce hundreds of billions in infrastructure investment represents something “we’re not used to seeing” from a venture capital perspective. This government financing spans multiple nations, creating global competition for AI infrastructure leadership whilst providing both opportunities and challenges for private investors.


US-India Partnership and Strategic Collaboration

Throughout the discussion, the US-India partnership emerged as a particularly promising model for international cooperation in AI infrastructure development. This collaboration combines India’s innovation potential, favourable clean energy economics, and digital infrastructure capabilities with US investment and technological expertise. The partnership spans venture capital investment, corporate strategy, government policy coordination, and technical innovation across the entire AI infrastructure stack.


Future Scenarios and Unresolved Challenges

The panel identified potential future scenarios ranging from successful coordination between distributed energy resources, intelligent grids, and efficient edge computing, to significant overbuilding of centralised infrastructure that becomes stranded as efficiency improvements reduce power requirements.


Several critical challenges remain unaddressed, including permitting issues for clean energy infrastructure, the mismatch between AI development timelines (quarters) and infrastructure development timelines (decades), and the need for financing models that appropriately price obsolescence risks across different infrastructure layers.


Conclusion

This panel discussion successfully shifted focus from AI models to foundational infrastructure requirements, revealing a complex landscape of opportunities, risks, and interdependencies. The consensus emerged that whilst current infrastructure investments are necessary, the ultimate transformation will likely involve more distributed, efficient systems operating closer to users and applications.


The discussion’s most valuable contribution was demonstrating that AI infrastructure requires simultaneous consideration of technical, financial, geopolitical, and environmental dimensions rather than optimising any single factor in isolation. As Tuan Ho observed, the unprecedented scale of government investment in AI infrastructure represents a new paradigm for venture capitalists and entrepreneurs alike. This holistic perspective, combined with the measurability advantages that distinguish AI deployment from previous technology waves, will be essential as the world navigates what may indeed prove to be the largest infrastructure buildout in human history.


Session transcript: Complete transcript of the session
Participant

Thank you. Thank you. Thank you. …this infrastructure right now and closing the gap between commitments and capacity. This is where the real opportunity lives. Moderating today’s session is Ujjwal Kumar, founder and CEO of Quantum Alliance and co-founder of Cognosy AI. Quantum Alliance works with universities, industry and governments to get top talent working on the foundational problems beneath AI, from critical minerals to energy to semiconductors. He will be joined by Tuan Ho, unicorn founder turned venture capitalist, now partner at X Fund; Jeff Binder, serial entrepreneur with multiple Fortune 500 exits, now at Harvard Venture Partners; Prince Dhawan, IAS officer and executive director at REC Limited under the Ministry of Power; and Vrushali Gaud, global director of climate operations at Google.

Dr. Tobias Helbig, VP of Innovation at NXP Semiconductors. Ujjwal, over to you. We’ll start with a quick picture of the panelists, if you can all rise. Thank you.

Ujjwal Kumar

Thank you, everyone. We are up against Jan, we are up against her boss. But let’s have fun in this panel. The broader idea: we have been hearing all about AI models and what AI can do, and this panel is about AI at scale now, what it needs, what it would take to fulfill when we talk about AI-driven companies and AI-driven solutions. Let’s talk about this now, as AI is forcing creative destruction of how the world builds infrastructure: energy, semiconductors, critical minerals, physical edge systems, data centers. The US and India are now building this together, with rare earth corridors in India’s union budget. Google committed $15 billion to India and accelerated its focus on clean energy.

Jensen at Davos called this the largest infrastructure build-out in human history. Two weeks ago, 54 countries launched FORGE, the first global framework for the minerals that power AI. Yesterday, at this very summit, Sundar Pichai laid out Google’s $15 billion commitment to India: a gigawatt-scale AI hub in Vizag and four new subsea cables between the US and India. The models are getting the attention, the infrastructure is getting the money, and we have exactly the right people to figure out where all this is going and what we need further. Thank you. To start with: X Fund was an early investor in Vulcan Elements, now backed by a 1.4 billion US dollar government partnership to bring back America’s rare earth magnet supply chain.

What, according to you, does the US-India critical minerals corridor look like from the investor side?

Tuan Ho

First off, thank you for having us here. I’m really glad you pulled this panel together. I think your point earlier, about the focus we tend to put on discussing AI models and everything in the model layer while we don’t really talk about what exists underneath that, is actually a really unique topic to cover, and one that I, and X Fund generally, have been extremely excited about. So the way I look at it is that in this drive we have to build intelligence, we tend to talk a lot about the industrial revolution it will create, but we also have to look at the industrial revolution that will be required in order to support the creation of that infrastructure. There are a lot of inputs required for AI infrastructure. You’ve got energy: the power grid. Power generation has to be clean, sustainable, renewable, and the demands of AI infrastructure are going to require us to really solve large problems as to how to supply that power. You’ve got critical minerals, which everybody’s talking about now. You mentioned Vulcan Elements, right. Vulcan Elements was a business that we invested in.

It was a Navy veteran out of Harvard who had spent a lot of time looking at supply chain issues for the U.S. military and noted that over 90% of rare earth magnets were coming through China. That creates a strategic vulnerability for the United States. And the reason it creates a vulnerability is because, if you think about it, there are so many things that we need, that we build, that require magnets. You can’t build hard drives. You can’t build motors. Nothing that moves can be built without them. We talk a lot about chips; you can’t manufacture chips without magnets. And so I look at, you know, problems like that, and for the first time, you know, I’m not going to be able to build a magnet.

I think you’ve got guys like me, venture capitalists, looking at the opportunity to invest in building up that type of infrastructure to solve those sorts of problems. But that’s just one, right? You have to figure out: how do you source it? Where do you get the materials from? And so when you look at things happening on the geopolitical scale, for the first time, at least in the United States, we’re looking at these trade deals to try to figure out where we’re going to supply the materials from. How are we going to completely rebuild our power grid? How are we going to build up the capacity for refining those materials? Where are we going to source them from?

Where are we going to have to get them from? As we’re looking at data centers, how can we make them more sustainable, more power-efficient? In order to support the AI needs that we have right now: power consumption for data infrastructure is already, I think, approaching 10% plus. How are you going to meet that demand? And so from an investor perspective, yes, we’re going to look at all of the cool AI products that entrepreneurs are looking to build. But on the other side, what is very exciting for us is looking at all the low-hanging fruit that exists across all of the inputs, in industries that have not seen innovation in decades.

Power grids that have not been upgraded for the better part of a century. It creates huge opportunity for us as investors. And you mentioned US-India. Yeah, I find a lot of opportunity in the U.S. and India working more closely together to try to figure out how, on both sides of the world, we can build great companies to meet that need.

Ujjwal Kumar

Thanks, Tuan. You spoke about needs, about innovation, about what early-stage startups should be focusing on. I’ll move to Jeff, who has built companies from scratch and made multiple Fortune 500 exits. I would ask him: what would it take for young entrepreneurs to build and scale in this space, and do it successfully?

Jeff Binder

Thank you for having us and putting this together. I know that we have more people here than Sundar has at his keynote. And I heard Sam Altman only had 10 yesterday, so we’ve already outdone him. So, you know, I think it’s such an interesting time. I was there in the early web days, in ’99 and 2000 and 2001, and the excitement around the Internet obviously fueled a massive tech boom, and ultimately a fiber build-out, an infrastructure build-out. It took years for all of that infrastructure to be absorbed, and ultimately it was. I think the difference this time around is that the tool sets themselves that entrepreneurs have available to them are smart. And they can bridge some of the challenges, especially since we’re talking about partnerships between the U.S.

and India. But oftentimes, especially when you get to things like user interfaces, there are cultural differences in the development work that would happen in India for an Indian audience, or the U.S., or China. That’s always made it more difficult for collaboration on the front end. And many of the products that entrepreneurs are working on are often front-end-facing, consumer-facing, at least initially. They’re generally not building a lot of B2B platforms; that happens later, when you get the experience as to what’s necessary in a business environment. And I think that AI is going to change drastically the ability to leverage cross-border talent, in particular with India and China and other places that were harder to leverage before.

Certainly from a quality perspective, for SQA and back-end development, I think entrepreneurs have been able to leverage India and other places for the last couple of decades. But it’s been harder to get the front end of a product to match the cultural necessities of a given market. And I think that’s going to change. And I think for entrepreneurs, it means that they have a massive amount of leverage that they didn’t have before. And it means that we’re going to have a flood of new ideas that are actually brought to market, work fairly well, and allow entrepreneurs to deliver products with probably a tenth of the capital, depending on the product, obviously. If you’re doing magnets, you’re stuck with the physical properties and refining and some of the things that you can’t do from an IT perspective.

But I think for entrepreneurs, it’s an extraordinary opportunity. And those that will win, in my mind, over the next few years are going to be the ones that leverage the tools most quickly, because it’s not possible any longer to develop the way people were developing two or three years ago. If you do that, you’re going to be way late. And so now it’s not so much about your product, but about learning what the state of the art is, which is literally changing every day in AI. And it’s a golden age, I think, for entrepreneurs. I think it’s going to be a much, much more difficult environment for investors, because the wealth of ideas is going to get much further along.

And that makes it more difficult, not less difficult, I think, to be an investor, because you have more mature products. The entrepreneurs are going to be more mature, and the entrepreneurs will have more leverage. And they may be able to make it to market much earlier than they would have otherwise, which means where they might have gone for a second round of seed capital, they may be able to get to market and be into revenue with a single small round of seed capital, or no seed capital. And that makes that whole early-stage ecosystem of angel and venture investing much more challenging. And so I think it’s just a great time. I do think there’s a huge risk, and I don’t think it affects young entrepreneurs, but I do think there’s a huge risk of an overbuild.

It feels a lot like the leverage in terms of optimizing hardware and infrastructure is only going to get better, and it’s potentially going to leave us with, I know right now we’re worried about power, we’re worried about compute, we’re worried about data centers, but I would project that if we sat here two years from now, we’ll be looking at a grand overbuild, with a real challenge around ROI and how to make all these investments work. And that’s going to be another positive for the entrepreneur, because those resources are going to become very inexpensive relative to even what they are today. So in that sense, I think it’s a great day for young entrepreneurs.

Ujjwal Kumar

Thanks, Jeff. Picking up from you on leveraging some of the AI tools, going to market faster, build-outs, ROIs: now we move to the right person, actually, since you spoke about ROIs. One of the things I was very curious about: all the world leaders coming here and putting a big bet on India. We just heard Sundar yesterday talking about $15 billion, new subsea cables with India, new innovation hubs. Vrushali, you are leading the clean energy transition at Google. I wanted to understand what AI’s demand at Google’s scale means in terms of energy, and why you are placing such heavy weight on India.

Vrushali Gaud

Okay, good. Thank you. Thank you all for joining; I appreciate it. I am being pitted against my boss, so I’m going to try and keep it as entertaining and as valuable as I can. That’s a very interesting question in terms of the scale and why India, but I’ll build on a few things that both of you spoke about. One of the interesting things about this particular AI innovation timeframe is that it runs across the full stack. Typically we talk about software, AI models, applications; those are the shiny objects everybody talks about, and very exciting. But then there is the amount of work that’s happening underneath that, beneath the AI, which is why I love this session too.

The physical infrastructure layer of it is fascinating. And that goes everywhere from the foundational layer, which is your materials, your data center construction, your access to energy, to water, to all those foundations. So then how do you construct things the right way? We forget about the physical: these are all buildings, quite a few of them. How do you construct them the right way? And then how do you operate them the right way? And then the use of that. And so what we are seeing is just tremendous value and innovation across the entire stack of AI, which I, as an engineer, find very, very fascinating. So, in terms of Google.

The privilege and responsibility that Google has is: how do we bring about the most value across that full stack, both from a business perspective and from an impact perspective? And so a lot of the investments you’re seeing are across those pieces, right? If you walked across this summit, you would hear different pieces of it. Our expo mostly featured the product side: AI for education, AI for healthcare, AI for agriculture. How do you use AI in domains, contextualize it? And all of that has a layer of a country and where that context is. And then the announcements you talked about were a lot more on the physical side. So it’s what’s required for data centers.

You need good design, good builds, but then you also need network. And so the subsea cable announcement is part of that. And if you read a little bit, it’s fascinating. It’s an India-America connection, but one way we are building goes across Africa. So that’s a big region, a big region to bring on board. And the other way it goes is through Singapore and Australia. So it’s a fascinating network, because, again, you can build data centers, but what’s the point if you can’t actually use them, network them, and bring them closer to wherever the edge is? So, super excited about those pieces. Now, going to your point about why India.

So why not India, I think, is where I would start. Most people know it’s a billion-plus-user, great growth market. It’s a lot of young population, who we think are going to be the frontier of the growth, and a population that is very eager about tech and tech adoption. So if you think about what happened with fintech and the phones and digital tech, a lot of the APAC countries, Asia and the Global South, jumped ahead. I see people who didn’t even have credit cards; now everybody uses GPay and UPI and all of those, right? So there’s a whole revolution where you can skip and build. And I think that’s another big exciting part of investing in India: can a generation of innovators come up

Who don’t have the linear growth that we’ve seen in other regions, but can leapfrog it? And I, from an operational perspective, feel super excited the same way about clean energy. You can talk a little bit more about that, Prince, but India is one of the few places where the math on clean energy just works. There’s growth, so there’s tremendous demand; lots of solar and wind potential; tremendous research going on in battery and long-duration storage; good policies. And then the biggest issue, which we’ve seen in the US, is the grid, but they’re trying to build a high-frequency grid, which is fabulous. Then you bring in the innovation on that layer, and that’s the unblock. And if only you could solve permitting issues, then you’re solving the whole stack. That’s the excitement: it’s where the math works, where the business case works, where you’ve got the talent and the innovation potential, and then you also have the users.

Ujjwal Kumar

Wow, that’s amazing. I do understand now, thank you. So, moving from that side: we heard about the demand side, and I’d love to take the insights from Prince, who is actually building the digital public infrastructure for the power sector. They have been doing some incredible work, which I’ve seen in the past few weeks, particularly around P2P trading. And Prince, with all the initiatives around grid reforms and the trading platform you are launching at this summit, how do you see AI’s energy demand going, and how are you supporting it? Your insights.

Prince Dhawan

Thank you. Thank you, Ujjwal. Thank you, everybody, for being here this morning. Let me first start by putting the AI compute demand in context. Honestly, resonating with what Tuan had also said in his remarks, I feel that AI essentially will not scale unless your power is programmable. So the AI, I don’t want to call it a race, but the AI build will depend a lot not on chips, as we might think; we have the capacity and capability the world over to solve that problem. I think the binding constraint will be grids: how intelligent and resilient your grids are. And I believe that is what is going to define the development of most of the compute infrastructure.

Now, what India has essentially started doing is redefining the architecture, redefining how we view the grid. India already has one nation, one grid, which essentially means one frequency. And now we are also building one digital interoperable layer, which is being brought in by the India Energy Stack. So what does this mean? What does the stack essentially do? The India Energy Stack basically creates the interoperable rails for systems to interact with each other. If you have a data center, it is not just creating high demand; it is creating high peak demand that needs the grid to respond. And that is where you need coordination at scale.

So what is going to be scarce in the times to come is not electrification. As Vrushali said, the math works when you talk about solar power, when you talk about wind, even hydro. What needs to be ensured is coordination at scale, and that is what the India Energy Stack is essentially doing by laying down those foundational building blocks. Now, what we started with was a first showcase of how you can use the stack to source energy from distributed energy resources, like the solar rooftop panels we have on top of our households. We can literally transact in energy the same way we transact using GPay or UPI payments.

Or using other such applications, Paytm, PhonePe, etc. So, similarly, just imagine that the data center, instead of relying on long-term PPAs and then hoping that the grid will deliver, can source its power from millions of such distributed rooftop assets, dynamically, at scale. Just imagine the power of that. It can literally generate livelihoods for a lot of people who may not even be in geographical proximity to the data center: individual retail households can monetize their rooftop solar power by supplying to such data centers. How does the stack enable it? The stack lays down standard rules for measurement, identification, and settlement, all in near real time. So that’s how the architecture of the grid itself is changing.
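The rails Prince describes, standard rules for identification, measurement, and settlement, can be pictured as a toy per-interval settlement loop: registered participants with identifiers, metered readings, and a pro-rata payout from data-center demand to rooftop producers. Everything below is an illustrative assumption (the class names, the flat tariff, the pro-rata rule), not the actual India Energy Stack design or API.

```python
# Toy sketch of interoperable settlement rails for P2P energy trading.
# All names and rules are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Participant:
    pid: str                    # registered identifier (e.g. a meter ID)
    role: str                   # "producer" (rooftop solar) or "consumer" (data center)
    tariff_inr_per_kwh: float   # agreed price paid to a producer

@dataclass
class Reading:
    pid: str
    kwh: float                  # metered energy for one settlement interval

def settle(participants, readings):
    """Match total rooftop supply to data-center demand for one interval
    and compute pro-rata payouts to producers. Returns (traded_kwh, payouts)."""
    by_id = {p.pid: p for p in participants}
    supply = {r.pid: r.kwh for r in readings if by_id[r.pid].role == "producer"}
    demand = sum(r.kwh for r in readings if by_id[r.pid].role == "consumer")
    total_supply = sum(supply.values())
    traded = min(total_supply, demand)          # can only trade what both sides cover
    payouts = {}
    for pid, kwh in supply.items():
        share = kwh / total_supply if total_supply else 0.0
        payouts[pid] = round(share * traded * by_id[pid].tariff_inr_per_kwh, 2)
    return traded, payouts

participants = [
    Participant("house-1", "producer", 5.0),
    Participant("house-2", "producer", 5.0),
    Participant("dc-1", "consumer", 0.0),
]
readings = [Reading("house-1", 3.0), Reading("house-2", 1.0), Reading("dc-1", 2.0)]
traded, payouts = settle(participants, readings)
print(traded, payouts)   # 2.0 kWh traded; payouts split 3:1 across the two houses
```

A real stack would also carry authentication, grid constraints, and dispute handling on these rails; the sketch only shows why shared rules for identity, metering, and settlement are the piece that lets millions of small producers transact with one large consumer.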

Let me add: the grid generally evolves in decades. As Tuan said, we have not invested heavily in grids the world over; China might be an exception there. But India has started doing the plumbing work, the hard work, on that layer. AI evolves in quarters, but the grid evolves in decades, so how would you keep pace? That is where the India Energy Stack comes in: we push that development frontier, and we enable people to talk to each other on the grid. So AI will need not just electrons, not just chips; it actually needs intelligent electrons, and that is where the India Energy Stack sits. In the times to come, I think that should be another reason, beyond economics, that companies like Google and others would take bets on India.

And let me also tell you, you did recount Sundar’s message about $15 billion, but there’s also Reliance’s message about a trillion dollars in the next seven years. So let’s not forget that as well. I’m just putting stuff in context.

Ujjwal Kumar

Two minutes to one.

Tuan Ho

…like India. And by the way, India is very, very well represented in Cambridge, which is how I probably met half the people on this panel. But it really does create these global-scale opportunities to reinvent, to create and support this other industrial revolution, beyond just what the AI and the intelligence is allowing us to do.

Ujjwal Kumar

Thanks, Tuan. Yeah, that’s exciting. Now with that, we want to move to Dr. Tobias. We have been talking about the physical infrastructure layer. He has been working in semiconductor innovation for the last 20 years, building it across the U.S., Europe, and India. I wanted to ask him: what does the next innovation look like to you? What are you working on at this point? Where are you placing your bet?

Tobias Helbig

Yeah, thank you very much for the question, and thank you so much for having me here. It’s great. I would like to build a little bit, Jeff, on what you said earlier, where you had this: are we on the right track? What the heck are we doing? Let’s zoom out for a moment. In 1942, the head of IBM made a statement: there’s a world market for about five computers. And he was right, given the kind of computers he was looking at. We know better now, some years later: there are laptops, PCs, mobile phones; there’s basically a computer in every device, billions of computers. Now what we discuss is: hey, AI, huge disruption, power hungry like hell.

Shall we build some new computers? Shall we build power plants? Or how do we run it with renewable energy? And I get this nagging feeling: is this really it, or are we missing, in what we’re discussing, what came after those five computers? If I take benchmarks: here’s my brain, and it takes 20 watts. There’s a fly, which is a pretty agile, intelligent robot, below a milliwatt. There’s something which is going to happen which is different from what we’re discussing here at the moment. And that is what’s driving us as a semiconductor company: building on what starts now and driving it out into the real world. So we today have products where, on 10 watts or so, you can run very meaningful LLMs. You can interact; you can drive the intelligence into the edge, into our real world. That goes hand in hand with what’s happening around here: moving from perception (I can perceive something, is it a dog, a cat?) to thinking (generative AI, I can create something out of those models) to the point that I can create agents, stuff which acts on my behalf out there in the real field. And this disruption you’re looking at here at the moment, which is driving all these conversations, drives the intelligence close to us, into the real world, to the point that these devices, these robots, whatever you want to call them, will be able to learn.

So what we discuss here, and this is a huge challenge, I totally agree with all the statements made before, will see a next phase. We’ll see this moving into the real world, moving close to me, moving into autonomous systems, which ultimately change my life and change industries. There is this second wave building up, and my expectation, to some extent my hope, from where we sit as a company, is that this huge thing you’re already discussing with data centers is the five computers. And what is coming is the billions of edge devices, which we will also see in the AI space. And just to give an example: I’m running marathons with a watch on me.

I charged it before I left Germany, and it still has 12 days of battery power. And there’s a lot of intelligence in that watch. This is where we are going. So the one thing is: feed the beast and make it happen. The second is: keep the beast from being so hungry, and look at totally different models which will come in the next phase. Thank you.

Ujjwal Kumar

This is very interesting. Now we are talking about taking AI out of the data center. Any comments from the fellow panelists?

Jeff Binder

I agree with him. I think the IBM analogy is a very good one. I think we are all focused on the core and on centralization, and as we’ve seen in many markets, they move from centralization to decentralization to hybrid approaches. So that’s, I think, an incredibly astute observation. I do think edge devices ultimately have to be a core component in the full proliferation of AI. And that means that, as he said, small amounts of power can generate lots of value. It doesn’t necessarily have to be tokens in the center of a data center. So that goes to my concern. And look, I think all of the resources that are being built will eventually be consumed.

That’s a given. It’s a question of when, and on what ROI they’ll deliver as they’re being consumed and used. And I think that’s a huge risk, because agents at the edge are probably going to end up being, in the end, a much more likely modality a decade from now. And it’ll be interesting to watch, for sure.

Prince Dhawan

Okay, I do have a small addition, and I completely agree, actually. As Jeff said, it was an absolutely astute observation. But you know where you can see this being played out in practice even today? When you talk about finance. The finance world knows this. I work for a non-banking financial company, and one of our main products is infrastructure financing, where data centers are a product that we finance. Vrushali was in a panel discussion that spoke about the trifecta of AI, energy, and finance. But you know, the finance bros, they have figured it out, because today, if you go for financing of a data center, you won’t get debt financing for GPUs.

GPUs are mostly financed by equity, because there is obsolescence risk in GPUs. You would get debt financing for the brick and mortar, maybe even for sourcing power, but you won’t get debt financing for GPUs. And there you have it: they are seeing the big picture being played out. So, completely on board there, yes.

Ujjwal Kumar

Again, Vrushali has been…

Vrushali Gaud

No, no, no, I’m good.

Ujjwal Kumar

No, no, no. We want to hear from you. Please, go ahead.

Vrushali Gaud

No, I think the risk of stranded assets, the way you said, and the ROI, is real, in the sense of where you’re investing and in what. But I think your point is very astute. There are portions of this which will be obsolete. There are portions of this which will be very easily replaced, whether it’s on the chip side or in how you write the programs. Even the models, right? You went from large-scale to smaller. How do you build them? But what I’m also hoping is that the bets on some of the hard infrastructure are just good things to do. To me, the fact that we are seeing a transition to renewables, or a transition to the grids being operated in a better way, some of the boring bits that people didn’t pay attention to, how you run things efficiently, those I think are good pieces of this. And then it goes to: you right-size it. We’ll get over the FOMO and the extra investments, and it’ll probably get right-sized into where in the stack you really want to invest, with the ROI.

Ujjwal Kumar

Thank you. So with this, I’d like to take it a little deeper. We spoke about some of the opportunities; on some we agree, on some we may not. Tuan, you invest early in founders. I wanted to check with you: is there a mismatch between what is getting funded and what needs to be funded?

Tuan Ho

That’s a good question. Is there a mismatch between what’s getting funded and what needs to be funded? Well, I mean, probably. Yeah, I mean, I think, well, okay, going to the theme of this, I think there is more likely to be a mismatch between what is getting funded in the sort of like the pure AI world, if we’re talking about the foundational models. I think, Jeff, I think you had made this comment a little bit earlier. You look at a lot of the AI companies out there, and it’s a little bit like the dot -com era where you’ll see 100 companies, and the reality is that in five years, there will be five of them that are left.

I think one reason why I like focusing on infrastructure-type businesses is because I think there’s more durability and clarity to exactly what the problems are that you’re trying to solve. I mean, every great startup begins with a really well-understood problem and a product, what they call product-market fit: a founder that’s able to build a great solution to that problem, one that has some sort of market validation and need. And what I find really exciting about infrastructure businesses is that I think the problems are a lot clearer in terms of what you’re trying to solve. To your point, there’s a lot more risk in the GPUs. There’s a lot more risk in the models that you’re building around them.

And the reality, too, is that those things are also going to change a lot faster. I mean, if you look at a data center as an example, a data center ultimately is a giant box that provides a lot of power at scale, and it needs to be able to efficiently cool what’s inside it. In terms of what GPUs or compute you put inside, that can change over many, many generations. But the utility of the infrastructure you’ve built there will always have value. So, I don’t know if that answers your question.

Jeff Binder

To add to that: if you look at the dot-com era, with the exception of hardware companies, which were in things like switches (Cisco and other players), it was very difficult to determine whether a product was good or not. For those who remember MySpace, before Facebook, it looked like MySpace was going to own the social media space. Of course, half the people in here probably don’t even know what MySpace was. It’s much different now. There’s a measurability component in all aspects of AI that didn’t exist in the dot-com era. You had commerce platforms, but it wasn’t clear what made one commerce platform better than another. The consumer would ultimately decide that over time and through iterations.

And if you remember, Amazon for a long time was known for one-click ordering. Well, none of us really want to do that, because we don’t want to make a mistake and find out that we bought the wrong thing. I think now it’s different. Almost every aspect of artificial intelligence deployment, from the foundational aspects all the way to the top of the stack, is measurable. And so that’s going to make the success and failure of businesses much clearer, much sooner, than it was in the dot-com era. And I think that’s going to be ultimately the element that shakes out companies very quickly. And then, to the point about obsolescence and GPUs: we don’t know what the hardware roadmaps look like, even inside of a Google or Jensen’s company, NVIDIA.

Or somebody else that’s out there. And power, which is the fundamental thing I think we’re talking about foundationally, can be grossly disrupted by those advances, because if somebody has a breakthrough on chip design that’s now 10 or 50 or 100x what somebody else deployed, their data center is almost instantly, at least from a financing perspective, obsolete. And so that’s a huge danger, I think, for investors in those foundational areas.

Ujjwal Kumar

Thank you. Dr. Tobias, you have also been very strongly involved in the innovation ecosystem. What is your take on this? What are you seeing, since you are also involved in India? I’ve seen your company running hackathons and competitions. I’d love to hear more from you.

Tobias Helbig

Adding to what was just discussed: we have a tendency to overestimate the impact of the next two years and underestimate what happens in ten. And at the moment, we are going into this with a huge bang, which may make these ups and downs even bigger. From my perspective, everything we are discussing on AI is absolutely real. This is a huge disruption. This is changing industries, changing lives, changing professions. Wherever there is data, there is change. And that, in the end, is driving what we are doing: developing the products we have, which are semiconductor products, and being in India for that for literally decades. Development centers are in our DNA, through our history as a company, from Motorola, Freescale, NXP, here in the Noida-Delhi region, in Bangalore, and so on.

So, very much working on that. And on your question from an innovation perspective: well, we all know the hype cycle. And that’s tough, because it always means that there is disruption, and there is a trough of disillusionment. And we’ve seen it for all major breakthroughs, especially when they are being hyped up like hell. There’s self-driving cars; there are other things. In the end, these things get real. They have the substance. They happen, and they transform things. And AI will. On the way there, and also on the question of what’s the risk, what’s the bad, what’s coming from the sidelines: I think we will still see troughs of disillusionment and surprises. There was one a while ago, when this wave had a DeepSeek moment.

Such moments will come again. And there will be a recovery from that, I’m also sure. I’ve been in innovation for literally decades. I love it. It’s a roller coaster: we overestimate, we get shocked, and we get it right.

Ujjwal Kumar

Thank you. With that, I’ll go to Vrushali. I was very excited when I saw Google launching the Google Climate Technology Center. I would like you to quickly give your insights: what is it about, and what should innovators be looking for in it?

Vrushali Gaud

Yes, thank you. Super excited. This week we announced, in partnership with the Office of the Principal Scientific Adviser to the Government of India, Google’s Center for Climate Tech. So, how did we get here? Because that’s the interesting part. We see a lot of innovation. I live in Silicon Valley, I was raised in India, back and forth; I’ve lived across the world. There’s innovation which comes from big institutes, big academic settings, big companies. But there’s also innovation that comes from different corners of the world. What we loved about the PSA philosophy was that they’re trying to get more Tier 2 and Tier 3 cities, and also a wider spread of universities and academia, involved in this.

So, you know, beyond your premier ones. That was very enticing. The other thing is: how do we take innovation down to the grassroots? Which I think also helps with some of the hype cycle, because you’re making it local and contextual to where those cities are. So with that in mind, our center has a couple of big pillars. One is skilling. There is a lot of focus on AI skilling, but we think there’s green skilling too: green skills in decarbonization, clean energy, materials, chemistry. There’s a lot new in those spaces which hasn’t been brought into college or university curricula.

So we want to build upon that. A lot of the construction and investments are happening in Tier 2 cities, so we think it’s a great way to get a more diverse pool skilled in that. That’s pillar number one. The second one is low-carbon materials. So you go to embodied carbon, something you’re all very passionate about. How do you drive innovation in construction, which is going to be huge? And again, it’s not just data centers: what you learn from data centers can apply to real estate, to commercial buildings. So it’s to do with low-carbon steel, low-carbon cement, and low-carbon materials as you see them go through that construction cycle. And the third one we have right now is sustainable aviation fuel, which is a little different from data centers.

It’s not that, but I think it’s a good growing area. One of the philosophies we have is: where can we find first-of-a-kind pilots, places where we can build and bring the Google brand and innovation? And we think sustainable aviation fuel, in a growing country that now has some of the fastest-growing airports and air traffic, would be a good one too. And our hope, as we go through this, is to be very outcomes-based: not pure research, but pilots and actual uptake.

Ujjwal Kumar

Thank you. Very quickly, Tuan, do you have any closing 30 seconds?

Tuan Ho

I was going to say, one thing that we didn’t discuss as much, but which is important, especially as we’re at an event like this, is government financing. What’s been really exciting about this is having a tech conference where you have the Prime Minister and multiple heads of state coming from around the world to say: these are things that we need to invest in, these are things that we need to support. From a tech VC side of things, that is something we’re not used to seeing, but I also think it’s very exciting. In the United States you’re seeing hundreds of billions of dollars being invested by the federal government into infrastructure.

And you’re seeing similar investments being made in countries like India and elsewhere; China’s been doing this for a while, but you’re seeing it happen around the world. And so, yeah, where are we right now? Like I said at the beginning, there’s the industrial revolution that AI is ushering in, but there’s also the industrial revolution that the requirements of AI are going to usher in. So I think it’s going to be a bright future for us all.

Ujjwal Kumar

Thank you. I think that’s a great closing for us, and I enjoyed talking to all of you. I really had so much fun. Thanks, your insights are amazing. Hopefully the innovators watching got something out of it, and we’ll see some new people coming to all of us doing the innovations. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Tuan Ho
6 arguments · 149 words per minute · 1310 words · 525 seconds
Argument 1
Critical minerals supply chain vulnerability creates strategic risks and investment opportunities
EXPLANATION
Tuan Ho argues that over 90% of rare earth magnets come through China, creating strategic vulnerability for the United States since magnets are essential for manufacturing hard drives, motors, chips, and anything that moves. This vulnerability creates investment opportunities for venture capitalists to fund infrastructure that solves these supply chain problems.
EVIDENCE
XFund invested in Vulcan Elements, founded by a Navy veteran from Harvard who identified that over 90% of rare earth magnets were coming through China. Vulcan Elements is now backed by $1.4 billion US government partnership to bring America’s rare earth magnet supply chain.
MAJOR DISCUSSION POINT
AI Infrastructure Requirements and Investment Opportunities
DISAGREED WITH
Prince Dhawan
Argument 2
Infrastructure businesses offer more durability and clearer problem definition than pure AI models
EXPLANATION
Tuan Ho contends that infrastructure-type businesses have more durability and clarity regarding the problems they’re trying to solve compared to pure AI companies. He suggests that infrastructure like data centers will retain value across multiple generations of compute technology, while AI models and GPUs face higher obsolescence risk.
EVIDENCE
He notes that data centers are essentially giant boxes providing power and cooling that can adapt to different generations of GPUs or compute, while the foundational infrastructure retains its utility. He compares this to the dot-com era where 100 companies might exist but only 5 will remain in five years.
MAJOR DISCUSSION POINT
Investment Risk and Market Dynamics
AGREED WITH
Jeff Binder, Prince Dhawan
DISAGREED WITH
Jeff Binder
Argument 3
Power grids unchanged for decades create huge innovation opportunities for investors
EXPLANATION
Tuan Ho argues that power grids have not been upgraded for the better part of a century, creating significant opportunities for investors in industries that haven’t been innovated in for decades. He sees this as low-hanging fruit for investment as AI infrastructure demands require complete rebuilding of power capacity.
EVIDENCE
He mentions that power consumption for data infrastructure is already approaching 10% and asks how this growing demand will be met, pointing to the need for power grid upgrades, sustainable data centers, and new power generation capacity.
MAJOR DISCUSSION POINT
Energy and Grid Modernization for AI
AGREED WITH
Vrushali Gaud
Argument 4
US-India partnerships create global scale opportunities for infrastructure innovation
EXPLANATION
Tuan Ho sees significant opportunity in US-India collaboration to build companies that meet AI infrastructure needs on both sides of the world. He notes that India is well-represented in places like Cambridge, creating global connections for this industrial revolution beyond just AI intelligence capabilities.
EVIDENCE
He mentions meeting half the panel through India’s representation in Cambridge and references the broader industrial revolution that AI requirements will create, requiring coordination between countries for infrastructure development.
MAJOR DISCUSSION POINT
India as Strategic AI Hub
Argument 5
Mismatch exists between funding for AI models versus needed infrastructure investments
EXPLANATION
When asked about funding mismatches, Tuan Ho suggests there’s likely more mismatch in pure AI foundational models funding compared to infrastructure businesses. He argues that infrastructure problems are clearer and more durable, while AI companies face risks similar to the dot-com era where many will not survive.
EVIDENCE
He references the dot-com era comparison where 100 companies existed but only 5 remained after five years, and notes that infrastructure like data centers provides lasting utility regardless of what compute technology is placed inside them.
MAJOR DISCUSSION POINT
Investment Risk and Market Dynamics
Argument 6
Government financing at unprecedented scale supports infrastructure development globally
EXPLANATION
Tuan Ho emphasizes the unprecedented level of government investment in infrastructure, noting that having tech conferences where Prime Ministers and heads of state commit hundreds of billions of dollars is something venture capitalists aren’t used to seeing. He sees this as very exciting for the tech VC ecosystem.
EVIDENCE
He mentions hundreds of billions of dollars being invested by the federal government in the United States into infrastructure, with similar investments in India and other countries, noting that China has been doing this for a while but now it’s happening globally.
MAJOR DISCUSSION POINT
Investment Risk and Market Dynamics
Jeff Binder
5 arguments · 148 words per minute · 1352 words · 546 seconds
Argument 1
AI entrepreneurs now have unprecedented leverage with smart tools reducing capital requirements
EXPLANATION
Jeff Binder argues that AI tools are giving entrepreneurs massive leverage they didn’t have before, allowing them to deliver products with potentially a tenth of the capital required previously. He contends that entrepreneurs who leverage these tools most quickly will win, as it’s no longer possible to develop using methods from 2-3 years ago.
EVIDENCE
He notes that AI tools can bridge cultural differences in development work between countries like the US and India, particularly in user interfaces and front-end development, which was previously difficult for cross-border collaboration. He mentions entrepreneurs may reach market and revenue with single small seed rounds instead of multiple rounds.
MAJOR DISCUSSION POINT
AI Infrastructure Requirements and Investment Opportunities
Argument 2
Market evolution follows centralization to decentralization to hybrid approaches pattern
EXPLANATION
Jeff Binder agrees with the observation that markets typically move from centralization to decentralization to hybrid approaches. He believes edge devices will ultimately be the core component in full AI proliferation, meaning small amounts of power can generate lots of value without requiring tokens in centralized data centers.
EVIDENCE
He references the IBM analogy about five computers and notes that the focus on core centralization will shift, with edge devices becoming more important for AI deployment in the future.
MAJOR DISCUSSION POINT
Evolution from Centralized to Edge AI Computing
AGREED WITH
Tobias Helbig
Argument 3
Risk of infrastructure overbuild similar to fiber buildout, but resources will eventually be consumed
EXPLANATION
Jeff Binder warns of a potential grand overbuild in AI infrastructure, comparing it to the early web days and fiber buildout that took years to be absorbed. However, he believes all resources will eventually be consumed, with the main question being timing and ROI as they’re utilized.
EVIDENCE
He draws from his experience in the early web days of 1999-2001, noting the massive tech boom and infrastructure buildout that followed, and suggests we might see similar patterns with AI infrastructure investment.
MAJOR DISCUSSION POINT
Investment Risk and Market Dynamics
DISAGREED WITH
Tuan Ho
Argument 4
Cross-border talent collaboration becomes easier with AI tools bridging cultural differences
EXPLANATION
Jeff Binder argues that AI tools are changing the ability to leverage cross-border talent, particularly with India and China, by bridging cultural differences that previously made collaboration difficult, especially in front-end and consumer-facing products. This gives entrepreneurs access to global talent pools more effectively.
EVIDENCE
He notes that while entrepreneurs could leverage India for quality SQA and back-end development for decades, it was harder to get front-end products to match cultural necessities of given markets, but AI is changing this dynamic.
MAJOR DISCUSSION POINT
India as Strategic AI Hub
Argument 5
Measurability in AI makes success/failure clearer and faster than dot-com era
EXPLANATION
Jeff Binder contends that unlike the dot-com era where it was difficult to determine product quality, almost every aspect of AI deployment from foundational to top-of-stack is measurable. This measurability will make business success and failure much clearer and happen much sooner than in previous technology cycles.
EVIDENCE
He contrasts this with the dot-com era, citing examples like MySpace before Facebook, and Amazon’s one-click ordering, where consumer preference determined winners over time through iterations, whereas AI has built-in measurability components.
MAJOR DISCUSSION POINT
Investment Risk and Market Dynamics
AGREED WITH
Tobias Helbig
Vrushali Gaud
5 arguments · 189 words per minute · 1506 words · 477 seconds
Argument 1
Google’s $15 billion India investment focuses on full-stack AI infrastructure including data centers and subsea cables
EXPLANATION
Vrushali Gaud explains that Google’s investment spans the entire AI stack from foundational materials and data center construction to network infrastructure. The subsea cable announcement includes an India-America connection that goes through Africa one way and Singapore-Australia the other way, creating a comprehensive network infrastructure.
EVIDENCE
She details Google’s expo featuring AI for education, healthcare, and agriculture, plus the physical infrastructure announcements including subsea cables, data centers, and the $15 billion commitment. She mentions the cables create a fascinating network connecting multiple continents.
MAJOR DISCUSSION POINT
AI Infrastructure Requirements and Investment Opportunities
DISAGREED WITH
Tobias Helbig
Argument 2
India offers billion-plus users, young tech-eager population, and leapfrog innovation potential
EXPLANATION
Vrushali Gaud argues that India represents an ideal investment opportunity due to its billion-plus user base, young population eager about technology adoption, and potential for leapfrog innovation. She notes how populations jumped ahead in fintech and digital payments, skipping traditional infrastructure like credit cards.
EVIDENCE
She provides examples of how people without credit cards now use GPay and UPI, demonstrating how APAC countries and Global South can skip linear development stages. She sees potential for a generation of innovators who can leapfrog traditional growth patterns.
MAJOR DISCUSSION POINT
India as Strategic AI Hub
Argument 3
Clean energy math works in India due to growth demand, solar/wind potential, and grid innovation
EXPLANATION
Vrushali Gaud contends that India is one of the few places where clean energy economics are favorable due to tremendous demand growth, abundant solar and wind potential, good policies, and innovative grid development. She notes they’re building a high-frequency grid which enables innovation at the infrastructure layer.
EVIDENCE
She mentions tremendous research in battery and long-duration storage, good policies, and contrasts this with US grid issues. She notes that solving permitting issues would complete the entire clean energy stack, making the business case work where talent, innovation potential, and users align.
MAJOR DISCUSSION POINT
Energy and Grid Modernization for AI
AGREED WITH
Prince Dhawan
Argument 4
Innovation happens across full stack from materials to applications, not just software
EXPLANATION
Vrushali Gaud emphasizes that AI innovation spans the complete technology stack, from foundational materials and data center construction to software applications. She argues that while people focus on the ‘shiny objects’ of AI models and applications, significant innovation is happening in the physical infrastructure layer beneath AI.
EVIDENCE
She describes the stack from foundational materials, data center construction, access to energy and water, through to proper construction and operation methods, noting that these are physical buildings requiring proper engineering across all layers.
MAJOR DISCUSSION POINT
Innovation and Technology Development
AGREED WITH
Tuan Ho
Argument 5
Google Climate Technology Center focuses on green skilling, low-carbon materials, and sustainable aviation fuel
EXPLANATION
Vrushali Gaud announces Google’s Climate Technology Center partnership with India’s Office of the Principal Scientific Adviser, focusing on three pillars: green skills development in decarbonization and clean energy, low-carbon materials for construction, and sustainable aviation fuel innovation. The center aims to engage Tier 2 and Tier 3 cities and broader university participation.
EVIDENCE
She explains the partnership philosophy of reaching beyond premier institutions to include more diverse universities and cities, focusing on outcomes-based pilots rather than pure research, and targeting growing areas like India’s expanding airport and air traffic infrastructure.
MAJOR DISCUSSION POINT
Innovation and Technology Development
Prince Dhawan
4 arguments · 129 words per minute · 884 words · 410 seconds
Argument 1
AI will not scale unless power systems become programmable and grids become intelligent
EXPLANATION
Prince Dhawan argues that AI scaling depends not on chips, which can be solved globally, but on intelligent and resilient grids. He contends that the binding constraint for AI development will be grid intelligence and programmability, not semiconductor capacity.
EVIDENCE
He notes that while chip capacity and capability can be solved worldwide, the real challenge lies in grid infrastructure. He emphasizes that AI creates not just high demand but high peak demand requiring grid coordination and response at scale.
MAJOR DISCUSSION POINT
Energy and Grid Modernization for AI
AGREED WITH
Vrushali Gaud
DISAGREED WITH
Tuan Ho
Argument 2
India Energy Stack enables peer-to-peer energy trading and coordination at scale for data centers
EXPLANATION
Prince Dhawan explains that the India Energy Stack creates interoperable rails allowing data centers to source power dynamically from millions of distributed rooftop solar assets, similar to how UPI enables payments. This allows individual households to monetize their solar power by supplying data centers, creating livelihoods regardless of geographical proximity.
EVIDENCE
He provides the analogy of transacting energy like using GPay or UPI payments, with the stack providing standard rules for measurement, identification, and settlement in near real-time. He mentions the first showcase of sourcing energy from distributed solar rooftop panels.
MAJOR DISCUSSION POINT
Energy and Grid Modernization for AI
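The coordination mechanism described above — many distributed rooftop-solar producers, a data center sourcing power dynamically, and standard rules for measurement, identification, and settlement — can be illustrated with a minimal sketch. This is a toy model of the general idea, not the actual India Energy Stack protocol; all names, prices, and matching rules here are illustrative assumptions.

```python
# Toy sketch of peer-to-peer energy sourcing: a data center fills its demand
# from distributed rooftop-solar offers and produces settlement records.
# Identification (producer_id), measurement (kwh), and settlement (amount owed)
# mirror the three standard rules mentioned in the discussion; the
# cheapest-first matching rule is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class Offer:
    producer_id: str      # identification: which rooftop asset is supplying
    kwh: float            # measurement: surplus energy available to sell
    price_per_kwh: float  # offered price, in arbitrary currency units

def source_power(demand_kwh: float, offers: list[Offer]) -> list[tuple[str, float, float]]:
    """Fill a data center's demand from the cheapest offers first, returning
    (producer_id, kwh_bought, amount_owed) settlement records."""
    settlements = []
    remaining = demand_kwh
    for offer in sorted(offers, key=lambda o: o.price_per_kwh):
        if remaining <= 0:
            break
        bought = min(offer.kwh, remaining)
        settlements.append((offer.producer_id, bought, bought * offer.price_per_kwh))
        remaining -= bought
    return settlements

offers = [
    Offer("rooftop-A", kwh=5.0, price_per_kwh=4.0),
    Offer("rooftop-B", kwh=3.0, price_per_kwh=3.5),
    Offer("rooftop-C", kwh=10.0, price_per_kwh=5.0),
]
records = source_power(10.0, offers)
# rooftop-B is fully used (3 kWh), then rooftop-A (5 kWh), then 2 kWh from rooftop-C
```

The point of the UPI analogy is exactly this interoperability: once the rails standardize who supplied how much and what is owed, any household asset can transact with any large consumer regardless of geography.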
Argument 3
India’s digital public infrastructure and grid reforms create unique advantages for AI deployment
EXPLANATION
Prince Dhawan highlights that India has achieved ‘one nation, one grid’ with unified frequency and is adding a digital interoperable layer through the India Energy Stack. This infrastructure enables coordination at scale and pushes grid development to keep pace with AI evolution, which typically happens in quarters while grids evolve in decades.
EVIDENCE
He explains that India already has one nation, one grid with unified frequency, and the Energy Stack creates interoperable systems. He contrasts AI evolution in quarters versus grid evolution in decades, noting India is doing the foundational plumbing work to bridge this gap.
MAJOR DISCUSSION POINT
India as Strategic AI Hub
Argument 4
Finance sector already recognizes obsolescence risk by refusing debt financing for GPUs
EXPLANATION
Prince Dhawan points out that the finance industry has already identified the infrastructure risk patterns by providing debt financing for data center infrastructure like buildings and power sourcing, but refusing debt financing for GPUs due to obsolescence risk. This demonstrates that financial professionals understand the durability differences in AI infrastructure components.
EVIDENCE
He explains that as someone working for a non-banking financial company that finances infrastructure including data centers, GPUs are mostly financed by equity due to obsolescence risk, while debt financing is available for brick and mortar infrastructure and power sourcing.
MAJOR DISCUSSION POINT
Evolution from Centralized to Edge AI Computing
AGREED WITH
Tuan Ho, Jeff Binder
Tobias Helbig
4 arguments · 150 words per minute · 874 words · 348 seconds
Argument 1
Current data center focus represents ‘five computers’ phase, with billions of edge devices coming next
EXPLANATION
Tobias Helbig draws an analogy to IBM’s 1942 prediction of a world market for five computers, arguing that today’s focus on power-hungry data centers represents a similar limited perspective. He contends that the real transformation will come from billions of edge devices, similar to how computers evolved from mainframes to personal devices everywhere.
EVIDENCE
He references the 1942 IBM statement about five computers being right for the computers of that era, then notes the evolution to laptops, PCs, mobile phones, and computers in every device. He contrasts his brain’s 20-watt consumption with a fly’s sub-milliwatt intelligent operation.
MAJOR DISCUSSION POINT
Evolution from Centralized to Edge AI Computing
AGREED WITH
Jeff Binder
DISAGREED WITH
Vrushali Gaud
Argument 2
Edge AI devices will enable autonomous systems that learn and act in the real world
EXPLANATION
Tobias Helbig argues that AI will evolve from perception (recognizing dogs/cats) to thinking (generative AI) to creating agents that act autonomously in the real world. These edge devices will be able to learn and will fundamentally change lives and industries by bringing intelligence close to users in the physical world.
EVIDENCE
He describes the progression from perception to generative AI to autonomous agents, noting that NXP has products running meaningful LLMs on 10 watts, enabling intelligence to move to the edge and real-world applications.
MAJOR DISCUSSION POINT
Evolution from Centralized to Edge AI Computing
AGREED WITH
Jeff Binder
Argument 3
Semiconductor innovation drives AI from data centers to edge devices with minimal power consumption
EXPLANATION
Tobias Helbig explains that semiconductor innovation is enabling AI to move from power-hungry data centers to efficient edge devices. He uses the example of a marathon watch that runs for 12 days on a single charge while containing significant intelligence, demonstrating the potential for low-power AI deployment.
EVIDENCE
He provides the example of his marathon watch with 12-day battery life containing substantial intelligence, and mentions NXP products that can run meaningful LLMs on approximately 10 watts, showing the progression toward efficient edge AI.
MAJOR DISCUSSION POINT
Innovation and Technology Development
Argument 4
Hype cycles create troughs of disillusionment but real transformation ultimately occurs
EXPLANATION
Tobias Helbig acknowledges that AI follows typical hype cycles with periods of disillusionment, but emphasizes that the underlying transformation is real and will change industries and lives wherever data exists. He notes that major breakthroughs often experience these cycles, including self-driving cars and other technologies.
EVIDENCE
He references the standard hype cycle pattern and mentions self-driving cars as an example of technologies that experience hype, disillusionment, and eventual real-world transformation. He notes there was a previous DeepSeek moment in the current AI wave.
MAJOR DISCUSSION POINT
Innovation and Technology Development
AGREED WITH
Jeff Binder
Ujjwal Kumar
3 arguments · 64 words per minute · 916 words · 850 seconds
Argument 1
AI is forcing creative destruction of global infrastructure across energy, semiconductors, and critical minerals
EXPLANATION
Ujjwal Kumar argues that AI is fundamentally transforming how the world builds infrastructure, including energy systems, semiconductors, critical minerals, physical edge systems, and data centers. He positions this as a creative destruction process that requires new approaches to infrastructure development.
EVIDENCE
He cites Jensen Huang calling this ‘the largest infrastructure build-out in human history’ at Davos, and mentions the launch of FORGE by 54 countries as the first global framework for minerals that power AI.
MAJOR DISCUSSION POINT
AI Infrastructure Requirements and Investment Opportunities
Argument 2
US-India partnership represents unprecedented collaboration in AI infrastructure development
EXPLANATION
Ujjwal Kumar highlights the strategic partnership between the US and India in building AI infrastructure, including rare earth corridors, Google’s $15 billion commitment, and new subsea cables. He presents this as a model for international cooperation in AI infrastructure development.
EVIDENCE
He mentions rare earth corridors in India’s union budget, Google’s $15 billion commitment to India including a gigawatt-scale AI hub in Vizag, four new subsea cables between US and India, and Sundar Pichai’s announcements at the summit.
MAJOR DISCUSSION POINT
India as Strategic AI Hub
Argument 3
Focus should shift from AI models to the foundational infrastructure that enables AI at scale
EXPLANATION
Ujjwal Kumar argues that while much attention is given to AI models and their capabilities, the real opportunity lies in understanding and building the foundational infrastructure needed for AI-driven companies and solutions to operate at scale. He emphasizes the need to examine what AI requires rather than just what it can do.
EVIDENCE
He notes that ‘models are getting attention, the infrastructure is getting the money’ and structures the panel around having ‘the right people to figure out where is all this going and what do we need further.’
MAJOR DISCUSSION POINT
AI Infrastructure Requirements and Investment Opportunities
Participant
1 argument · 56 words per minute · 156 words · 166 seconds
Argument 1
Infrastructure development represents the real opportunity in closing the gap between AI commitments and capacity
EXPLANATION
The opening participant argues that while there are many commitments being made around AI, the real opportunity lies in building the actual infrastructure needed to support these commitments. They emphasize that closing this gap between what is promised and what can actually be delivered is where the focus should be.
EVIDENCE
The participant mentions ‘closing the gap between commitments and capacity’ and notes that ‘this is where the real opportunity lives’ in the context of infrastructure development.
MAJOR DISCUSSION POINT
AI Infrastructure Requirements and Investment Opportunities
Agreements
Agreement Points
Infrastructure businesses offer more durability and clearer problems than AI models
Speakers: Tuan Ho, Jeff Binder, Prince Dhawan
Infrastructure businesses offer more durability and clearer problem definition than pure AI models
Market evolution follows centralization to decentralization to hybrid approaches pattern
Finance sector already recognizes obsolescence risk by refusing debt financing for GPUs
All three speakers agree that physical infrastructure investments are more durable and less risky than AI models/GPUs, with the finance sector already recognizing this by providing debt financing for infrastructure but not for GPUs due to obsolescence risk
AI will evolve from centralized data centers to distributed edge computing
Speakers: Jeff Binder, Tobias Helbig
Market evolution follows centralization to decentralization to hybrid approaches pattern
Current data center focus represents ‘five computers’ phase, with billions of edge devices coming next
Edge AI devices will enable autonomous systems that learn and act in the real world
Both speakers agree that the current focus on centralized data centers represents an early phase, similar to IBM’s prediction of five computers, and that the future lies in distributed edge devices that can operate with minimal power
Grid modernization and intelligent power systems are critical for AI scaling
Speakers: Prince Dhawan, Vrushali Gaud
AI will not scale unless power systems become programmable and grids become intelligent
Clean energy math works in India due to growth demand, solar/wind potential, and grid innovation
Both speakers emphasize that intelligent, modernized grids are essential for AI infrastructure, with India’s grid innovations and clean energy potential providing a strong foundation for AI deployment
Innovation happens across the full technology stack, not just software
Speakers: Vrushali Gaud, Tuan Ho
Innovation happens across full stack from materials to applications, not just software
Power grids unchanged for decades create huge innovation opportunities for investors
Both speakers agree that while attention focuses on AI models and software, significant innovation opportunities exist in the physical infrastructure layer, including materials, construction, energy systems, and decades-old power grids
Measurability in AI creates clearer success/failure indicators than previous technology cycles
Speakers: Jeff Binder, Tobias Helbig
Measurability in AI makes success/failure clearer and faster than dot-com era
Hype cycles create troughs of disillusionment but real transformation ultimately occurs
Both speakers acknowledge that while AI follows typical hype cycles with periods of disillusionment, the measurable nature of AI deployment makes business success and failure clearer than in previous technology booms like the dot-com era
Similar Viewpoints
Both see significant investment opportunities in AI infrastructure, with Tuan focusing on supply chain vulnerabilities creating opportunities and Jeff emphasizing how AI tools give entrepreneurs more leverage with less capital
Speakers: Tuan Ho, Jeff Binder
Critical minerals supply chain vulnerability creates strategic risks and investment opportunities
AI entrepreneurs now have unprecedented leverage with smart tools reducing capital requirements
Both highlight India’s unique advantages for AI deployment, with Vrushali emphasizing the user base and leapfrog potential, while Prince focuses on the digital infrastructure and grid innovations that enable AI scaling
Speakers: Vrushali Gaud, Prince Dhawan
India offers billion-plus users, young tech-eager population, and leapfrog innovation potential
India’s digital public infrastructure and grid reforms create unique advantages for AI deployment
Both emphasize the strategic importance of US-India collaboration in building AI infrastructure, seeing it as a model for international cooperation and global-scale innovation opportunities
Speakers: Ujjwal Kumar, Tuan Ho
US-India partnership represents unprecedented collaboration in AI infrastructure development
US-India partnerships create global scale opportunities for infrastructure innovation
Unexpected Consensus
Risk of AI infrastructure overbuild despite current capacity concerns
Speakers: Jeff Binder, Vrushali Gaud
Risk of infrastructure overbuild similar to fiber buildout, but resources will eventually be consumed
Innovation happens across full stack from materials to applications, not just software
Despite widespread concerns about AI infrastructure capacity shortages, both speakers acknowledge the risk of overbuilding, similar to the fiber buildout during the dot-com era. This is unexpected given the current narrative of infrastructure scarcity
Finance sector already pricing in AI infrastructure obsolescence risk
Speakers: Prince Dhawan, Tuan Ho
Finance sector already recognizes obsolescence risk by refusing debt financing for GPUs
Infrastructure businesses offer more durability and clearer problem definition than pure AI models
The consensus that financial institutions are already sophisticated enough to distinguish between durable infrastructure and obsolescence-prone AI hardware is unexpected, showing the finance sector is ahead of the technology hype in risk assessment
Overall Assessment

The speakers show strong consensus on several key points: the superiority of infrastructure investments over AI models in terms of durability, the inevitable evolution from centralized to edge computing, the critical importance of grid modernization for AI scaling, and the strategic value of US-India partnerships. There’s also agreement on the full-stack nature of AI innovation and the measurable advantages AI has over previous technology cycles.

High level of consensus with complementary expertise – the speakers approach from different angles (investment, entrepreneurship, government policy, corporate strategy, semiconductor innovation) but arrive at similar conclusions about infrastructure durability, the importance of physical systems, and India’s strategic advantages. The consensus suggests a mature understanding of AI infrastructure requirements beyond the typical focus on models and software, with implications for more sustainable and realistic AI development strategies.

Differences
Different Viewpoints
Infrastructure investment risk and timing
Speakers: Jeff Binder, Tuan Ho
Risk of infrastructure overbuild similar to fiber buildout, but resources will eventually be consumed
Infrastructure businesses offer more durability and clearer problem definition than pure AI models
Jeff Binder warns of potential grand overbuild in AI infrastructure with ROI challenges, while Tuan Ho emphasizes the durability and clarity of infrastructure investments compared to AI models
Future of AI computing architecture
Speakers: Tobias Helbig, Vrushali Gaud
Current data center focus represents ‘five computers’ phase, with billions of edge devices coming next
Google’s $15 billion India investment focuses on full-stack AI infrastructure including data centers and subsea cables
Tobias argues current data center focus is limited and edge devices will dominate, while Vrushali represents Google’s massive investment in centralized data center infrastructure
Primary constraint for AI scaling
Speakers: Prince Dhawan, Tuan Ho
AI will not scale unless power systems become programmable and grids become intelligent
Critical minerals supply chain vulnerability creates strategic risks and investment opportunities
Prince identifies intelligent grids as the binding constraint for AI scaling, while Tuan focuses on critical minerals and supply chain vulnerabilities as key limitations
Unexpected Differences
Obsolescence risk assessment
Speakers: Prince Dhawan, Jeff Binder
Finance sector already recognizes obsolescence risk by refusing debt financing for GPUs
Risk of infrastructure overbuild similar to fiber buildout, but resources will eventually be consumed
Prince uses finance sector behavior to validate infrastructure durability, while Jeff warns of potential obsolescence from hardware breakthroughs that could make data centers instantly obsolete – an unexpected contradiction in risk assessment
Innovation timeline expectations
Speakers: Tobias Helbig, Ujjwal Kumar
Hype cycles create troughs of disillusionment but real transformation ultimately occurs
AI is forcing creative destruction of global infrastructure across energy, semiconductors, and critical minerals
Tobias emphasizes caution about hype cycles and overestimation, while Ujjwal presents AI transformation as currently happening at unprecedented scale – unexpected disagreement on transformation timing
Overall Assessment

Main disagreements center on infrastructure investment timing and risk, computing architecture evolution, and primary constraints for AI scaling

Moderate disagreement level with significant implications for investment strategies, infrastructure development priorities, and policy focus areas. While speakers agree on AI’s transformative potential, they differ substantially on optimal approaches and risk assessments

Partial Agreements
Both agree that AI will evolve from centralized to edge computing, but Jeff sees this as a general market pattern while Tobias views current data center focus as fundamentally limited thinking
Speakers: Jeff Binder, Tobias Helbig
Market evolution follows centralization to decentralization to hybrid approaches pattern
Current data center focus represents ‘five computers’ phase, with billions of edge devices coming next
Both agree India has strong clean energy potential and grid innovation, but Vrushali focuses on overall economic favorability while Prince emphasizes specific technical solutions for AI coordination
Speakers: Vrushali Gaud, Prince Dhawan
Clean energy math works in India due to growth demand, solar/wind potential, and grid innovation
India Energy Stack enables peer-to-peer energy trading and coordination at scale for data centers
Both recognize changes in the investment landscape, but Tuan sees infrastructure as more durable while Jeff sees AI tools reducing capital needs for entrepreneurs
Speakers: Tuan Ho, Jeff Binder
Mismatch exists between funding for AI models versus needed infrastructure investments
AI entrepreneurs now have unprecedented leverage with smart tools reducing capital requirements
Takeaways
Key takeaways
AI scaling requires a complete infrastructure revolution spanning critical minerals, energy systems, semiconductors, and grid modernization – representing the largest infrastructure buildout in human history
The binding constraint for AI development will be intelligent, programmable power grids rather than chips or compute capacity
The current centralized data center approach represents an early phase, with the future moving toward billions of edge AI devices requiring minimal power consumption
India emerges as a strategic AI hub due to its billion-plus user base, leapfrog innovation potential, favorable clean energy economics, and digital public infrastructure capabilities
Infrastructure investments offer more durability and clearer problem definitions than pure AI model companies, which face higher obsolescence risks
AI tools are democratizing entrepreneurship by reducing capital requirements and enabling faster go-to-market strategies with unprecedented leverage
Government financing at unprecedented scale (hundreds of billions globally) is supporting this infrastructure transformation
Innovation must occur across the full technology stack, from foundational materials to applications, not just software layers
A risk of infrastructure overbuild exists, similar to the dot-com fiber buildout, but resources will eventually be consumed as the market matures
Resolutions and action items
Google announced a $15 billion investment in India, including a gigawatt-scale AI hub in Vizag and four new subsea cables between the US and India
Launch of the Google Climate Technology Center in partnership with India’s Office of the Principal Scientific Adviser, focusing on green skilling, low-carbon materials, and sustainable aviation fuel
India Energy Stack implementation enabling peer-to-peer energy trading, allowing data centers to source power from distributed rooftop solar assets
54 countries launched the FORGE framework for AI-powered minerals coordination
US-India critical minerals corridor development to reduce dependency on China for rare earth magnets
Unresolved issues
Mismatch between current funding patterns (favoring AI models) and actual infrastructure needs
Uncertainty about hardware roadmaps and potential breakthrough technologies that could make current investments obsolete
Timeline disconnect between AI evolution (quarters) and grid infrastructure development (decades)
ROI challenges for massive infrastructure investments given rapid technological change
Permitting and regulatory issues that could block clean energy infrastructure deployment
How to balance centralized data center investments with emerging edge computing requirements
Financing models for high-obsolescence components like GPUs versus durable infrastructure
Suggested compromises
A hybrid approach combining centralized and decentralized AI computing to balance current needs with future edge requirements
Focus infrastructure investments on durable components (power, cooling, buildings) while treating compute components as equity-financed due to obsolescence risk
Leverage cross-border talent collaboration using AI tools to bridge cultural and technical gaps between the US and India
Right-size investments after the initial FOMO phase to focus on infrastructure with clear long-term value
Combine government financing with private investment to share the risks of large-scale infrastructure development
Thought Provoking Comments
AI essentially will not scale unless your power is programmable… I feel that AI, I would say, I don’t want to call it race, but the AI build will depend a lot on, not on chips, as we might think. We do have the capacity and capability world over to solve that problem. But I think the binding constraint would be grids. It would be how intelligent and resilient your grids are.
This comment reframes the entire AI infrastructure discussion by identifying grids, not chips or compute power, as the primary bottleneck. It challenges the conventional focus on semiconductors and processing power, introducing the concept of ‘programmable power’ and ‘intelligent electrons’ as fundamental requirements for AI scaling.
This shifted the conversation from traditional infrastructure concerns to a more nuanced understanding of energy systems. It prompted other panelists to consider the interconnected nature of AI infrastructure and led to deeper discussion about distributed energy resources and grid modernization as enablers of AI deployment.
Speaker: Prince Dhawan
1942, the head of IBM made a statement. There’s a world market for about five computers. And he was right, given the kind of computers he was looking at… I get this nagging feeling is this really it or are we missing what came after these five computers in what we’re discussing… my expectation, to some extent my hope, is from now on, where we sit as a company, is that this huge thing you’re already discussing with data centers is the five computers. And what is coming is these billions of edge devices which we will also see in the AI space.
This historical analogy brilliantly challenges the current centralized AI paradigm by suggesting that today’s massive data centers might be equivalent to IBM’s ‘five computers’ – technically correct but missing the bigger picture of distributed intelligence. It introduces a paradigm shift from centralized to edge computing.
This comment fundamentally altered the discussion’s trajectory, causing multiple panelists to reconsider their assumptions about AI infrastructure. Jeff Binder immediately agreed and built upon it, while Prince Dhawan connected it to real-world financing practices. It shifted the conversation from ‘feeding the beast’ of centralized AI to considering a more distributed, efficient future.
Speaker: Tobias Helbig
I think there’s a huge risk of an overbuild. It feels a lot like the leverage in terms of optimizing hardware and infrastructure is only going to get better, and it’s potentially going to leave us with actually a – I know right now we’re worried about power, we’re worried about compute, we’re worried about data centers, but I would project if we sat here two years from now, will be looking at a grand overbuild with a real challenge around ROI
This comment introduces a contrarian perspective amid the general enthusiasm for massive AI infrastructure investments. It draws parallels to the dot-com era and challenges the assumption that current infrastructure buildout is necessarily sustainable or profitable, adding crucial risk assessment to the discussion.
This sobering perspective tempered the discussion’s optimistic tone and prompted other panelists to consider the financial sustainability of current investments. It led to Prince Dhawan’s observation about GPU financing practices and Vrushali’s acknowledgment of ‘FOMO and extra investments,’ adding a layer of financial realism to the technical discussion.
Speaker: Jeff Binder
So individual retail households can essentially monetize their rooftop solar power by supplying to such data centers… Just imagine the power of that happening. So it can literally be generating livelihoods for a lot of people who may not even be in geographical proximity to the data center.
This comment transforms the discussion from technical infrastructure to socioeconomic impact, introducing the concept of democratized energy participation where ordinary citizens become stakeholders in AI infrastructure through distributed energy resources. It connects AI scaling with economic inclusion.
This shifted the conversation toward the social implications of AI infrastructure, demonstrating how technical solutions can create new economic opportunities. It reinforced the theme of distributed systems and showed how infrastructure innovation can have broader societal benefits beyond just technical efficiency.
Speaker: Prince Dhawan
Almost every aspect of artificial intelligence deployment from the foundational aspects all the way to the top of the stack are measurable. And so that’s going to make the success and failure of businesses much more clear, much sooner than it was in the case of the Doctomer [dot-com era].
This insight distinguishes the current AI boom from the dot-com era by highlighting the measurability of AI performance, suggesting that unlike the subjective nature of early internet products, AI solutions can be objectively evaluated, leading to faster market validation or rejection.
This comment added analytical depth to the discussion about investment risks and market dynamics. It provided a framework for understanding why the current AI infrastructure buildout might be different from previous technology bubbles, influencing how other panelists discussed the sustainability and evolution of AI investments.
Speaker: Jeff Binder
Overall Assessment

These key comments fundamentally shaped the discussion by challenging conventional assumptions and introducing paradigm shifts. Prince Dhawan’s focus on grids as the binding constraint reframed infrastructure priorities, while Tobias Helbig’s IBM analogy prompted a collective reconsideration of centralized versus distributed AI. Jeff Binder’s overbuild warning injected necessary skepticism into an otherwise optimistic narrative, leading to more nuanced discussions about financial sustainability. Together, these comments elevated the conversation from a typical ‘AI is great, let’s build more’ discussion to a sophisticated analysis of infrastructure evolution, economic implications, and technological paradigm shifts. The panelists built upon each other’s insights, creating a rich dialogue that moved beyond surface-level observations to explore fundamental questions about the future of AI infrastructure and its societal impact.

Follow-up Questions
How can we solve permitting issues for clean energy infrastructure to unlock the full potential of renewable energy deployment?
This was identified as a critical bottleneck that, if solved, would complete the clean energy infrastructure stack and enable full-scale deployment
Speaker: Vrushali Gaud
What will the hardware roadmaps look like for major companies like Google and NVIDIA, and how will this impact infrastructure investments?
The uncertainty around future hardware developments creates significant risks for infrastructure investments, as breakthroughs could make current data centers obsolete
Speaker: Jeff Binder
How can we develop more power-efficient AI models that require significantly less energy than current approaches?
This addresses the fundamental challenge of AI’s energy consumption by exploring whether we can achieve intelligence with dramatically lower power requirements, similar to biological systems
Speaker: Tobias Helbig
What are the specific technical requirements and challenges for implementing peer-to-peer energy trading at scale through the India Energy Stack?
While the concept was introduced, the detailed technical implementation and potential challenges of enabling millions of distributed energy resources to trade dynamically need further exploration
Speaker: Prince Dhawan
How can we better measure and predict ROI for AI infrastructure investments to avoid potential overbuilding?
There’s concern about a potential grand overbuild of AI infrastructure, and better methods are needed to assess and predict returns on these massive investments
Speaker: Jeff Binder
What specific innovations in low-carbon materials for construction can be developed and scaled for data center and infrastructure buildout?
This is a key focus area for Google’s Climate Technology Center, requiring research into low-carbon steel, cement, and other construction materials
Speaker: Vrushali Gaud
How can edge AI devices achieve the power efficiency of biological systems while maintaining meaningful intelligence capabilities?
The comparison between brain efficiency (20 watts) and fly intelligence (below 1 milliwatt) suggests there’s significant room for improvement in AI power efficiency
Speaker: Tobias Helbig
What are the optimal financing models for different layers of AI infrastructure, particularly given the obsolescence risks of various components?
The observation that GPUs get equity financing while basic infrastructure gets debt financing suggests a need for more sophisticated financing approaches
Speaker: Prince Dhawan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Scaling Trusted AI: How France and India Are Building Industrial & Innovation Bridges


Session at a glance: Summary, keypoints, and speakers overview

Summary

This transcript captures discussions from the AI Impact Summit, a collaborative event between France and India focused on building trusted AI partnerships and advancing AI for scientific discovery. The summit featured high-level participation from Prime Minister Modi and President Macron, highlighting the strategic importance of Franco-Indian cooperation in artificial intelligence.


Estelle David from Business France opened by showcasing the strong French AI delegation of about 100 companies across sectors like quantum computing, cybersecurity, and green tech. She emphasized that several concrete partnerships were signed during the summit, including agreements between French and Indian companies in engineering automation, space technology, and healthcare. Julie Huguet from LaFrenchTech noted that Paris now ranks as the third-largest AI ecosystem globally after San Francisco and New York, crediting the previous AI Summit in Paris for helping structure France’s innovation landscape.


The main panel discussion focused on trusted AI as the foundation for scaling artificial intelligence adoption. Industry leaders from Tata Communications, Thales, HCL Technologies, Dassault Systèmes, and quantum computing startup Candela shared their perspectives on building trust through transparency, explainability, data governance, and end-to-end security. They emphasized that trust cannot be a “bolt-on” feature but must be architectural and foundational to AI systems.


A second panel on AI for Science featured discussions on how artificial intelligence is revolutionizing scientific discovery by accelerating research timelines and enabling new methodologies. Speakers addressed the importance of international collaboration, the need for reproducibility in AI-generated discoveries, and concerns about maintaining scientific integrity while leveraging AI tools. The summit concluded with calls for responsible AI development that benefits all nations and bridges the global digital divide.


Keypoints

Overall Purpose

This transcript captures a multi-session AI Impact Summit focused on strengthening Franco-Indian cooperation in artificial intelligence, with particular emphasis on trusted AI development, AI for science, and building bridges between French technological expertise and Indian scale and innovation capacity.


Major Discussion Points

Franco-Indian AI Partnership and Collaboration: The summit showcased extensive cooperation between France and India, featuring over 100 French companies, strategic partnerships signed during the week (including Dacia-GT, ExoTrail-Druva Space, H-Company-St. James Hospital), and efforts to combine French deep tech excellence with Indian scale and engineering talent.


Trusted AI as Foundation for Scale: A central theme emerged that trust is essential for AI adoption at scale, with panelists defining trust through multiple dimensions including explainability, predictability, data lineage, governance, security, and compliance with regulations like EU AI Act and India’s DPDP.


AI for Scientific Discovery and Research: Extensive discussion on how AI is transforming scientific methodology – from traditional hypothesis-driven research to AI-enabled reverse engineering (defining desired properties first, then creating materials), with emphasis on the need for reproducibility, verification, and responsible use in scientific breakthroughs.


Democratization and Global Equity in AI: Multiple speakers addressed the digital divide and the need to ensure AI benefits reach underserved populations, including rural communities, developing nations, and “people at the bottom of the pyramid,” with calls for multilingual AI systems and accessible interfaces.


Institutional Innovation and Talent Development: Discussion of new models for AI research institutions (like India’s IRO), the need for indigenous research capacity, and strategies to retain and develop high-end AI talent domestically rather than losing it to international migration.


Overall Tone

The discussion maintained a consistently optimistic and collaborative tone throughout, characterized by mutual respect between French and Indian participants. The speakers demonstrated enthusiasm for technological possibilities while acknowledging challenges responsibly. The tone was professional yet warm, with frequent expressions of gratitude and partnership. There was a notable shift from technical discussions in early panels to more philosophical and policy-oriented conversations in later sessions, but the collaborative spirit remained constant throughout all sessions.


Speakers

Speakers from the provided list:


Estelle David – Representative from Business France, involved in organizing French AI delegation and partnerships


Julie Huguet – Director of LaFrenchTech Mission, supports growth of French startups in France and abroad


Moderator – Session moderator (multiple instances, likely different moderators for different sessions)


Arun Sasheesh – Associate Partner and Country Director, TNP Consultants; Panel moderator


Neelakantan Venkataraman – Vice President and Global Business Head, Cloud AI and Edge, Tata Communications


Valerian Giesz – Co-Founder and CEO of Candela (quantum computing company)


David Sadek – VP Research Technology and Innovation Global CTUI and Quantum Computing, Thales


Sandeep Kumar Saxena – Chief Growth Officer, HCL Technologies


Tanuj Mittal – Senior Director Customer Solution Experience, Dassault Systèmes


Raj Reddy – Professor, founding director of the Robotics Institute at Carnegie Mellon University, 1994 Turing Award winner


Abhay Karandikar – Professor, Secretary of Department of Science and Technology, Chair for AI for Science Working Group


Irakli Beridze – Head of Center of AI and Robotics, UNICRI (United Nations Interregional Crime and Justice Research Institute)


Antoine Petit – CEO and Chairman, CNRS France (Centre National de la Recherche Scientifique)


Joelle Pineau – Chief AI Officer (company not specified in transcript), academic background


Amit Sheth – Founder, Indian AI Research Organization


Audience – Various audience members asking questions during Q&A sessions


Additional speakers:


Saloni – Session coordinator/moderator (mentioned briefly when handing over to Arun Sasheesh)


Ekta – Session coordinator (mentioned when introducing Professor Karandikar)


Full session report: Comprehensive analysis and detailed insights

This transcript captures a comprehensive AI Impact Summit that served as a pivotal moment in Franco-Indian technological cooperation, bringing together over 100 French companies and distinguished leaders from both nations to explore the future of artificial intelligence. The week-long summit, which featured high-level participation from Prime Minister Modi and President Macron, represented far more than a diplomatic gathering—it established a concrete framework for combining French deep tech excellence with Indian scale and innovation capacity.


Strategic Franco-Indian Partnership and Concrete Outcomes

The summit’s opening presentations by Estelle David from Business France (the trade and investment agency) and Julie Huguet from LaFrenchTech revealed the substantial scope of collaboration. The French delegation encompassed diverse sectors including quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twins, and green technology. This wasn’t merely a showcase but resulted in tangible partnerships: Dacia Technology and GT Solved signed strategic agreements in engineering automation, ExoTrail and Druva Space concluded contracts for 14 satellite propulsion systems, and H-Company partnered with St. John’s Hospital in Bangalore in healthcare applications—a partnership announced by President Macron himself.


The complementarity between the two nations became a recurring theme throughout the summit—France offering deep tech excellence and scientific rigour, whilst India provides unprecedented scale with its 1.4 billion population and 200,000 startups. This partnership model represents an approach of combining complementary strengths rather than competing for dominance.


Trust as the Architectural Foundation for AI Scaling

The summit’s central insight emerged from the high-level panel discussion on trusted AI, moderated by Arun Sasheesh from TNP Consultants: trust is not merely a desirable feature but the fundamental enabler of AI adoption at scale. Sasheesh’s opening observation that “trust is the only way to scale” set the tone for a sophisticated exploration of what trust means in practical terms.


Neelakantan Venkataraman from Tata Communications provided crucial historical context, explaining how trust requirements have evolved as AI systems moved from proof-of-concept pilots to production environments. His definition—”trust means I have your back and I will not fail you”—established that trust must be foundational and architectural rather than a bolt-on feature.


The technical requirements for trustworthy AI emerged through multiple perspectives. Valerian Giesz from photonic quantum computing startup Candela outlined five pillars: traceability, predictability, verifiability, security, and accountability. Dr. David Sadek from Thales, drawing from decades of experience in critical systems, emphasised that “trust is not a label, it’s not a promise, it’s a proof,” establishing four pillars including robustness, cybersecurity, explainability, and responsibility.


Tanuj Mittal from Dassault Systèmes connected these technical requirements to real-world outcomes, using India’s UPI payment system as a powerful example where trust enabled massive scale: 21 billion transactions, translating to some ₹30 lakh crore in transaction value, demonstrating how trust can drive adoption even among digitally inexperienced users.


AI for Scientific Discovery: A Paradigm Revolution

The AI for Science panel, moderated by Professor Abhay Karandikar (Secretary of the Department of Science and Technology and chair of the AI for Science Working Group), revealed that artificial intelligence is not merely accelerating existing scientific methods but fundamentally transforming the nature of scientific inquiry itself. Professor Antoine Petit from CNRS France (which employs 35,000 people including 30,000 scientists across all fields of science) articulated this transformation most clearly: traditional science involved defining materials and then studying their properties, whilst AI-enabled science allows researchers to specify desired properties and then design materials to meet those specifications.


This paradigm shift represents what Petit called a “reverse” approach to science, moving from discovery-based to design-based methodologies. Professor Joelle Pineau from Meta provided a concrete framework for understanding this transformation, describing AI as a ranking algorithm that dramatically reduces search times in scientific discovery. Rather than testing candidate solutions sequentially based on intuition, researchers can now rank possibilities algorithmically and focus experimental resources on the most promising options.
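Pineau’s framing of AI as a ranking algorithm can be illustrated with a minimal, entirely hypothetical sketch (not code from the session): a surrogate model assigns each candidate a predicted score, and only the top-ranked candidates are sent to expensive laboratory experiments, instead of testing candidates one by one on intuition. The candidate names and scores below are invented for illustration.

```python
# Illustrative sketch: AI-assisted discovery as a ranking problem.
# A surrogate model scores candidate materials; only the top-k are
# sent to costly laboratory experiments. All names and scores are
# hypothetical, chosen only to show the workflow.

def rank_candidates(candidates, surrogate_score, k):
    """Rank candidates by predicted promise; return the top-k for lab testing."""
    ranked = sorted(candidates, key=surrogate_score, reverse=True)
    return ranked[:k]

# Hypothetical candidates with a made-up predicted property value
# (e.g., a surrogate model's estimate of ionic conductivity).
predictions = {
    "candidate-A": 0.31,
    "candidate-B": 0.87,
    "candidate-C": 0.12,
    "candidate-D": 0.74,
}

shortlist = rank_candidates(predictions, predictions.get, k=2)
print(shortlist)  # → ['candidate-B', 'candidate-D']
```

The point of the sketch is the workflow, not the model: any learned property predictor can play the role of `surrogate_score`, and the experimental budget `k` controls how aggressively the search space is pruned.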


Dr. Amit Sheth, founder of the Indian AI Research Organization (IRO), advocated for compact custom neurosymbolic models that solve specific problems deeply rather than relying on general large models. His approach emphasises explainability, safety, and alignment—qualities essential for scientific applications where understanding the reasoning process is as important as the results. IRO focuses specifically on healthcare, sustainability, environmental science, and pharma sectors, aiming to create the ecosystem conditions that have historically driven talent migration from India to Silicon Valley.


Addressing Global Digital Divides and Democratisation

A sobering theme throughout the summit was the recognition that AI’s benefits remain unevenly distributed globally. In his keynote, Professor Raj Reddy, founding director of Carnegie Mellon’s Robotics Institute and a Turing Award winner, highlighted this challenge starkly, noting that whilst the summit assumed participants were AI-enabled, people in villages have no knowledge of computers or AI and risk being left behind entirely. His vision for multilingual artificial general intelligence and “3T computers” (teraflop computational power, terabyte memory, terabit bandwidth) represents an ambitious attempt to democratise AI access.


Reddy also emphasized the concept of “human manner” AI—a human-centric approach that he noted was introduced by Prime Minister Modi. He highlighted the work of Indian startups Sarvam and Bharat Jain in developing multilingual AI solutions as examples of progress in this direction.


Irakli Beridze from UNICRI provided global context, revealing that only half the world’s countries have AI strategies or governmental allocations for AI development. This digital divide poses risks not only for equitable development but for global stability and cooperation. The UN’s development of responsible AI frameworks is being piloted in five countries: India, Kazakhstan, Nigeria, Oman, and Brazil, representing one approach to ensuring AI benefits reach underserved populations whilst maintaining ethical standards.


Institutional Innovation and Capacity Building

The summit revealed significant efforts to build indigenous AI research capacity across different models. CNRS’s virtual centre for “AI for Science, Science for AI” emphasises cooperation between AI producers and consumers across disciplines. IRO focuses on creating comprehensive support from research through to commercialisation, including IP creation, licensing, and startup incubation.


Sandeep Kumar Saxena from HCL Technologies provided a practical example of organisational transformation, describing how his entire organisation moved to AI-driven operations where voice queries replace Excel spreadsheets and PowerPoint presentations. HCL showcased seven AI solutions designed for enterprises, citizens, and governments. His emphasis on leaders embracing AI first—”if you have to embrace AI, it starts from the top”—highlighted the importance of organisational transformation alongside technical implementation.


Responsible AI Governance and Global Cooperation

The governance discussions revealed both progress and persistent challenges in developing responsible AI frameworks. Irakli Beridze’s work with law enforcement agencies demonstrated how AI governance can move from abstract principles to practical implementation. The UN’s toolkit for responsible AI use in law enforcement, being piloted in the five countries mentioned above, provides a model for translating ethical principles into operational guidelines.


Beridze quoted the UN Secretary General’s observation that “policy should be as smart as the technology it aims to guide,” capturing the challenge facing policymakers who must develop technical sophistication matching the technologies they aim to govern.


However, significant challenges remain unresolved. The reproducibility crisis in AI-generated scientific discoveries lacks established standards or methodologies for validation. These challenges require new institutional mechanisms and international cooperation frameworks.


Technological Approaches and Future Directions

The technical discussions revealed divergent approaches to AI development that reflect deeper philosophical differences about AI’s future. The debate between general large models and specialised compact models represents more than a technical choice—it reflects different visions of how AI should be deployed and controlled. Raj Reddy’s emphasis on personal sovereign edge models that operate without cloud connectivity prioritises privacy and autonomy, whilst others advocate for ecosystem approaches that leverage cloud infrastructure and partnerships.


The quantum computing perspective, represented by Candela’s work with photonic quantum computers, introduced additional complexity by emphasising the need to break down walls between quantum and AI communities. Their release of the MERLIN framework for benchmarking quantum machine learning applications represents an attempt to build shared baselines and reproducible results across these emerging technologies.


Joelle Pineau’s advocacy for open-sourcing scientific models, citing the success of the Llama models with 3 billion downloads, highlighted the tension between democratisation and commercial interests, where fundamental science models remain public whilst commercially applicable versions may become private.


Summit Sponsors and Organization

The summit was co-organized by IFKI (Indo-French Chamber of Commerce) and supported by key sponsors: Platinum sponsors CMA CGM and Total; Gold sponsors BNP Paribas, Capgemini, and Schneider Electric; and Silver sponsor MBDA, demonstrating significant private sector investment in Franco-Indian AI cooperation.


Implications for Global AI Development

The AI Impact Summit demonstrated that successful AI development requires more than technical excellence—it demands institutional innovation, international cooperation, and sustained attention to equity and access. The Franco-Indian partnership model, combining complementary strengths rather than competing for dominance, offers a potential template for other international collaborations.


The summit’s emphasis on trust as the foundation for scale provides a framework for understanding why some AI applications succeed whilst others fail to achieve widespread adoption. The UPI example demonstrates that when trust is established through transparent, reliable, and beneficial operation, even the most digitally inexperienced users will embrace new technologies.


Perhaps most significantly, the summit revealed that AI for science represents not just an acceleration of existing research methods but a fundamental transformation in how scientific inquiry is conducted. This transformation requires new institutional structures, collaboration models, and governance frameworks that are only beginning to emerge.


The path forward requires continued attention to the digital divides that risk leaving significant populations behind, whilst simultaneously pushing the boundaries of what AI can achieve in scientific discovery, economic development, and social benefit. The Franco-Indian partnership, with its combination of deep tech expertise and massive scale, represents one promising approach to meeting these dual challenges of innovation and inclusion.


Session transcript: Complete transcript of the session
Estelle David

We were also very proud yesterday to welcome the different leaders who came for the summit, and especially Prime Minister Modi and President Macron, to come on the pavilion and discover the companies and speak with our companies. So as you see, through this week, the French AI delegation was actually more than what you are seeing on the pavilion. Altogether, it was about 100 French companies who came. And actually, when you meet them, you can find them in different sectors like quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twin, and green tech. And actually, all of them are convinced, and they trust, that AI is the next frontier. So now, just to share with you what is making this week very special.

Actually, as I said, you can see that it was very intense, that's for sure. But it's not only intensity; as you will see, it's also a lot of results achieved, and results with real partnerships, real signatures, and real commitments between our two countries. I will just name a few. For AI, maybe the first: Dacia Technology and GT Solved signed a strategic partnership on Monday evening in Bangalore, at the French consulate, during the French AI Night, and that really shows the strengthening of Franco-Indian cooperation in engineering, automation, and intelligence. A second one, in a different sector, between Exotrail and Dhruva Space, who signed a major contract in the space industry to deliver 14 satellite propulsion systems, which is also a very strong symbol of the cooperation between France and India in space.

Another signature between H Company and St. John's Hospital. And a final one that I can mention is a partnership between Nord France Invest and T-Hub, which will create new bridges between one of Europe's most dynamic industrial regions and one of India's most powerful innovation ecosystems. So as you can see, with all these signatures, and I'm not just talking about AI, the dynamism between France and India is very strong. But all this would not have been possible without the strength of our collective network. Business France, the trade and investment agency, is really proud to have collaborated very closely with different partners: with La French Tech, of course, and thank you, Julie, for the long-standing partnership supporting French startups and for bringing all these startups here to India; with Numeum, the leading French digital and tech association, helping to structure and mobilize the presence of French AI champions in India; and also some other partners, Yuja Advisory, Achoo, and the co-organizers of this event and this panel at the main summit, the Franco-Thai Chamber of Commerce, the Indo-French Chamber of Commerce, and IFKI.

I'm still in my… So thank you, thank you to all of you. Now we arrive at today's session, where we are gathering some of the most influential leaders shaping the future of AI. So I won't be long, but we are really honored to welcome Julie Huguet, Director of the French Tech Mission; Arun Sasheesh, Associate Partner and Country Director, TNP Consultants; Neelakantan Venkataraman, Vice President and Global Business Head, Cloud, AI and Edge, Tata Communications; Valerian Giesz, Co-Founder and CEO of Quandela; Dr. David Sadek, VP Research, Technology and Innovation, Global CTO AI and Quantum Computing, Thales; Sandeep Kumar Saxena, Chief Growth Officer, HCL Technologies; and finally, Tanuj Mittal, Senior Director, Customer Solution Experience, Dassault Systèmes.

So we'll be really happy to hear about your experience. And before I conclude, just two thanks also to our partners, because this event has also been possible thanks to them: our platinum sponsors, CMA CGM and Total; our gold sponsors, BNP Paribas, Capgemini, and Schneider Electric; and our silver sponsor, MBDA. Again, thank you very much, all of you. Thank you to our co-organizer, IFKI, and I wish you a fruitful session. Maybe just before I end, also a big thanks to the teams, the different Business France teams, but really the whole French team all together, who worked like crazy to make this week possible.

Moderator

[Applause] Thank you very much, Estelle. We now move forward to our keynote address. It is my pleasure to invite Ms. Julie Huguet, Director of La French Tech. Julie leads one of the world's most dynamic innovation ecosystems, La French Tech, representing thousands of deep tech companies and scale-ups shaping Europe's technological leadership. Julie, over to you. [Applause]

Julie Huguet

Thank you. Good morning, everyone. Thank you. I'm Julie Huguet, Director of the French Tech Mission, so we support the growth of French startups in France and abroad. I'm truly delighted to discover the tech ecosystem here in India, a country that trains around 1.5 million engineers every year; I think it's the highest number in the world, so I'm very impressed. The AI Impact Summit is an opportunity to create more bridges between France and India, and exactly one year ago, actually, we hosted the AI Summit in Paris. That moment helped our ecosystem to structure itself. It was the opportunity to attract investment, to unlock talent, to accelerate the creation of French startups. Today, the French tech ecosystem is strong and ambitious.

According to Dealroom, the top three AI ecosystems globally are now San Francisco, New York, and Paris. We are very proud of it, and we are really sure that the AI summit helped us to build this strong ecosystem. Across France, AI is becoming a pillar of our industrial transformation. We already have major European leaders such as Mistral AI or H Company. And I'm convinced that the AI Impact Summit here in Delhi will be as valuable for India as it was for us. For the French tech ecosystem, this week in India was of course a great opportunity to showcase French innovation. But it was also an opportunity to deepen our partnership with India. Beyond business, I'm truly convinced that we share common values: trustworthiness, a low environmental footprint, positive impact for humanity.

We support innovation when it reinforces our economies, but also when it brings real progress for humanity. Of course, we are committed to making the world a better place for all of us. Innovation only makes sense when it serves the greatest number. And to give you a concrete example, President Macron announced yesterday that H Company and St. John's Hospital in Bangalore have started a collaboration to make hospitals more efficient and to contribute to saving thousands of lives. In healthcare, agriculture, climate, and many other sectors, Franco-Indian partnerships are key for innovation with real impact. This is why I was really happy the whole week to be here with outstanding French startups, companies already working with India, as Estelle told us a bit earlier, and others ready to build strong and strategic partnerships here.

And thank you. And maybe I will introduce a few of them. Agri-Co is transforming agriculture through digital tools that connect farmers directly to markets. WhiteLab Genomics uses artificial intelligence to accelerate gene therapy development. Quandela is building scalable quantum technologies that will shape the future of computing. And H Company develops advanced AI agents capable of computer use, performing complex tasks autonomously, just like a human would. For these innovations to become global leaders, international development is key. And we all know that the world is changing. Economic alliances are evolving. We see it with Canada, Latin America, the Gulf countries, and obviously here in India. Today, India represents a scale of 1.4 billion people and 200,000 startups. It's huge.

France represents deep tech excellence, scientific force, industrial capability. And I think this complementarity is powerful. In France, we like to schedule meetings weeks in advance; in India, we learned to be a bit more flexible. And honestly, innovation also requires agility, and perhaps a bit of Indian wisdom. That's what we learned as well this week. And it was, as Estelle said, a very important week for the startups who came with us. So I wish you all a good session and a great day, and thank you for being here with us this morning.

Moderator

Thank you so much, Julie. We will now move to our high-level panel discussion, where leaders from telecom, quantum, industrial AI, cloud infrastructure, and enterprise digital transformation will reflect on how our two countries can jointly accelerate trusted AI across sectors. I am pleased to introduce our moderator for this session, Mr. Arun Sasheesh, Associate Partner and Country Director, TNP Consultants. Joining Arun on the panel are an exceptional group of leaders: Neelakantan Venkataraman, Vice President and Global Business Head, Cloud, AI and Edge, Tata Communications; Valerian Giesz, Co-Founder and CEO, Quandela; Dr. David Sadek, Vice President, Research, Technology and Innovation, Global CTO, AI and Quantum Computing, Thales; Mr. Sandeep Kumar Saxena, Chief Growth Officer, HCL Technologies; and Tanuj Mittal, Senior Director, Customer Solution Experience, Dassault Systèmes. With that, ladies and gentlemen, it is my pleasure to hand over the session to our moderator.

Arun Sasheesh

Thank you, Saloni. Good morning, everyone. It's actually a pleasure and a privilege to be part of this summit and to moderate such an esteemed panel. I would like to start by thanking Business France, IFKI, and the AI Impact Summit organizers for giving us the opportunity to discuss something very important: trusted AI. So maybe I'll start with what actually happened here yesterday. Our Prime Minister talked about "Manav AI", the concept that he introduced. The French President talked about scaling, and he used UPI, the Indian payment system, as a good example of scale. And if you really think about it, there is a large element of trust involved in it. The way that we in India accepted UPI means we trust it.

And when we trust things, scale is possible. Usually when people talk about topics such as trust or safety, there's a bit of pessimism, talking about challenges. But in this particular session, I'd like to be more optimistic and present trust as the only way to scale. If we want the large corporations, the banks, the governments to adopt AI, they need to trust us. And only when these organizations adopt AI can we really achieve scale. So I'd like to set the tone with that comment. And maybe, you know, in the last five years, especially after COVID, we have been facing changes quite rapidly, right?

I mean, things are moving from one thing to another. We all started our careers elsewhere, and today we are talking about AI. So a lot of evolution in our lives as well. So I want to start from that point: introduce yourself, but also tell us the evolution that you have gone through, and how do you define trust? Maybe we'll start with you, Neel.

Neelakantan Venkataraman

Thank you. A very warm good morning to all of you, and thank you, Business France, for having me here. It's a pleasure to be here talking to all of you, and hopefully we'll have a nice interaction. So just to introduce myself, I head the cloud business for Tata Communications, which includes your general-purpose cloud, now AI cloud, edge, and dedicated private clouds for our enterprise customers. We are an international company: 80% still comes from India, and 20% comes from outside of India. As part of our cloud business, we did have a large AI/ML offering. And about four years back, when suddenly the transformer architecture came onto the scene, we didn't know about it at all.

So when it came up, we thought: what is this new architecture, and how is it going to have an impact? And then OpenAI and ChatGPT came up, and we started thinking about how we were going to apply this to our businesses internally, and also how we were going to offer it as a service to our customers. So our journey has been a journey of learning a lot in the last three years, I would say. All of us are learning, and it's been pretty fast-paced, pretty steep technically. Through the organizational levels, right from the CEO to the bottom-most, we had to learn what it would take for this new world: how do we adopt Gen AI within the company, and how do we adopt it outside and offer it to our customers.

So, a tremendous scale of change, and real potential for innovation for our customers and for the company. We established an AI COE within the company about three and a half years back. We had a lot of pilots going on within the company, and now they are in production. And similarly for our customers in the enterprise world, and beyond enterprise, for government and institutions which work very closely with government on citizen-scale projects. All of us have seen that, right? So truly, in the last five years, it's moved from POCs and pilots to production. Production at an entry level, I would say; scale is yet to be achieved.

It's production in the sense that, okay, there is a return on investment in the enterprise context, and there is a reasonable outcome for citizen-scale projects, and therefore we should start putting it into production and then, of course, scale it. And scaling means that trust has to be put on steroids. So let me talk about trust now. I would describe trust, in very simple words, as: I have your back, and I will not fail you. That's trust. Beyond that, there's nothing. So when we deploy these systems, the stack, and then the use cases and the applications, trust inherently has to be a foundational element.

It cannot be a bolt-on on top of what we have built; it has to be built in at every layer. And trust has also evolved within AI systems in the last five years. It started off, because these were POCs and pilots, with systems not really exposed to end users in a big way; it was a closed user group, and therefore trust was more of a good-to-have. But now it's foundational, more architectural in nature: every element of the architecture needs to have trust built in. And from a regulatory point of view, trust has also evolved. Earlier it was all soft guidance, saying you need to be ethical, you need to have transparency. But now it's baked into the regulatory policies and requirements, whether it is the DPDP, which has been operationalized in India, or the EU AI Act, which is already operational.

So now it is in black and white. And from a technology point of view, as I said, trust is foundational and architectural: whether you have explainability built in for the outcomes; whether the behavior of the system is predictable and explainable, so you should be able to explain it; whether it is auditable, so for the data which is fed into the models for training, the inferencing that happens, and the outcomes that result, you need a very clear data lineage. You need end-to-end governance. And we talked about edge computing: billions of devices could be inferencing at scale, and therefore, whatever happens in the cloud and whatever happens at the edge, the entire workflow and process has to have end-to-end visibility in terms of governance. And finally, resiliency is also trust; it should not break. So from Tata Communications' point of view, when we talk about trust being the bedrock and foundational element of AI, and therefore what allows it to scale when you put it into production, we mean it at every layer.

At the infra level, we build in some of the trust components, including zero-trust networking, because networking is the invisible layer which carries data across AI platforms; and at the software and platform layers, we have advanced guardrail technology, data lineage, data governance models, and end-to-end data pipelining and management. So I'll hand it back to you. Long answer, sorry for that.

Arun Sasheesh

No, no, not at all. It's very important. And, you know, for us, Tata is synonymous with trust, so I have to mention that. Well, being from a French company, I know about Quandela. But would you like to talk about Quandela, your evolution, and how you define trust from a quantum computing perspective?

Valerian Giesz

Thank you very much. Yeah, so maybe I will just introduce Quandela a little bit. It's a startup coming from a CNRS lab; we use CNRS technology to build photonic quantum computers. Actually, we are a full-stack company developing software and hardware. And now we partner with industries like Thales to move quantum from the lab to industry, to the real world, and to deploy systems. And basically, as a CEO, trust is a key pillar in our roadmap, because we need to build reliable systems; we need to demonstrate compliance and security in order to scale. That's very important for us. So, when you ask what trust means in my vision, and I'm an engineer, basically, it's easy.

First, traceability, because we need to trace the systems, the models, the data that we use for AI. Even for quantum, we use quantum artificial intelligence, we develop quantum machine learning, and for all of this it's important to trace the results and to get reproducible runs. Second is predictability: you need to know where the limits of the models are, and where the failures are as well, and this is also why it's important to investigate this. Verifiability is the third one, because we need to benchmark performance; actually, we are at this step now. At Quandela we released a framework called MERLIN for machine learning, and it's very useful.

It's used to benchmark applications and performance on quantum computers using AI techniques, and to run stress tests on the applications. Fourth, security. And the fifth pillar is accountability: how to make sure that we have clear ownership along the value chain of AI on quantum computing, between hardware providers, software providers, and certificate providers. We need clear ownership of everything. And with all of this together, we will be able to work in trust, we will be able to build trust for the end users, and we will be able to scale. That's it for me. Thank you.

Arun Sasheesh

Thank you, Valerian. And Dr. David, you are in charge of AI and quantum computing at Thales, both evolving topics. How do you see this, and what is trust for you? You have multiple topics in hand.

David Sadek

…team doing what we call friendly hacking, which actually attacks our own algorithms in a friendly way to identify their breaches and vulnerabilities, and to propose countermeasures. And by the way, this team won a challenge from our MOD, the French MOD, two years ago, because the team succeeded in retrieving sensitive data which had been used to train the system. The third pillar is explainability of our systems. If you have a digital copilot in a cockpit recommending to a pilot to make a left in 45 miles, for example, the pilot should be entitled to ask the question: why should I do that, especially if she or he had in mind to do something different? And the system should be able to answer "because there is a threat, there is a thunderstorm", and not "because layer number three of the neural net was activated at 30%".

Okay? and finally the fourth pillar which is last but not least is what we call responsibility and responsibility actually is twofold there is one stream uh which is the uh compliance of ethics principles of laws of regulation principles as you know in europe we have this ai act and talus also issued a digital ethics charter a few years ago which comes in 10 commitments actually we are really working to achieve it’s on our strategic roadmap business roadmap now and the second stream is about the uh uh full carbon footprint and energy consuming so we have teams working on frugal ai to minimize the volume of data which are used to train systems for example this is minimizing the the footprint of the technology itself ai technology And we have also the complement of this is what we call AI for green, how to use AI to minimize the footprint of applications like working on optimizing the trajectories of aircraft, for example, to minimize what we call the condensation traits which are generated by the aircrafts.

So just to conclude this first part, I would say that trust actually is not a label. It’s not a promise. It’s a proof. Things have to be proved in our business. Thank you.

Arun Sasheesh

Thank you, David. Sandeep, coming to you: we are in the services industry, and our whole operation is built on relationships and trust. So how are you coping with these new challenges, this wave of new technologies coming up? What's your take on this?

Sandeep Kumar Saxena

Thank you. Thank you for inviting me here. It's a very valid question, and I will not answer it in a very technical way, because I'm sure all the aspects around technology, architecture, and governance have been covered. So my name is Sandeep. I've been in London for the last 24 years, and I'm moving to India next month to accelerate the India business. I was managing the European business for HCL Tech; we're just about a $15 billion company providing services. And I took this job of growth markets, which is India, Middle East, Africa, and France. It gave me a very different perspective, because I'm managing about a $1.5 billion business. And now here I come into a completely different world.

And I started like a startup: I built my own systems, based on AI. Like we say, before you preach to anybody, you learn yourself. So all my systems today for the growth markets I lead are built on AI: my inside-sales engine, my business analytics, my forecasting, everything. So I have moved from analytics to reasoning, and I am hoping I will reach predictability in some way, because the agents are still not predictive; they are still reasoning. But that's where I started. Every person in my sales and delivery teams is certified on AI, and I started with myself. See, if you have to embrace AI, it starts from the top, from the leader. And we talked about trust: it starts from you. If you as a leader imbibe it, it shows: there is no Excel sheet in my world, there is no PowerPoint in my world. You ask a question using voice, you get an answer on a dashboard; I can show you right here. Of course, I will not tell you my forecast for this quarter, but you ask a question and you have it. You ask a question about a company, you get it in two and a half minutes. And that is the power of AI. Earlier we had a lot of people trying to dig data from here and from there; that doesn't exist anymore. It is two and a half minutes, whether you ask for the market approach or anything else you want to do. So, in my view: imbibe it yourself. It is an iterative process. You do not build trust just like that; you build it over a period of time. You have to be patient, you have to learn, you have to help somebody else learn, and that learning process continues over a period of time, and then you build trust.

So that's my advice to anybody. And the reason I moved to India is very exciting: it's a land of opportunity, and for me it's coming home. And we are in the NCR, which we call Delhi; it is the home of HCL Tech. So we have a very unique proposition, an advantage, in India and globally: we have what we call AI products. Very proudly, made in India, for India and for the world, which is HCL Software. We have the expertise of our global services, working with a lot of customers across the globe. So what it gave me is the opportunity to bring AI products and services together into what I call AI solutions. And at this AI Impact Summit we have launched seven solutions, not just for enterprises but for citizens and for governments as well. You are more than welcome: Hall 4, 4.5; if you have not visited, please go and see what we are talking about. These are the solutions that will help us protect ourselves: fraud detection systems, compliance systems, training systems, skilling systems, not just for enterprises. So to me, AI is about people, progress, and planet. Thank you.

Arun Sasheesh

Coming to you, Tanuj. Dassault is such a flag-bearer of French innovation. How do you see this whole evolution, and what does trust mean at Dassault?

Tanuj Mittal

Thank you, Arun, and good morning, everyone. I represent Dassault Systèmes, which champions the cause of industrial AI platforms. Now, on this point of trust: the definition, the expectation itself, has evolved, I would say, over the last several years. Five years back, for example, AI was still in silos, and the definition of trust was mostly centered around the accuracy of the output. You have a model, you feed data, you put a query; if the results are near your expectation, you are happy. But that is no longer the situation, because of widespread understanding of AI as a topic, and adoption as well. Now there are new dimensions which have been added to make it trustworthy, and there are quite a few points I wanted to highlight.

Most of them have already been covered by my fellow panelists, but for the sake of clarity, and at the cost of repetition, I will say them again. The first one is, of course, the lineage of the data. The industrial AI platform needs to ensure, by design, that the data being leveraged to solve a problem is ethical, that it has traceability, and that no mischievous data is being leveraged. With that done, when the output comes, it is credible and trustworthy for the people who are going to use it. The second point I wanted to highlight is about people in the loop. We still have a long way to go before we trust a totally automated system without human intervention. We still like to have, at least at the governance level, people in the loop who will ensure that the processing, the output given by the machines, is indeed in line with the objective for which it was created.

100% trust only in machines is still a little far off, so people in the loop is definitely what builds trust for all of us. Another aspect, particularly from an industrial AI perspective, is to simulate the result of an AI model in a real-world environment. For example, when you design a car, you design it in context: the car has to run on roads, and the condition of roads changes from place to place. And if you really need to trust a car which was, for example, developed elsewhere in the world but is being used in India, people will trust it if that car has at least been tested in the real-world environment of India as a context. You now have virtual twins not only of the product; at Dassault Systèmes you also have virtual twins of the environment.

So you can simulate how that car will behave when it actually gets on the road in Indian conditions. That builds trust. Another example is the kind of checks and balances in the model itself, so that it does not let you make a mistake, whether the mistake is unintentional or deliberate: what kind of compliance you have already built into the model. If that is robust, the chances of getting a wrong or broken output are far lower, and that builds trust. And the last point I wanted to highlight: AI applications, if they are still in silos rather than end-to-end, from conceptualization to decommissioning, produce an overall output that is less trustworthy. Compare that to a situation where, right from conception up to decommissioning, you have been able to simulate the whole process multiple times, prove it, streamline it, and then launch it.

That builds a lot of trust for the people who are actually going to build that system in the physical world, and subsequently for the people who are going to use it. So these are some of my views. Arun, back to you.

Arun Sasheesh

Thank you. Thank you, Tanuj. I think we have some more time, and I'm glad that all of you, in fact, brought out the deep strength of French innovation and French technology, and the two stalwarts of Indian scale and speed, in a way. So maybe I quickly want everybody's point of view on this: what is the change of mindset that you are looking for, to build trust and the democratization of AI at scale? Neela, quickly?

Neelakantan Venkataraman

I think I would say that the mindset change we have to move towards is the mindset of an ecosystem, because we can't do it all. For example, we partner with Thales on many of the security components which we provide as part of a solution. So it's an ecosystem play, and we need to work very closely to make sure the trust is not broken, and the trust architecture is maintained across the ecosystem.

Arun Sasheesh

Valerian?

Valerian Giesz

I think on my side, the priority should be to break the walls between quantum and AI and build a huge community. And this is also why, at Quandela, we released Merlin, a framework which aims to do that. Because that's the point: trust comes from benchmarking and reproducibility, not from one-off charts. And Merlin has been released with one very pragmatic first mission: establish trust between the AI community, AI developers, and quantum computers, a brand-new technology which is now available. And we have actually published some reproductions of papers; we are here to show quantum machine learning results in a controlled environment. We are turning scattered claims into a shared baseline, to build a community and invite people to use them.

So, yeah, my main point is: let's break the walls and share what we have learned, in order to establish trust all together and build a common baseline, especially between France and India. In France, we can develop the technologies; in India, we can scale the technologies. So we have an ecosystem and a community.

Arun Sasheesh

What’s your take, David?

David Sadek

Well, I would say that in France, we have spent decades building things that really have to work in contexts where failure is forbidden, with companies such as Thales, Dassault, and Airbus, and it has taken us decades to do this. So we live in a world of certification, of regulation, of mathematical proofs; trust has to be proved. This is very important. We cannot afford, as I said earlier, to just declare trust, to say, okay, please trust us. When you deal with critical systems, you have to prove the trust. And I used to say that trust is gained in drops and lost in buckets; this is very important. And India has been doing something equally extraordinary, I would say, in record time, with this digital infrastructure at billion-human scale, which is really extraordinary. And I think that the combination of depth and scale, between France and India, is really the very challenge here.

And to keep trust within this challenge is probably the way to go to make people adopt AI at large scale. Thank you.

Arun Sasheesh

Sandeep, for you. Can you just say one word?

Sandeep Kumar Saxena

Yeah. Just be open -minded and learn to adopt change. Adaptability. Very simple. There is nothing else.

Arun Sasheesh

And you, Tanuj?

Tanuj Mittal

Yeah, quickly. The scale is directly proportional to the trust we build in the system, for sure. And I'll build on the example you gave initially, which our Prime Minister also quoted: UPI. It was launched in 2016, and last year in December it clocked some 21 billion transactions, translating to some 30 lakh crore rupees' worth of transactions. And today, UPI is used even by the most digitally illiterate person in India; he doesn't hesitate to put his trust, and his money, in the system. So if you build the trust, then the scale comes automatically.
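[Editorial note: the figures quoted above can be sanity-checked with a quick back-of-the-envelope calculation. This is an illustration only, not part of the talk; it takes the quoted numbers at face value and assumes 1 lakh crore = 10^12 rupees.]

```python
# Back-of-the-envelope check of the UPI figures quoted above.
# 1 lakh crore = 10**5 (lakh) * 10**7 (crore) = 10**12 rupees.
transactions = 21e9            # ~21 billion transactions in the month
value_rupees = 30 * 10**12     # ~30 lakh crore rupees

avg_ticket = value_rupees / transactions
print(f"average transaction ~ {avg_ticket:.0f} rupees")  # roughly 1,400 rupees
```

An average ticket of about 1,400 rupees is consistent with UPI being used for everyday retail payments rather than only large transfers, which is the point about trust at the grassroots.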

Arun Sasheesh

Thank you, gentlemen. I think we are almost out of time. I encourage you to meet with the speakers, and thank you very much for your time.

Moderator

Thank you once again to our moderator and to all our distinguished panelists. I would now invite all the speakers to please remain on stage for a brief memento presentation by Mr. Mark Vialmopillier, and for a group photo. Ladies and gentlemen, please join me in applauding our speakers as we take this moment together. Thank you. He was the founding director of the Robotics Institute at Carnegie Mellon University, and he was instrumental in helping to create the Rajiv Gandhi University of Knowledge Technologies in India to cater to the educational needs of low-income gifted rural youth. He and Edward Feigenbaum won the 1994 Turing Award, sometimes known as the Nobel Prize of computer science, for their exemplary work in the field of artificial intelligence.

Now, I request Professor Raj Reddy to take the stage to deliver his keynote.

Raj Reddy

…phone in your pocket, it was listening to you and using that to guide your discussion. I'm hoping we'll create user-friendly interfaces so that when I speak in Telugu, you can hear it in Hindi, and when you speak in English, I can hear it in my preferred language. And I think we are there; we can get there very quickly, and it's being done already. There are two startups in India, Sarvam and BharatGen, both trying to do it. My request is that we create a quantitative, measurable metric showing that we have achieved this goal. What that means to me is that it's not enough. Already, people will say we have multilingual intelligence, we have systems that will speak, and you can speak in one language.

But it's not usable. It is not, especially if you're a person in a village and you don't even know where to begin. So the first issue is: how do we create a multilingual AGI, and how do we make sure we have measurable progress? There's a statement: if you can't measure it, you can't improve it. We need to improve the existing models, and they will probably need more computation, more memory, and more bandwidth. Fifty years ago, we created a thing called the 3M computer: a MIPS of processing power, a megabyte of memory, and a megapixel display. Today, we should create 3T computers: a teraflop of computational power, a terabyte of memory, and a terabit of bandwidth. That's what we should aim for. That means every one of us should have in our pocket an AI companion that runs what we call foundation edge models.

And right now, the models on the edge are like three billion or nine billion parameters. We're off by a factor of 100, and we need to get there, and India can lead. Where am I? How am I doing for time? Anyway, it used to be that there would be a timer up here; whenever it is time, tell me and I'll stop. OK, so that's one. The second important point I want to make is about people at the bottom of the pyramid. Most of the talks I've heard, most of the expectations, assume you are AI-enabled and can actually make effective use of AI. I come from a little village. I guarantee you, not one of them knows anything about computers or AI, and they are simply not going to benefit from this whole technology. So what we need to do, just like the agricultural revolution of M. S. Swaminathan, is figure out a way to get this technology to people at the bottom of the pyramid.

Again, I'd be happy to talk about any of these for much longer, but we only have a short time. Then, in order to do both of these things, as I said, we need teraflop, terabyte systems, and what we need are personal, sovereign edge models. Currently, if you talk to anyone, they'll say we already have access to AI. But it is not private; it is not personal and secure, because these systems are always going to the cloud to access the AI models. As soon as you do that, you have no privacy. In the future, we want systems which are personal, autonomous, and can be used to do things.

So, I'm going to talk about cognitive assistants that are always on, always working, always learning. And that is the challenge: how to get there. We have to cut them off from the grid; we cannot let them go to the grid, because then they are no longer private. Anyway, there is a whole set of issues of that kind. How much time do we have? Somebody tell me. There are three or four other topics we could talk about. One is, I had a child come and say: if AI is going to teach me and knows everything, why should I go to school? The answer to that will take longer than two minutes, but I only have two minutes.

But you can figure it out. Basically, what we need to do is teach the kid learning to learn, using AI, having a dialogue; learning to think, by teaching them critical thinking. Right now, most kids in India don't even open their mouths in classrooms; they're afraid. So we need to get over that barrier, let them talk and think, and go through critical thinking and learning to do: you have to learn how to execute. With that, I'm going to stop, but I want to leave you with one other thing which you can figure out. One of the things I remember from the Vedas is Om Shanti Shanti Shanti. Peace. One of our keynote speakers said AI-based autonomous weapons are going to destroy the world.

That's a risk. But why don't we have humane weapons? When a missile is going to hit a hospital or a school, it is easy with AI to discover that and deflect the missile. And why should we even kill the soldiers? They're innocent; they're just somebody recruited, and they're being bombed and killed. We should build humane weapons that will disable rather than destroy. There are lots of very interesting issues of this kind we need to think about. Thank you. Namaskar.

Moderator

A very good morning, ladies and gentlemen. Our next session is a panel discussion on AI for Science. The panel will be moderated by Professor Abhay Karandikar, Secretary, Department of Science and Technology, who is also the chair of the AI for Science Working Group. I would now request the panelists to please come to the dais. The panelists for the session are Mr. Irakli Beridze, Head of the Centre for AI and Robotics, UNICRI; Professor Antoine Petit, CEO and Chairman, CNRS, France; Ms. Joelle Pineau, Chief AI Officer; and Mr. Amit Sheth, Founder, Indian AI Research Organization. A very warm welcome again to the panelists. I will… Right. Group photograph.

Okay, I request all on the dais to please come forward for a group photograph. We’ll have the photograph for you on your mementos. Thank you, panelists. Thank you, Professor Karandikar. I now hand it over to our moderator, Professor Abhay Karandikar, Secretary, Department of Science and Technology, to carry forward the panel discussion. Sir, over to you.

Abhay Karandikar

Thank you. Thank you, Ekta. Distinguished panelists, colleagues, and all the members of the global scientific community: we have a very distinguished panel today. It is my pleasure to welcome you to this panel on AI for Science, which we consider a core pillar of our vision for this India AI Impact Summit. Today we stand at the threshold of a new research paradigm, and our goal is not just to witness the AI revolution but to steer it towards a more equitable, inclusive, and transparent future. In today's AI world, we are moving beyond traditional methods: AI-driven models and automated experimentation have the potential to compress decades of research into months.

The rapid advance of these technologies, however, has not so far been equitably distributed, and that is one challenge; many regions still face significant barriers. But the realm of possibility for using AI in scientific discovery continues to generate a lot of excitement. Today, we are joined by leaders who represent the entire spectrum of scientific innovation: policy makers, institution builders, and people from the governance and national research ecosystems. I look forward to the panelists' insights on the exciting possibilities in AI for science, and on how we can bridge the digital divide and build a genuinely reciprocal global scientific ecosystem. So with this, I will begin with a few questions.

I will request the panelists to answer; of course, they are free to elaborate on anything else, and then we will open the floor to the audience. Let me begin with Dr. Amit on the far end. Amit, you have been building IRO as a national-scale institution in India. How can this model help overcome the specific barriers that we have identified in this region, such as inadequate compute and fragmented data sets? And I would also like you to elaborate on how we can ensure that AI research conducted in our centres of excellence actually reaches the translational stage, addressing real-world challenges.

So if you can take five to seven minutes on this.

Amit Sheth

Hello. Yeah. Thank you very much, Professor Karandikar. This is a perfect question for me; this is why I'm here. I moved from the USA, after 44 years there, to address exactly the question you asked. Two days ago, I was on another panel, and I asked the audience this question: if I were the founder of DeepSeek, with all the funding that he had and has, could I find those 200 to 250 AI engineers and researchers that he had access to, to build DeepSeek? Out of around 100 people in the audience, three raised their hands to say, yeah, we might. Of those three, two were students. So only one mature person basically thought that we have that.

And I think that gives an answer to what we need to do. India is well on its way to growing many people who know something about AI, and they will certainly have the necessary skills. India has been big in IT services, and whatever IT services need, they will be able to supply; the skill set people have here is adequate for that. But two very important members of IRO's board, Ajay Chaudhary and Sharath Sharma, have extensively talked about, or lamented, that India has not been a product nation. It has not made global products; hardly any global brands have been developed in India.

And for that, we need more than skills. We need people at the high end of expertise. That means our own indigenous research capacity, our own ability to train innovatively. A familiar path has been: you do your bachelor's here, then you go outside. Take the example of Aravind Srinivas. He did IIT Madras, then his PhD at Berkeley (I did mine at Ohio State), and then he worked for three companies: DeepMind, OpenAI, and Google. And then he founded his own company, but that too in the U.S. We want that to be done here, right? The same ecosystem in which he got trained after leaving India, we want to provide in India.

And there are, I think, a lot of things happening. As you know, there is a 40% decrease in Indians going to the United States for studies, and that will continue for a while now; you know the rest. So, first and foremost, IRO is developing an environment to create high-end talent, innovators. By the way, IRO's founders are professors who have graduated nearly 200 top-end PhDs, so we know how to create that. Secondly, we have created a broad variety of collaborations with various universities, and we are starting to do the same with industry. And we are creating significant infrastructure to support IP creation, licensing it, and working with the corporates and startups who will make the products.

So the idea is that we'll co-innovate: we'll jointly work at IRO with the companies, with the startups, with the entrepreneurs. And we have already lined up a large number of investors, angel, seed, as well as growth stage; they are all hungry for deep-tech AI startups, and we will provide a comprehensive environment for this. Some of the founders have also done companies themselves: three of my four companies are AI companies licensing the research I did at my university, and Ramesh Jain, also a co-founder, has done more companies than I have. So we understand the entire pipeline it takes to go from lab to global products.

And this is what we are going to do for India. That was it. Thank you.

Abhay Karandikar

Now, let me switch gears and go to Professor Antoine. You have been the chairman and CEO of CNRS, France, and CNRS, as you know, operates at a scale most research organizations can only imagine. So, two questions. What structural shifts do national research and funding agencies need to make to support an interoperable scientific ecosystem that can sustain AI research beyond short-term pilots? And the added question: is there a need to build an AI-for-science platform, as a mega-science facility?

Antoine Petit

So, thanks for this invitation. Yes, two words about CNRS. CNRS in French means Centre National de la Recherche Scientifique, and you probably don't need an AI translator to understand that it means National Centre for Scientific Research. And it's true that we are a big institution: we employ more than 35,000 people, among which 30,000 are scientists, and we cover all fields of science. Clearly, AI has opened a new era in science, in some sense, because AI is not only an accelerator of existing techniques; it forces us to imagine new ways to do science. Just to illustrate this: if you look at materials science, roughly, before, you would define new materials and then study the properties of these materials.

Now you say: I would like to have a material with such properties. And then, thanks to AI, you build the material, with high probability that it will exhibit those properties. So in some sense, you see, it's not just a global acceleration; it's a reversal, in some sense, of the way we do science. And this opens a new era in which you really need talent, of course, but you also need cooperation between different sciences. And that's probably a challenge for an old institution, if I may, like CNRS: we were organized classically by discipline. We cover all sciences, including the humanities and social sciences. But you see that with AI, you really need new ways for scientists to cooperate.

And this means that, as usual, the key point is talent, and it means that we have to build ways to push people to interact. That's why we created, some years ago, a virtual centre called AI for Science, Science for AI. We have to create some kind of virtuous loop between, in some sense, producers of AI (mathematicians, computer scientists) and consumers of AI, who can come from every discipline. But the trick is that these producers will not simply produce tools or software to be used by consumers; the consumers will, in some sense, find new ways to do research.

And that's clearly something we try to do. In addition, we absolutely need computing facilities at the highest level, even if we also try, as a lot of people do, to work on more frugal AI, in order not to have a carbon footprint that would stop the development of this AI. So that's clearly a challenge for a centre like CNRS, but I know it is a challenge all over the world. And probably a key point is to really start from scientific use cases in order, as I said, to rethink the way we do science. So do we need a platform for that? I don't know. We clearly need cooperation.

That's absolutely key. And at CNRS, we have a long tradition of cooperation with India, and with DST in particular. Clearly, from my point of view, the way I feel India approaches AI, in a very pragmatic way, can be an example for us: you really try to apply AI for your citizens. And for science, I think the process should be the same. We should start from very pragmatic scientific questions in different fields and see, thanks once again to cooperation between data scientists, computer scientists, mathematicians, and colleagues from the other fields, how we can apply AI. But AI for science also has some risks. In particular, you can produce a lot of papers thanks to AI.

And it's not clear whether these papers are right or not. In some sense, we could waste all our time producing false papers with AI and then refereeing those papers, also with AI. That's a difficulty we all face, and I don't think any of us has a solution today. But let us be optimistic and think that AI for science will allow us to make progress and to discover new results, and also new ways to access those results. In particular, there are right now fascinating applications of AI to mathematics, a bit frightening in some sense, because new results have been obtained in mathematics without the help of any human. Does it mean that AI will replace scientists?

Abhay Karandikar

OK. So, do you think AI will replace scientists, or will it act as a co-scientist, a hybrid scientist? On that note, let me introduce Professor Joelle Pineau. You have an academic background, and you are now a Chief AI Officer, so you have worked in industry as well. So, just your take.

Joelle Pineau

…the properties of new crystals. And in this particular case, once you've done the ranking, you take your top-ranked candidates, and you still need to run them through a wet lab to verify the properties. Your mathematical model has some imperfections, some approximations, some errors. But by having the ability to rank the candidate solutions, you cut down the search time drastically. In the old days, you had to list the possible solutions and test them one by one in the lab, using your intuition about the order in which to test them.

But now you have a ranking algorithm that tells you in what order to test them. So, for those of you who remember the web before the PageRank algorithm, when the search to find a website of interest was incredibly long: all of a sudden you had a good ranking algorithm, and it was a complete game-changer for retrieving information. And now it's a complete game-changer in terms of finding candidate solutions to problems using AI. And so this process that I described for this one case applies across all sorts of other areas, whether it's biology, whether it's mathematical theorems, and so on and so forth. So this is not magic. There is an organization to how you take the data, how you use it in a generative model, how you do the ranking, and then how you verify your solutions.
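The generate, rank, and verify loop described above can be sketched in a few lines. In this hypothetical Python example, a toy scoring function stands in for the learned ranking model, and a boolean check stands in for wet-lab verification; the function names, the target value, and the candidate data are all invented for illustration, not any specific system.

```python
# Sketch of a generate -> rank -> verify screening loop. In practice the
# surrogate is a learned model and "verify" is a slow wet-lab experiment.

def surrogate_score(candidate):
    # Stand-in for a learned ranking model: a toy heuristic that prefers
    # candidates whose predicted property is close to a target value.
    target = 42.0
    return -abs(candidate["predicted_property"] - target)

def verify_in_lab(candidate):
    # Stand-in for expensive ground-truth verification of the property.
    return abs(candidate["true_property"] - 42.0) < 1.0

def screen(candidates, budget):
    """Rank all candidates with the cheap surrogate, then spend the
    expensive verification budget only on the top-ranked few."""
    ranked = sorted(candidates, key=surrogate_score, reverse=True)
    return [c for c in ranked[:budget] if verify_in_lab(c)]

candidates = [
    {"id": i, "predicted_property": p, "true_property": t}
    for i, (p, t) in enumerate(
        [(40.0, 39.5), (41.8, 42.2), (10.0, 11.0), (43.0, 41.7)]
    )
]
hits = screen(candidates, budget=2)
```

Without the ranking step, the same hit rate would require verifying every candidate; the surrogate confines the expensive verification to a fixed budget, which is the sense in which ranking is a game-changer here.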

And the verification process changes depending on what the domain is. In some cases, the better your model of the data (and we hear a lot about world models, the ability to predict the properties of the system), the more you can accelerate the discovery: you get a better ranking, and you have to take fewer solutions to the lab. And so that's just to give you a sense of how to use it in practice, to make this a little bit more concrete for people. Thank you.

Abhay Karandikar

Now let me come to Dr. Irakli Beridze. Irakli leads the Centre for AI and Robotics at UNICRI, the United Nations Interregional Crime and Justice Research Institute, where he manages one of the first UN programmes dedicated to AI research.

So, Irakli, what is your take on the risks versus benefits that, in your experience, AI for science can potentially pose, which other speakers have also raised?

Irakli Beridze

Thank you very much. Thank you for the question, and thanks to the organizers for putting this together and inviting me to the panel. It's a real pleasure to share the panel with the distinguished speakers who spoke before me. I will give some reflections on what we are doing, how we're looking at the discoveries of science, including social science, and how that translates into policy developments in some of the United Nations streams. So, I lead the Centre for Artificial Intelligence and Robotics for one of the UN agencies, UNICRI, and our mandate is anything related to AI: crime prevention, criminal justice, rule of law, human rights, and now AI literacy.

The centre itself opened in 2017 in The Hague, in the Netherlands, and we have a global mandate supporting law enforcement agencies all over the world to use AI in a responsible way. We develop specialized toolkits and policy frameworks for that, and we also support investigators using AI to solve concrete crimes. At the same time, we are assessing risks: how criminals and malicious actors can use artificial intelligence, and how we can support global frameworks to ensure that AI is used in a beneficial way and risks are mitigated properly. So this is the type of framework we are working in. Now, a couple of points, starting from the broad side, from the United Nations.

The UN just approved a scientific advisory board, which is an extremely positive development. Just an hour ago, there was a panel about the science of AI governance and how crucial it is, especially for policy makers and the broader audience, to understand what we are actually trying to govern; what we are hoping is that the scientific advisory board will do just that. Quoting the Secretary-General of the United Nations, who said that policy should be as smart as the technology it aims to guide: it is so true, and right now there are quite a lot of misconceptions and disconnects in that sense. Now, a little bit about law enforcement and how we are looking at it.

There are a number of aspects that could be touched upon. Several years ago, when I started the centre and we started our programmes, especially on the responsible use of AI by law enforcement, most law enforcement agencies were not using AI. We are talking about back in 2018; many didn't even know what the tools were, and we had a really small handful of examples here and there. Last summer, we held one of our regular global meetings on AI for law enforcement, this one hosted in Brazil, and we had so many use cases that we didn't know what to showcase.

On the one hand, this is a really good development: law enforcement needs to use AI, and it needs to solve problems. Right now, without AI tools, the vast amount of data that exists cannot be interpreted or put to use, but at the same time, it has to be done in a responsible way. So we are developing specialized toolkits for the responsible use of AI, and that involves multi-stakeholder dialogues: we bring together scientists, law enforcement agencies, governments, and academia to put those findings and frameworks together so that they can be translated directly into policy. India is one of the pilot countries right now.

We have five countries where this toolkit has been implemented: India, Kazakhstan, Nigeria, Oman, and Brazil. A couple of days ago, we had a meeting at the Central Bureau of Investigation, and we understood that a lot of progress has already been made in the implementation of this particular project. At the same time, we have launched a scientific project on how to ensure that the public trusts the use of AI by law enforcement, and in a few weeks we're going to issue policy recommendations and a report, which is again a very crucial form of governance for this particular field where AI is being used.

AI is being used by law enforcement, but the public has a fear of it and a misunderstanding, perhaps, of how it is being used and applied in reality. So all of this is happening there. Thank you.

Abhay Karandikar

Thank you to all the panelists. Before we open the floor, I had one quick question, not in any order, for Dr. Pineau, since you made a very important point about looking at AI as an instrument. One question I had is that there is this reproducibility crisis in science. What do you think: do we need any standard or methodology so that AI-generated discoveries are considered as real, and as reliable, as conventional ones?

Joelle Pineau

I do appreciate the question. I've been quite concerned about reproducibility more generally in the field of AI for a number of years, starting around 2018, and have published quite a few papers specifically on this topic. I'll keep it very short. I do think this is an issue, and I do think AI can be an instrument to accelerate the reproducibility of scientific findings, because specifically in those cases, the question is already there and there's often a candidate methodology; that means we can apply the wheels of AI, using reasoning methods and generative methods, to accelerate reproducibility. We've looked at doing that by running reproducibility challenges; I've run an annual reproducibility challenge around some of the AI conferences, and I think there's a lot of opportunity there.

I would emphasize that there are two necessary ingredients, which are often associated with discussions of responsible use of AI. The first is transparency: to facilitate reproducibility, it helps to have the artifacts of the scientific process be publicly available. The second is evaluation: trying to reproduce a method without being very specific about the evaluation criteria can be difficult. So I think by spending some time on transparency and evaluation, we can really facilitate this process.
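The two ingredients named here, transparency of artifacts and explicit evaluation criteria, can be made concrete in a small sketch. This hypothetical Python example pins the random seed, declares the evaluation metric up front, and records the full configuration with the result in a hashed artifact, so a second run can be checked against the first. It illustrates the practice only; it is not any specific reproducibility-challenge tooling.

```python
import hashlib
import json
import random

def evaluate(config):
    """A stand-in 'experiment': seeded, so the result is a pure
    function of the recorded configuration."""
    rng = random.Random(config["seed"])
    samples = [rng.gauss(config["mu"], config["sigma"])
               for _ in range(config["n"])]
    # The evaluation criterion is declared explicitly: mean of samples.
    return sum(samples) / len(samples)

def run_with_artifact(config):
    """Run the experiment and emit a transparent artifact: the full
    config, the declared metric, and a hash fingerprinting both."""
    score = evaluate(config)
    payload = json.dumps(
        {"config": config, "metric": "mean", "score": score},
        sort_keys=True,  # canonical form, so the hash is stable
    )
    return score, hashlib.sha256(payload.encode()).hexdigest()

config = {"seed": 7, "mu": 0.0, "sigma": 1.0, "n": 100}
score_a, fingerprint_a = run_with_artifact(config)
score_b, fingerprint_b = run_with_artifact(config)  # independent rerun
```

Because the configuration and the criterion travel with the result, a reviewer can rerun the experiment and compare fingerprints rather than argue over what "reproduced" means.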

Abhay Karandikar

Okay. Amit, your…

Amit Sheth

Yeah. So, we've gotten great things, like the productivity gains that earlier speakers mentioned, out of using very large models trained on arbitrary data. But we plan to bring to India something very unique. From the very beginning, in fact, when I had a chance to talk to the Prime Minister, we said that we need to have India make its mark in a new form of AI. And here I get the chance to explain exactly what we are doing. Instead of using a big model as an instrument or a partner, we are developing models that are very specific; we call them compact, custom, neurosymbolic models, such that we solve specific problems deeply.

IRO has taken healthcare, sustainability and environmental science, and pharma as its initial domains. Recently in pharma, a company called BenevolentAI had FDA approval of a rheumatoid arthritis drug that was developed with the use of a knowledge graph and deep learning. So in our case, we want to create a specific model for a specific problem. Neurosymbolic means we can make the models explainable, safe, aligned, and grounded, with deeper reasoning options, planning, and so on. I think this is an alternative model for AI that is likely to come up and would solve problems deeply, very specifically, with high value.
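The neurosymbolic idea described above, a statistical score constrained and explained by a knowledge graph, can be illustrated with a toy sketch. In this hypothetical Python example, a numeric score stands in for the neural component and ranks drug-disease pairs, while a tiny invented knowledge graph supplies the symbolic check: a mechanistic path from drug to disease that grounds each accepted candidate. The graph, the scores, and all names are fabricated for illustration.

```python
# Toy neurosymbolic gating: accept a candidate only if a learned-style
# score is high AND the knowledge graph contains a mechanistic path
# (drug -> target protein -> disease) that explains it.

KG = {  # tiny hypothetical knowledge graph: node -> set of neighbours
    "drug_A": {"protein_X"},
    "drug_B": {"protein_Y"},
    "protein_X": {"disease_D"},
    "protein_Y": set(),
}

def has_path(graph, start, goal):
    """Depth-first search for any explanatory path in the graph."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, set()))
    return False

def neurosymbolic_rank(candidates, graph, threshold=0.5):
    """Keep candidates whose 'neural' score clears the threshold and
    which the symbolic layer can ground with a path, so every accepted
    candidate carries an explanation."""
    accepted = [
        (drug, disease, score)
        for drug, disease, score in candidates
        if score >= threshold and has_path(graph, drug, disease)
    ]
    return sorted(accepted, key=lambda c: c[2], reverse=True)

candidates = [
    ("drug_A", "disease_D", 0.9),   # high score, grounded in the KG
    ("drug_B", "disease_D", 0.95),  # high score, but no explanatory path
    ("drug_A", "disease_D", 0.2),   # grounded, but low score
]
shortlist = neurosymbolic_rank(candidates, KG)
```

The design point is the conjunction: the statistical side alone would have ranked the ungrounded pair highest, while the symbolic side alone cannot rank at all; together they yield a shortlist where every entry is both plausible and explainable.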

Abhay Karandikar

OK. Just quickly, I wanted to ask you this: do you think AI for science can act as a bridge to solve problems in some of the priority sectors, like climate resilience, agriculture, or energy, particularly for countries which have limited experimental facilities?

Antoine Petit

I have two hours, right? Yes. No, no. Clearly, as I said before, AI will play a key role, in particular because of its ability to treat huge amounts of data. I said before that we are also consumers of AI. If I look at the domains that produce the largest amounts of data, it's not at all mathematics or computer science; it's particle physics and astronomy, and they need new techniques, based on AI, to treat this data properly. But coming back to North-South relations, as you said, I'm convinced that we need cooperation. We live in a period where sovereignty has become a buzzword, but sovereignty does not, from my point of view, mean isolation. We need to collaborate.

We need to share. We need to develop open science and open software, and clearly this is not in opposition to the will for sovereignty. To be brief, I think we need to start from use cases, whether coming from civil society or coming from science. And we, as developed countries... you know, France has a particular history with Africa, and for a long time we tried to explain to African people what they need. Now we have understood, at least I hope, that the main point is to understand what they actually need and to develop cooperation in order to fulfil those needs. So, thank you.

Abhay Karandikar

Irakli, you actually made an important point about responsible AI. What do you think about shared global ethics for AI: should AI-driven scientific breakthroughs be governed by some kind of shared ethical framework?

Irakli Beridze

Yes. OK. Thanks a lot. So there are many, many things happening at the moment in the world. On the one hand, we have the global digital divide, where a lot of countries are investing in the technology and advancing, including in education and scientific breakthroughs, and then you have quite a large portion of the world which is staying behind, or may have the potential to stay behind. For example, right now only half of the world has AI or digital strategies with governmental spending or allocations for them; the other half doesn't. So that digital divide is very dangerous, and there are numerous calls for how to minimize it. At the level of the United Nations, there are many types of streams, but I don't think it's enough, and I think a lot more has to be done.

And hopefully, through AI-driven scientific breakthroughs, and through shared platforms and shared collaboration, that divide can be bridged and everyone can benefit. When you see the title of this AI Impact Summit, it could not resonate with me more: welfare for all, happiness for all. AI should certainly benefit all, and not a select few. And I think that summits like this, and hosting a summit in the Global South, should give renewed impetus for doing all of that. Thank you very much.

Abhay Karandikar

Thank you very much. Now since we are running out of time, we just have time for two quick questions. So we can take from here. Yes, please, go ahead.

Audience

So my question is for Dr. Pineau and Dr. Sheth. You know, I work at the intersection of AI and synthetic biology. Google DeepMind released AlphaFold in the public domain, and then they announced AlphaFold 4 for drug discovery, which they have chosen to keep private. So it's very interesting that the foundational model in fundamental science was released in the public domain, but the one which has commercial applications in drug discovery, Google has chosen to keep private. My question is: do you see this as a trend, where scientific foundation models, as far as they relate to fundamental science, will be released in open source, but if they are fine-tuned for commercial applications, they will be kept private?

Do you see this as a trend, and what do we do about that, Professor Sheth, in India?

Joelle Pineau

Of course I can’t speak to DeepMind’s strategy; that belongs to them. I’ve been in deep disagreement with their open-sourcing strategy for many years, respectfully so. I do think that the circulation of scientific assets and ideas is absolutely for the benefit of all. I will say it is possible to go against that trend. In 2023 I was responsible for a language model called Llama. At the time, the industry was against open sourcing large language models; we went against that. We open sourced the Llama 1 model, Llama 2, Llama 3. Today we’re looking at over 3 billion downloads of this family of models. It’s possible to see disruptions to those trends, and I think specifically in the field of scientific research there’s so much more to be gained by sharing assets and sharing ideas than by keeping them closed.

But that takes courage; it means going against the grain, and it takes vision.

Amit Sheth

I want to express deep admiration for that approach and the trend you started in making models open source. India has to develop its own models. We just had a whole day yesterday with the pharma industry; they are our partners, and with the access to information and data they can provide, we will develop our own model for drug discovery. We are ourselves developing a very large pharma knowledge graph — we have already developed a decent one — and we will be training our own model with deep pharma and drug-related knowledge, our own version. Thank you.

Abhay Karandikar

So just one last question at the end. Please be brief — I think 30 seconds — and then I will have one of the panelists answer in another 40 seconds.

Audience

My question is —

Abhay Karandikar

Yeah, go ahead.

Audience

My question is: are there any government guidelines for responsible global AI?

Abhay Karandikar

Anyone want to answer this?

Irakli Beridze

So there are numerous guidelines on the responsible use of AI in many different domains. From our side — from the angle of the UN where I am working — we did develop guidelines, and not only guidelines but a practical framework, on the responsible use of AI in law enforcement, which is probably one of the most sensitive applications of artificial intelligence. Those guidelines — that toolkit, that practical framework — are now unveiled; they are working and have been tested in many countries, and as I mentioned, India is one of the first countries implementing them, which is very admirable. Thank you.

Abhay Karandikar

Thank you very much. With this, I think our time is up and we have to close the session. I would like to thank all the panelists. Thank you. Thank you all. I would just like to give away the mementos for the panel discussion. Thank you. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Estelle David
2 arguments · 118 words per minute · 742 words · 374 seconds
Argument 1
French AI delegation brought 100 companies across sectors like quantum, cybersecurity, and green tech to strengthen cooperation
EXPLANATION
Estelle David highlighted that the French AI delegation consisted of about 100 French companies spanning various sectors including quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twin, and green tech. All these companies believe that AI is the next frontier and came to strengthen Franco-Indian cooperation.
EVIDENCE
Specific sectors mentioned: quantum-ready photonics, secure edge AI, mobility systems, cybersecurity, digital twin, and green tech
MAJOR DISCUSSION POINT
Franco-Indian AI cooperation and business partnerships
Argument 2
Strategic partnerships signed between French and Indian companies in AI, space, and healthcare sectors
EXPLANATION
David outlined several concrete partnerships that were signed during the summit, demonstrating real results and commitments between the two countries. These partnerships span multiple sectors from AI and engineering automation to space technology and healthcare.
EVIDENCE
Specific partnerships mentioned: Dacia technology and GT solved (AI/engineering automation), Exotrail and Dhruva Space (14 satellite propulsion systems), H-Company and St. James Hospital (healthcare), North France Invest and TIAB partnership
MAJOR DISCUSSION POINT
Concrete business outcomes and strategic partnerships from the summit
Julie Huguet
2 arguments · 128 words per minute · 624 words · 291 seconds
Argument 1
France and India share complementary strengths – France has deep tech excellence, India has scale of 1.4 billion people
EXPLANATION
Huguet emphasized that France brings deep tech excellence, scientific force, and industrial capability, while India represents a massive scale of 1.4 billion people and 200,000 startups. This complementarity creates powerful opportunities for collaboration and innovation.
EVIDENCE
India trains around 1.5 million engineers every year (highest in the world), has 1.4 billion people and 200,000 startups; Paris is now the third largest AI ecosystem globally after San Francisco and New York
MAJOR DISCUSSION POINT
Complementary strengths between France and India for AI development
Argument 2
Franco-Indian partnerships are key for innovation with real impact in healthcare, agriculture, and climate
EXPLANATION
Huguet argued that partnerships between France and India are essential for creating innovation that has genuine positive impact for humanity. She emphasized that innovation only makes sense when it serves the greatest number of people and brings real progress.
EVIDENCE
H-Company and St. John’s Hospital collaboration in Bangalore to make hospitals more efficient and save thousands of lives; French startups working in agriculture (Agri-Co), gene therapy (White Lab Genomics), quantum computing (Quandela), and AI agents (Edge Company)
MAJOR DISCUSSION POINT
Innovation with real humanitarian impact through Franco-Indian collaboration
AGREED WITH
Antoine Petit, Joelle Pineau
Neelakantan Venkataraman
2 arguments · 156 words per minute · 1114 words · 428 seconds
Argument 1
Trust means ‘I have your back and I will not fail you’ and must be foundational, not a bolt-on feature
EXPLANATION
Venkataraman defined trust in simple terms as having someone’s back and not failing them. He emphasized that in AI systems, trust cannot be added as an afterthought but must be built into every layer of the architecture from the beginning.
EVIDENCE
Trust has evolved from being ‘good to have’ in POC/pilot stages to being foundational and architectural in nature as AI moves to production; regulatory frameworks like DPDP in India and EU AI Act now mandate trust requirements
MAJOR DISCUSSION POINT
Trust as a foundational architectural requirement for AI systems
AGREED WITH
David Sadek, Valerian Giesz
Argument 2
Trust requires an ecosystem approach with partnerships across the value chain
EXPLANATION
Venkataraman argued that no single organization can build trustworthy AI systems alone. It requires working closely with partners across the ecosystem to ensure trust architecture is maintained throughout the entire value chain.
EVIDENCE
TataCom partners with Thales on security components as part of their solutions; mentions need for end-to-end governance across billions of devices at edge and cloud
MAJOR DISCUSSION POINT
Collaborative ecosystem approach needed for trustworthy AI
AGREED WITH
Arun Sasheesh, Tanuj Mittal
DISAGREED WITH
Raj Reddy
Valerian Giesz
1 argument · 132 words per minute · 541 words · 244 seconds
Argument 1
Trust requires explainability, predictability, verifiability, security, and accountability
EXPLANATION
Giesz outlined five key pillars for building trust in AI and quantum systems: the ability to trace systems and data, knowing the limits and failure points of models, benchmarking performance, ensuring security, and having clear ownership along the value chain.
EVIDENCE
Quandela released MERLIN framework for machine learning to benchmark applications and performance on quantum computers and run stress tests; emphasis on reproducible runs and tracing results
MAJOR DISCUSSION POINT
Technical requirements for trustworthy AI and quantum systems
AGREED WITH
Neelakantan Venkataraman, David Sadek
David Sadek
2 arguments · 128 words per minute · 555 words · 258 seconds
Argument 1
Thales implements four pillars: robustness, cybersecurity, explainability, and responsibility including ethics compliance
EXPLANATION
Sadek described Thales’ comprehensive approach to trustworthy AI through four pillars: ensuring systems work in all conditions, protecting against cyber attacks through friendly hacking, providing explainable recommendations, and maintaining compliance with ethics and environmental standards.
EVIDENCE
Thales team won French MOD challenge by successfully retrieving sensitive training data; example of digital copilot needing to explain why it recommends a left turn; Thales issued digital ethics charter with 10 commitments; work on frugal AI and AI for green applications like optimizing aircraft trajectories
MAJOR DISCUSSION POINT
Comprehensive framework for trustworthy AI in critical systems
Argument 2
Trust is gained by drop and lost by bucket – it must be proved, not just declared
EXPLANATION
Sadek emphasized that in critical systems where failure is forbidden, trust cannot simply be declared but must be mathematically proven. He stressed that trust is built slowly over time but can be lost very quickly, making proof essential.
EVIDENCE
Decades of experience building certified systems at companies like Thales, Dassault, and Airbus; living in a world of certification, regulation, and mathematical proofs
MAJOR DISCUSSION POINT
Trust as a provable requirement rather than a promise
AGREED WITH
Neelakantan Venkataraman, Valerian Giesz
Sandeep Kumar Saxena
2 arguments · 142 words per minute · 687 words · 289 seconds
Argument 1
Leaders must embrace AI first – entire sales teams certified on AI with voice-driven analytics
EXPLANATION
Saxena argued that AI adoption must start from the top, with leaders embracing the technology first. He described building his entire business operations on AI, from inside sales engines to business analytics and forecasting, eliminating traditional tools like Excel and PowerPoint.
EVIDENCE
Built AI-powered systems for $1.5 billion growth markets business; every person in sales and delivery teams certified on AI; voice-driven dashboard that answers questions in 2.5 minutes; HCL launched 7 AI solutions for enterprises, citizens, and governments
MAJOR DISCUSSION POINT
Leadership-driven AI adoption and organizational transformation
Argument 2
Be open-minded and learn to adopt change through adaptability
EXPLANATION
Saxena emphasized that the key mindset change needed for AI adoption is simply being open-minded and learning to adapt to change. He stressed that adaptability is essential in the rapidly evolving AI landscape.
MAJOR DISCUSSION POINT
Mindset change required for AI adoption
Tanuj Mittal
2 arguments · 134 words per minute · 745 words · 332 seconds
Argument 1
People-in-the-loop governance and simulation in real-world environments build trust
EXPLANATION
Mittal argued that trustworthy AI requires human oversight at the governance level and the ability to simulate AI model results in real-world environments. He emphasized that 100% trust in fully automated systems is still far away, and people need to remain involved in the process.
EVIDENCE
Example of designing cars that need to be tested in real Indian road conditions using virtual twins of both products and environments; Dassault Systems provides virtual twins of environments for simulation
MAJOR DISCUSSION POINT
Human oversight and real-world simulation for trustworthy AI
Argument 2
Scale is directly proportional to trust built in systems, as demonstrated by UPI’s success in India
EXPLANATION
Mittal used India’s UPI payment system as an example of how building trust enables massive scale. He noted that UPI went from launch in 2016 to 21 billion transactions worth 30 lakh crore in December of the previous year, with even digitally illiterate people trusting the system with their money.
EVIDENCE
UPI clocked 21 billion transactions in December, translating to 30 lakh crore worth of money transactions; used even by the most digitally illiterate person in India
MAJOR DISCUSSION POINT
Trust as enabler of scale in digital systems
AGREED WITH
Arun Sasheesh, Neelakantan Venkataraman
Antoine Petit
3 arguments · 135 words per minute · 1028 words · 456 seconds
Argument 1
AI is not just an accelerator but forces new ways to do science, like reverse engineering materials with desired properties
EXPLANATION
Petit explained that AI represents a paradigm shift in scientific methodology, not just an acceleration of existing techniques. Instead of defining materials and then studying their properties, scientists can now specify desired properties and use AI to build materials that will likely have those properties.
EVIDENCE
CNRS covers all fields of science with 35,000 employees including 30,000 scientists; particle physics and astronomy produce the most data and need AI techniques; created virtual center called ‘AI for Science, Science for AI’
MAJOR DISCUSSION POINT
AI as a paradigm shift in scientific methodology
AGREED WITH
Joelle Pineau, Abhay Karandikar
Argument 2
AI for science requires cooperation between AI producers and consumers from different disciplines
EXPLANATION
Petit emphasized that successful AI for science requires breaking down traditional disciplinary silos and creating cooperation between mathematicians and computer scientists who produce AI tools and researchers from other fields who consume them. This creates a virtuous loop where consumers provide new requirements that drive AI development.
EVIDENCE
CNRS created virtual center for AI for Science, Science for AI to push interdisciplinary interaction; covers all sciences including humanities and social sciences
MAJOR DISCUSSION POINT
Interdisciplinary cooperation needed for AI in science
Argument 3
Sovereignty doesn’t mean isolation – need cooperation, open science and shared global ethics
EXPLANATION
Petit argued that while sovereignty has become a buzzword, it should not mean isolation. He emphasized the need for international cooperation, open science, and shared approaches, particularly in addressing global challenges and supporting developing countries.
EVIDENCE
France’s particular history with Africa and learning to understand what African countries need rather than telling them what they need; emphasis on developing cooperation to meet actual needs
MAJOR DISCUSSION POINT
International cooperation and open science despite sovereignty concerns
AGREED WITH
Joelle Pineau, Julie Huguet
Joelle Pineau
3 arguments · 171 words per minute · 836 words · 291 seconds
Argument 1
AI acts as a ranking algorithm to cut down search times drastically in scientific discovery
EXPLANATION
Pineau explained how AI functions in scientific discovery by ranking candidate solutions, similar to how PageRank revolutionized web search. Instead of testing solutions one by one using intuition, AI provides a ranking that tells researchers which candidates to test first, dramatically reducing search times.
EVIDENCE
Example of discovering properties of new crystals where AI ranks candidates and top-ranked ones are tested in wet labs; comparison to pre-PageRank web search where finding websites took much longer
MAJOR DISCUSSION POINT
AI as a ranking and optimization tool for scientific discovery
AGREED WITH
Antoine Petit, Abhay Karandikar
DISAGREED WITH
Amit Sheth
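The ranking idea Pineau describes can be sketched minimally: a surrogate model scores every candidate so that only the top-ranked few are sent for wet-lab testing, instead of testing candidates one by one on intuition. The scoring function, the candidate features, and the weights below are illustrative assumptions, not anything stated in the discussion.

```python
# Illustrative sketch of model-ranked candidate screening.
# A surrogate model (here a toy weighted sum) predicts a property score for
# each candidate; only the top-k ranked candidates go on to lab testing.

def surrogate_score(candidate):
    # Stand-in for a learned model's predicted property score
    # (hypothetical weights on hypothetical features).
    return candidate["stability"] * 0.7 + candidate["conductivity"] * 0.3

def rank_candidates(candidates, top_k):
    # Sort candidates by predicted score, best first, and keep the top k.
    ranked = sorted(candidates, key=surrogate_score, reverse=True)
    return ranked[:top_k]

candidates = [
    {"name": "crystal-A", "stability": 0.9, "conductivity": 0.2},
    {"name": "crystal-B", "stability": 0.4, "conductivity": 0.95},
    {"name": "crystal-C", "stability": 0.8, "conductivity": 0.7},
]

shortlist = rank_candidates(candidates, top_k=2)
print([c["name"] for c in shortlist])  # → ['crystal-C', 'crystal-A']
```

The point of the sketch is the shape of the workflow: the expensive step (lab testing) runs only on the model's shortlist, which is how a ranking model cuts search time, much as PageRank cut web-search time.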
Argument 2
Open sourcing scientific models benefits all, as demonstrated by the Llama models with 3 billion downloads
EXPLANATION
Pineau advocated for open sourcing AI models, particularly for scientific research, arguing that sharing assets and ideas benefits everyone. She cited her experience with the Llama language model family, which achieved over 3 billion downloads after being open sourced despite industry opposition.
EVIDENCE
Llama 1, Llama 2, and Llama 3 models achieved over 3 billion downloads; the industry was initially against open sourcing large language models in 2023
MAJOR DISCUSSION POINT
Open source approach to AI model development and sharing
AGREED WITH
Antoine Petit, Julie Huguet
DISAGREED WITH
Audience
Argument 3
Need for transparency and evaluation criteria to facilitate reproducibility in AI-driven science
EXPLANATION
Pineau emphasized that reproducibility in AI-driven scientific findings requires two key ingredients: transparency (making artifacts of the scientific process publicly available) and clear evaluation criteria. She noted the importance of addressing the reproducibility crisis in science through these mechanisms.
EVIDENCE
Has published papers on reproducibility since 2018; runs annual reproducibility challenges at AI conferences; emphasizes need for publicly available artifacts and specific evaluation criteria
MAJOR DISCUSSION POINT
Reproducibility and transparency in AI-driven scientific research
Raj Reddy
2 arguments · 113 words per minute · 950 words · 502 seconds
Argument 1
Need multilingual AGI and 3T computers (teraflop, terabyte, terabit) to reach people at bottom of pyramid
EXPLANATION
Reddy argued for creating multilingual artificial general intelligence with user-friendly interfaces that allow people to speak in their preferred language and hear responses in their language. He called for 3T computers with teraflop computational power, terabyte memory, and terabit bandwidth to make this accessible to everyone.
EVIDENCE
Current edge models are 3-9 billion bytes, off by factor of 100; mentions startups Sarvam and Bharat Jain working on multilingual AI; need for quantitative measurable metrics for progress
MAJOR DISCUSSION POINT
Democratizing AI access through multilingual capabilities and powerful edge computing
AGREED WITH
Irakli Beridze, Abhay Karandikar
Argument 2
Personal sovereign edge models required for privacy and security without cloud dependency
EXPLANATION
Reddy emphasized the need for AI systems that are personal, autonomous, and can function as cognitive assistants without connecting to the cloud. He argued that current AI systems lack privacy because they always go to the cloud, and future systems need to be cut off from the grid to maintain privacy.
EVIDENCE
Current AI access requires going to cloud, eliminating privacy; need for always-on, always-working, always-learning cognitive assistants that are personal and secure
MAJOR DISCUSSION POINT
Privacy-preserving AI through edge computing and local models
DISAGREED WITH
Neelakantan Venkataraman
Amit Sheth
2 arguments · 130 words per minute · 1046 words · 480 seconds
Argument 1
Building indigenous research capacity and high-end expertise for product innovation rather than just services
EXPLANATION
Sheth argued that India needs to move beyond being just an IT services provider to becoming a product nation with global brands. He emphasized the need for indigenous research capacity and high-end expertise, noting that while India has adequate skills for IT services, creating global products requires more advanced capabilities.
EVIDENCE
Asked audience if they could find 200-250 AI engineers like DeepSeek founder had – only 3 out of 100 raised hands; mentions 40% decrease in Indians going to US for studies; IRO founders have graduated nearly 200 top-end PhDs; examples of successful Indians like Aravind Srinivas who did IIT Madras, Berkeley PhD, worked at DeepMind/OpenAI/Google
MAJOR DISCUSSION POINT
Building indigenous AI research and product development capabilities
Argument 2
Need for compact custom neurosymbolic models that solve specific problems deeply rather than general large models
EXPLANATION
Sheth advocated for developing AI models that are very specific to particular problems rather than using large general models. He described IRO’s approach of creating compact custom neurosymbolic models that can solve specific problems deeply with explainability, safety, and grounding.
EVIDENCE
IRO focuses on healthcare, sustainability, environmental science, and pharma; example of BenevolentAI getting FDA approval for arthritis drug developed using knowledge graphs and deep learning; neurosymbolic approach enables explainability, safety, alignment, grounding, reasoning and planning
MAJOR DISCUSSION POINT
Specialized AI models for specific problem domains
DISAGREED WITH
Joelle Pineau
Irakli Beridze
2 arguments · 162 words per minute · 1140 words · 421 seconds
Argument 1
UN developing frameworks for responsible AI use in law enforcement across multiple countries including India
EXPLANATION
Beridze described UNICRI’s work developing specialized toolkits and policy frameworks for responsible AI use in law enforcement, supporting agencies globally while assessing risks from malicious actors. The frameworks involve multi-stakeholder dialogues bringing together scientists, law enforcement, governments, and academia.
EVIDENCE
Toolkit implemented in 5 pilot countries: India, Kazakhstan, Nigeria, Oman, and Brazil; recent meeting at Central Bureau of Investigation showed progress in India; center opened in 2017 in The Hague with global mandate
MAJOR DISCUSSION POINT
International frameworks for responsible AI governance in law enforcement
Argument 2
Digital divide exists where only half the world has AI strategies and governmental allocations
EXPLANATION
Beridze highlighted the significant global digital divide, noting that only half of the world’s countries have AI or digital strategies with governmental spending allocations. He emphasized that this divide is dangerous and that AI should benefit all people, not just a selected few.
EVIDENCE
Only half the world has AI or digital strategies with governmental allocations; references the summit title ‘AI Impact Summit’ emphasizing ‘welfare of all, happiness for all’
MAJOR DISCUSSION POINT
Global digital divide in AI access and capabilities
AGREED WITH
Raj Reddy, Abhay Karandikar
Arun Sasheesh
2 arguments · 124 words per minute · 652 words · 312 seconds
Argument 1
Trust is the only way to achieve scale in AI adoption – large corporations, banks, and governments need to trust AI systems before they will adopt them at scale
EXPLANATION
Sasheesh argued that trust is not just a safety concern but the fundamental enabler of scale. He used the example of UPI in India, where widespread trust led to massive adoption, demonstrating that when people trust technology, scale becomes possible.
EVIDENCE
UPI payment system in India as example of how trust enables scale; mentions that banks, governments, and large corporations need to trust AI before adopting it
MAJOR DISCUSSION POINT
Trust as enabler of AI scale and adoption
AGREED WITH
Tanuj Mittal, Neelakantan Venkataraman
Argument 2
Evolution in careers and technology requires adaptation – from traditional careers to AI-focused roles
EXPLANATION
Sasheesh noted how rapidly things have changed, especially after COVID, with professionals who started their careers in traditional fields now discussing AI. He emphasized the need to adapt to these rapid technological changes.
EVIDENCE
Reference to changes over last five years, especially post-COVID; career evolution from traditional roles to AI discussions
MAJOR DISCUSSION POINT
Professional adaptation to technological change
Moderator
1 argument · 39 words per minute · 525 words · 805 seconds
Argument 1
France and India can jointly accelerate trusted AI across multiple sectors through collaboration between industry leaders
EXPLANATION
The moderator framed the panel discussion around how the two countries can work together to advance trustworthy AI across telecom, quantum, industrial AI, cloud infrastructure, and enterprise digital transformation sectors.
EVIDENCE
Panel included leaders from telecom (TataCom), quantum (Quandela), industrial AI (Dassault), cloud infrastructure, and enterprise transformation (HCL)
MAJOR DISCUSSION POINT
Franco-Indian collaboration for trusted AI development
Audience
2 arguments · 166 words per minute · 158 words · 56 seconds
Argument 1
Concern about selective open-sourcing where fundamental science models are public but commercial applications remain private
EXPLANATION
An audience member raised concerns about Google’s strategy of releasing AlphaFold for fundamental science in the public domain while keeping the commercially applicable drug discovery version (AlphaFold 4) private, questioning if this represents a problematic trend.
EVIDENCE
Google’s AlphaFold release strategy – fundamental model public, commercial drug discovery version private
MAJOR DISCUSSION POINT
Open source vs proprietary strategies for AI models
DISAGREED WITH
Joelle Pineau
Argument 2
Need for government guidelines on responsible global AI
EXPLANATION
An audience member asked about the existence of government guidelines for responsible AI use at a global level, indicating concern about governance frameworks for AI development and deployment.
MAJOR DISCUSSION POINT
Global AI governance and regulatory frameworks
Abhay Karandikar
3 arguments · 123 words per minute · 858 words · 418 seconds
Argument 1
AI for science represents a new research paradigm that can compress decades of research into months
EXPLANATION
Karandikar argued that AI-driven models and automated experimentation have the potential to dramatically accelerate scientific discovery, moving beyond traditional research methods to achieve breakthroughs much faster than previously possible.
EVIDENCE
AI-driven models and automated experimentation capabilities; potential to compress decades of research into months
MAJOR DISCUSSION POINT
AI acceleration of scientific discovery
AGREED WITH
Antoine Petit, Joelle Pineau
Argument 2
Need to bridge digital divide and build genuinely reciprocal global scientific ecosystem
EXPLANATION
Karandikar emphasized that while AI advances are exciting, they have not been equitably distributed globally, with many regions facing significant barriers. He called for building a more inclusive and reciprocal global scientific ecosystem.
EVIDENCE
Many regions face significant barriers to AI access; advances not equitably distributed
MAJOR DISCUSSION POINT
Equitable global access to AI for scientific research
AGREED WITH
Raj Reddy, Irakli Beridze
Argument 3
AI for science can act as bridge to solve problems in priority sectors like climate, agriculture, and energy for countries with limited experimental facilities
EXPLANATION
Karandikar suggested that AI for science could help countries with limited experimental infrastructure address critical challenges in climate resilience, agriculture, and energy by providing computational alternatives to physical experimentation.
EVIDENCE
Focus on climate resilience, agriculture, and energy as priority sectors; consideration of countries with limited experimental facilities
MAJOR DISCUSSION POINT
AI as equalizer for scientific capabilities across countries
Agreements
Agreement Points
Trust must be foundational and architectural, not a bolt-on feature
Speakers: Neelakantan Venkataraman, David Sadek, Valerian Giesz
Trust means ‘I have your back and I will not fail you’ and must be foundational, not a bolt-on feature
Trust is gained by drop and lost by bucket – it must be proved, not just declared
Trust requires explainability, predictability, verifiability, security, and accountability
All three speakers emphasized that trust cannot be added as an afterthought but must be built into AI systems from the ground up, with specific technical and architectural requirements
Trust enables scale in AI adoption
Speakers: Arun Sasheesh, Tanuj Mittal, Neelakantan Venkataraman
Trust is the only way to achieve scale in AI adoption – large corporations, banks, and governments need to trust AI systems before they will adopt them at scale
Scale is directly proportional to trust built in systems, as demonstrated by UPI’s success in India
Trust requires an ecosystem approach with partnerships across the value chain
Speakers agreed that trust is the fundamental enabler of large-scale AI adoption, using UPI as a successful example of how trust leads to massive scale
Need for international cooperation and open science approaches
Speakers: Antoine Petit, Joelle Pineau, Julie Huguet
Sovereignty doesn’t mean isolation – need cooperation, open science and shared global ethics
Open sourcing scientific models benefits all, as demonstrated by the Llama models with 3 billion downloads
Franco-Indian partnerships are key for innovation with real impact in healthcare, agriculture, and climate
Speakers advocated for collaborative, open approaches to AI development that benefit global communities rather than isolated national efforts
AI represents a paradigm shift requiring new methodologies
Speakers: Antoine Petit, Joelle Pineau, Abhay Karandikar
AI is not just an accelerator but forces new ways to do science, like reverse engineering materials with desired properties
AI acts as a ranking algorithm to cut down search times drastically in scientific discovery
AI for science represents a new research paradigm that can compress decades of research into months
Speakers agreed that AI fundamentally changes how scientific research is conducted, not just accelerating existing methods but enabling entirely new approaches
Need to address digital divides and ensure equitable access
Speakers: Raj Reddy, Irakli Beridze, Abhay Karandikar
Need multilingual AGI and 3T computers (teraflop, terabyte, terabit) to reach people at bottom of pyramid
Digital divide exists where only half the world has AI strategies and governmental allocations
Need to bridge digital divide and build genuinely reciprocal global scientific ecosystem
Speakers emphasized the importance of making AI accessible to underserved populations and addressing global inequalities in AI access and capabilities
Similar Viewpoints
Both emphasized the need for organizational transformation and capacity building, with leaders driving AI adoption and India moving beyond services to product innovation
Speakers: Sandeep Kumar Saxena, Amit Sheth
Leaders must embrace AI first – entire sales teams certified on AI with voice-driven analytics
Building indigenous research capacity and high-end expertise for product innovation rather than just services
Both emphasized the need for comprehensive frameworks that include human oversight, explainability, and real-world testing for trustworthy AI in critical applications
Speakers: David Sadek, Tanuj Mittal
Thales implements four pillars: robustness, cybersecurity, explainability, and responsibility including ethics compliance
People-in-the-loop governance and simulation in real-world environments build trust
Both highlighted the strategic value of Franco-Indian cooperation, emphasizing complementary strengths and concrete business partnerships across multiple sectors
Speakers: Estelle David, Julie Huguet
French AI delegation brought 100 companies across sectors like quantum, cybersecurity, and green tech to strengthen cooperation
France and India share complementary strengths – France has deep tech excellence, India has scale of 1.4 billion people
Unexpected Consensus
Privacy through edge computing and local models
Speakers: Raj Reddy, Amit Sheth
Personal sovereign edge models required for privacy and security without cloud dependency. Need for compact custom neurosymbolic models that solve specific problems deeply rather than general large models.
Unexpected consensus between a veteran AI researcher and a startup founder on moving away from cloud-based large models toward specialized local models, representing a counter-trend to mainstream AI development
Importance of reproducibility and transparency in AI research
Speakers: Joelle Pineau, Valerian Giesz
Need for transparency and evaluation criteria to facilitate reproducibility in AI-driven science. Trust requires explainability, predictability, verifiability, security, and accountability.
Unexpected alignment between an industry AI leader and a quantum computing startup founder on the critical importance of reproducibility and benchmarking, suggesting broad recognition of this challenge across different AI domains
Ecosystem approach to AI development
Speakers: Neelakantan Venkataraman, Antoine Petit, Irakli Beridze
Trust requires an ecosystem approach with partnerships across the value chain. AI for science requires cooperation between AI producers and consumers from different disciplines. UN developing frameworks for responsible AI use in law enforcement across multiple countries, including India.
Unexpected consensus across telecom, research, and governance sectors on the need for collaborative ecosystem approaches rather than isolated development efforts
Overall Assessment

Strong consensus emerged around trust as the foundational requirement for AI scale, the need for international cooperation over isolation, AI as a paradigm shift in methodology, and the importance of addressing digital divides. Speakers from different sectors and countries aligned on core principles of responsible AI development.

High level of consensus with significant implications for AI governance and development. The agreement across industry, academia, and government representatives suggests a mature understanding of AI challenges and a shared vision for addressing them through collaborative, trust-based approaches. This consensus could facilitate more effective international cooperation and policy coordination.

Differences
Different Viewpoints
Open source vs proprietary AI models for commercial applications
Speakers: Joelle Pineau, Audience
Open sourcing scientific models benefits all, as demonstrated by the Llama models with 3 billion downloads. Concern about selective open-sourcing where fundamental science models are public but commercial applications remain private.
Pineau advocates for open sourcing AI models, particularly for scientific research, citing the success of the Llama models with 3 billion downloads. An audience member expressed concern about the trend where fundamental science models are released publicly while commercially applicable versions remain private, using Google’s AlphaFold strategy as an example.
General large models vs specialized compact models for AI development
Speakers: Amit Sheth, Joelle Pineau
Need for compact custom neurosymbolic models that solve specific problems deeply rather than general large models. AI acts as a ranking algorithm to cut down search times drastically in scientific discovery.
Sheth advocates for developing compact, custom neurosymbolic models that solve specific problems deeply with explainability and safety, focusing on particular domains like healthcare and pharma. Pineau describes AI’s role as a ranking algorithm that works with large models to accelerate scientific discovery across broad applications.
Cloud-based vs edge-based AI systems for privacy and autonomy
Speakers: Raj Reddy, Neelakantan Venkataraman
Personal sovereign edge models required for privacy and security without cloud dependency. Trust requires an ecosystem approach with partnerships across the value chain.
Reddy argues for completely autonomous edge-based AI systems that are cut off from the cloud to maintain privacy, emphasizing personal sovereign models. Venkataraman advocates for an ecosystem approach that involves cloud and edge integration with partnerships across the value chain, suggesting that complete isolation is not practical.
Unexpected Differences
Role of human oversight in AI systems
Speakers: Tanuj Mittal, Raj Reddy
People-in-the-loop governance and simulation in real-world environments build trust. Personal sovereign edge models required for privacy and security without cloud dependency.
While both speakers focus on trustworthy AI, Mittal emphasizes the continued need for human oversight and governance, stating that ‘100% trust only on machines is still a little far.’ In contrast, Reddy envisions fully autonomous AI companions that are ‘always on, always working, always learning’ without human intervention, suggesting a more automated future.
Approach to AI model development and sharing
Speakers: Amit Sheth, Joelle Pineau
Building indigenous research capacity and high-end expertise for product innovation rather than just services. Open sourcing scientific models benefits all, as demonstrated by the Llama models with 3 billion downloads.
Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenous, self-reliant capabilities and developing India’s own models for specific domains, while Pineau advocates for open sharing and collaboration through open-source models. This reflects a tension between sovereignty/self-reliance and global collaboration approaches.
Overall Assessment

The main areas of disagreement center around fundamental approaches to AI development: open vs proprietary models, general vs specialized AI systems, cloud vs edge computing, human oversight vs automation, and national self-reliance vs global collaboration. These disagreements reflect deeper tensions between different visions for AI’s future.

Moderate level of disagreement with significant implications for AI development strategies. While speakers generally agree on the importance of trust, transparency, and beneficial AI, they differ substantially on implementation approaches, governance models, and the balance between collaboration and sovereignty. These disagreements could influence policy directions and international cooperation frameworks for AI development.

Partial Agreements
All speakers agree that trust is foundational and requires multiple technical pillars including explainability, security, and accountability. However, they differ in their specific frameworks – Sadek focuses on four pillars for critical systems, Giesz emphasizes five pillars including verifiability and predictability, while Venkataraman stresses ecosystem-wide governance and end-to-end visibility.
Speakers: David Sadek, Valerian Giesz, Neelakantan Venkataraman
Thales implements four pillars: robustness, cybersecurity, explainability, and responsibility including ethics compliance. Trust requires explainability, predictability, verifiability, security, and accountability. Trust means ‘I have your back and I will not fail you’ and must be foundational, not a bolt-on feature.
Both speakers agree on the need for international cooperation and addressing global inequalities in AI access. However, Petit focuses on maintaining open science and cooperation despite sovereignty concerns, while Beridze emphasizes the urgent need to address the digital divide where half the world lacks AI strategies and governmental support.
Speakers: Antoine Petit, Irakli Beridze
Sovereignty doesn’t mean isolation – need cooperation, open science and shared global ethics. Digital divide exists where only half the world has AI strategies and governmental allocations.
Takeaways
Key takeaways
Trust is the fundamental enabler for AI scaling – without trust from corporations, banks, and governments, AI cannot achieve widespread adoption.
Franco-Indian AI cooperation represents a powerful complementarity where France provides deep tech excellence and India offers scale (1.4 billion people).
AI for science is creating a paradigm shift from traditional research methods to AI-driven discovery that can compress decades of research into months.
Trust in AI systems must be built foundationally across all layers (infrastructure, platform, application) rather than as an add-on feature.
The digital divide remains a critical challenge, with only half the world having AI strategies and governmental allocations.
Open sourcing of scientific AI models benefits global progress, as demonstrated by successful examples like Llama with 3 billion downloads.
AI requires an ecosystem approach with partnerships across the value chain rather than isolated development.
Personal sovereign edge models are needed to ensure privacy and security without cloud dependency.
Responsible AI governance frameworks are being developed and implemented globally, including UN guidelines for law enforcement use.
Resolutions and action items
Strategic partnerships signed between French and Indian companies in AI, space, and healthcare sectors during the summit.
India identified as a pilot country for UN responsible AI toolkit implementation in law enforcement.
IRO (Indian AI Research Organization) established to create high-end AI talent and indigenous research capacity in India.
CNRS created a virtual center called ‘AI for Science, Science for AI’ to foster cooperation between AI producers and consumers.
Business France and partners committed to continued collaboration supporting French startups in India.
Development of compact custom neurosymbolic models for specific domains like healthcare, sustainability, and pharma.
Implementation of an AI COE (Center of Excellence) within Tata Communications, with pilots moving to production.
Unresolved issues
How to effectively bridge the global digital divide where half the world lacks AI strategies.
Risk of AI producing false scientific papers, and the challenge of verification without clear solutions identified.
Whether AI will replace scientists or act as co-scientists – the relationship remains undefined.
The reproducibility crisis in AI-generated scientific discoveries lacks established standards or methodologies.
Tension between open sourcing fundamental science models versus keeping commercially applicable models private.
How to reach people at the bottom of the pyramid who have no knowledge of computers or AI.
Challenge of finding sufficient high-end AI engineers and researchers (only 1 out of 100 people could identify 200-250 qualified engineers).
The need for a shared global ethics framework for AI-driven scientific breakthroughs remains unaddressed.
Suggested compromises
Sovereignty in AI development doesn’t require isolation – countries can maintain sovereignty while engaging in international cooperation and open science.
A hybrid approach to AI model development where fundamental science models are open-sourced while commercial applications may remain private.
People-in-the-loop governance as a middle ground between full automation and human control in AI systems.
Ecosystem partnerships where no single entity tries to do everything – collaboration across the value chain for trust and scaling.
A gradual transition from proof of concept to production to scale, allowing trust to be built incrementally.
Balance between deep tech excellence (France) and scale capabilities (India) through strategic partnerships rather than competition.
Thought Provoking Comments
Trust is not a label. It’s not a promise. It’s a proof. Things have to be proved in our business.
This comment cuts through the abstract discussions about trust to establish a concrete, actionable definition. It shifts the conversation from philosophical concepts to practical implementation requirements, emphasizing that trust in AI systems must be demonstrable and verifiable rather than merely claimed.
This statement became a foundational principle that other panelists referenced and built upon. It elevated the discussion from general trust concepts to specific implementation strategies, with subsequent speakers addressing how to actually prove trustworthiness through explainability, auditability, and governance frameworks.
Speaker: David Sadek (Thales VP Research)
Trust has also evolved within AI system… it started off by, you know, because it was a POC pilot, so you’re not really exposing it to the end users in a big way… But now it’s moved to foundational, it’s more architectural in nature, right? Every element of the architecture needs to have trust built in.
This observation provides crucial historical context showing how trust requirements have fundamentally changed as AI moved from experimental to production systems. It highlights that trust is no longer an afterthought but must be embedded at the architectural level from the beginning.
This comment established the evolutionary framework for understanding trust in AI, helping other panelists contextualize their own experiences and solutions. It shifted the discussion from current challenges to understanding how we arrived at this point and what it means for future development.
Speaker: Neelakantan Venkataraman (Tata Communications)
The scale is directly proportional to the trust we built in the system… UPI, when it was launched in 2016, last year in December, it clocked some 21 billion transactions… if you build the trust then the scale comes automatically
This comment brilliantly connects the abstract concept of trust to concrete, measurable outcomes using India’s UPI system as a powerful real-world example. It demonstrates how trust directly enables mass adoption and provides a tangible model for AI scaling.
This insight reframed the entire discussion by positioning trust not as a constraint or compliance requirement, but as the primary enabler of scale. It connected the technical discussions to the summit’s broader theme of scaling AI for societal benefit, influencing how other speakers approached the relationship between trust and adoption.
Speaker: Tanuj Mittal (Dassault Systèmes)
AI opened a new era in science… before you define new materials and then you study the properties of these materials. Now you say, I would like to have a material with such properties. And then thanks to AI, you will build the material… it’s not that global acceleration. It’s a reverse, in some sense, of a way to do science.
This comment reveals a fundamental paradigm shift in scientific methodology enabled by AI – moving from discovery-based to design-based science. It’s profound because it shows AI isn’t just accelerating existing processes but completely inverting the traditional scientific approach.
This observation shifted the AI for Science panel from discussing AI as a tool to recognizing it as a transformative force that changes the very nature of scientific inquiry. It influenced subsequent discussions about the need for new institutional structures and collaboration models to support this reversed scientific methodology.
Speaker: Antoine Petit (CNRS France CEO)
We want that to be done here, right? So the same ecosystem in which he got trained after leaving India, we want to provide that in India… we have the understanding of that entire pipeline it takes from lab to global products.
This comment addresses a critical gap in India’s innovation ecosystem – the ability to retain and nurture talent domestically rather than losing it to foreign ecosystems. It articulates a vision for creating indigenous innovation capacity that can compete globally.
This statement highlighted the strategic importance of building domestic research and innovation infrastructure, influencing the discussion toward practical solutions for talent retention and indigenous capability building. It connected individual career trajectories to national innovation strategy.
Speaker: Amit Sheth (Indian AI Research Organization)
Policy should be as smart as the technology it aims to guide… right now there is quite a lot of sort of misconceptions and misconnects in that sense.
This quote from the UN Secretary General, shared by Beridze, captures a fundamental challenge in AI governance – the gap between technological advancement and policy understanding. It highlights how governance frameworks often lag behind or misunderstand the technologies they attempt to regulate.
This comment introduced the governance perspective into the scientific discussion, emphasizing the need for better science-policy interfaces. It influenced the conversation toward considering how scientific breakthroughs in AI need to be accompanied by equally sophisticated governance frameworks.
Speaker: Irakli Beridze (UNICRI)
Innovation only makes sense when it serves the greatest number… Franco-Indian partnerships are key for innovation with real impact.
This comment establishes a philosophical foundation that innovation should be inclusive and serve broad societal needs rather than narrow commercial interests. It connects technological advancement to social responsibility and international cooperation.
This statement set the tone for the entire summit by establishing that the goal isn’t just technological advancement but equitable impact. It influenced how subsequent speakers framed their contributions in terms of societal benefit and international collaboration.
Speaker: Julie Huguet (LaFrenchTech Director)
Overall Assessment

These key comments fundamentally shaped the discussion by establishing several critical frameworks: trust as a provable, architectural requirement rather than a promise; the recognition that AI is not just accelerating science but reversing traditional methodologies; the understanding that scale and trust are directly proportional; and the emphasis that innovation must serve broad societal needs. The comments created a progression from abstract concepts to concrete implementation strategies, while consistently connecting technical discussions to broader themes of societal impact, international cooperation, and equitable development. The most impactful insight was the reframing of trust from a constraint to an enabler of scale, which influenced how all subsequent speakers approached the relationship between technical excellence and mass adoption.

Follow-up Questions
How do we create a quantitative measurable matrix to achieve multilingual AGI goals?
Raj Reddy emphasized that ‘if you can’t measure it, you can’t improve it’ and stressed the need for measurable progress in creating multilingual artificial general intelligence that can serve people in villages who don’t know where to begin with technology.
Speaker: Raj Reddy
How do we get AI technology to people at the bottom of the pyramid who have no knowledge of computers or AI?
Raj Reddy highlighted that most discussions assume people are AI-enabled, but people in villages have no knowledge of computers or AI and won’t benefit from the technology without specific solutions to reach them.
Speaker: Raj Reddy
How do we develop personal sovereign edge models that are private and secure without going to the cloud?
Raj Reddy pointed out that current AI systems require cloud access which compromises privacy, and there’s a need for systems that are personal, autonomous, and can work as cognitive assistants without grid connectivity.
Speaker: Raj Reddy
If AI is going to teach me and knows everything, why should I go to school?
This represents a fundamental question about the role of education in an AI-driven world that Raj Reddy acknowledged would take longer to answer but is crucial for understanding how education needs to evolve.
Speaker: Raj Reddy (quoting a child’s question)
Why don’t we develop humane weapons that disable rather than destroy using AI?
Raj Reddy suggested that instead of autonomous weapons that destroy, AI could be used to create weapons that deflect missiles from hospitals/schools or disable soldiers rather than kill them, raising important ethical questions about AI in warfare.
Speaker: Raj Reddy
How can we find 200-250 AI engineers and researchers needed to build systems like DeepSeek in India?
Amit Sheth highlighted the talent gap in India by noting that when he asked an audience of 100 people if they could find the engineering talent that DeepSeek had access to, only three people raised their hands, indicating a critical need for high-end AI talent development.
Speaker: Amit Sheth
How do we ensure AI-generated discoveries are as reliable as traditional scientific discoveries?
This addresses the reproducibility crisis in science and the need for standards or methodologies to validate AI-generated scientific discoveries, which is crucial for maintaining scientific integrity.
Speaker: Abhay Karandikar
Will AI replace scientists or act as co-scientists?
Antoine Petit raised concerns about AI producing mathematical results without human help, questioning whether AI will replace scientists entirely or work alongside them, which has fundamental implications for the future of scientific research.
Speaker: Antoine Petit
How do we prevent AI from producing false papers that are then peer-reviewed by AI, creating a cycle of misinformation?
Antoine Petit identified a risk where AI could generate numerous papers of questionable validity, and if these are also reviewed by AI systems, it could create a dangerous cycle of false scientific information.
Speaker: Antoine Petit
Do we need a mega science facility or AI for science platform?
This question addresses whether there’s a need for large-scale infrastructure specifically designed to support AI for science research, similar to other mega science facilities.
Speaker: Abhay Karandikar
How do we ensure public trust in AI use by law enforcement?
Irakli Beridze mentioned launching a scientific project on ensuring public trust in AI use by law enforcement, indicating this is an active area requiring further research and policy development.
Speaker: Irakli Beridze
Will scientific foundation models be open source while commercial applications remain private?
An audience member questioned whether there’s a trend where fundamental science AI models are released publicly but commercial applications are kept private, using Google’s AlphaFold as an example, which has implications for scientific collaboration and access.
Speaker: Audience member
What government guidelines exist for responsible global AI?
This question seeks clarification on existing governmental frameworks for responsible AI development and deployment on a global scale, indicating a need for better understanding of current regulatory landscapes.
Speaker: Audience member

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Panel Discussion AI in Healthcare India AI Impact Summit

Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on the opportunities and challenges of implementing AI in healthcare systems, particularly in India and other emerging markets. The panel featured Dr. Sabine Kapasi, Chris Ciauri (Managing Director at Anthropic), and Dr. Aditya Yad (India Relations Advisor at Innovaud), who explored how AI can transform healthcare delivery in low- and middle-income countries.


Chris Ciauri highlighted two major areas where AI can create significant impact: reducing administrative burden in developed countries like the US, where doctors spend only 30% of their time on patient care, and improving healthcare access in India, where primary care visits average just two minutes. Anthropic has opened operations in Bengaluru and trained their Claude model on 12 Indic languages to address multilingual barriers in healthcare delivery. The company emphasizes safety in healthcare AI applications, ensuring their models acknowledge uncertainty rather than providing overconfident but potentially incorrect responses.


Dr. Aditya Yad discussed Switzerland’s perspective, noting the country’s leadership in global innovation and its expensive but high-quality healthcare system. He explained how AI is being integrated into drug discovery, manufacturing, and clinical processes to reduce costs and improve efficiency. The recent Switzerland-India free trade agreement includes a commitment to invest $100 billion in India over 15 years, with healthcare as a key sector.


The panelists identified several promising use cases for AI in healthcare, including administrative task automation, drug discovery acceleration, diagnostic improvements, and workflow optimization. They emphasized that AI should serve as a preparation tool for clinicians rather than replacing medical judgment. Key challenges discussed included building trust around medical data usage, training healthcare workforces for AI adoption, and ensuring equitable access to AI-powered healthcare solutions across different economic contexts.


Keypoints

Major Discussion Points:

AI’s transformative potential in healthcare with emphasis on safety: The panelists discussed how AI can revolutionize healthcare delivery, from reducing administrative burden (70% of clinicians’ time in the US) to improving access in countries like India, while emphasizing the critical need for safety-first approaches that acknowledge uncertainty rather than providing overconfident but potentially wrong answers.


Geographic opportunities and challenges in AI healthcare adoption: The conversation explored how different regions face distinct healthcare challenges – administrative burden in developed countries versus access issues in India and the Global South – and how AI solutions must be tailored accordingly, including multilingual capabilities and culturally appropriate implementations.


Drug discovery and manufacturing transformation through AI: Participants highlighted how AI is revolutionizing pharmaceutical development, reducing drug development timelines from eight weeks to eight hours in some cases, and enabling more efficient biomanufacturing processes with better yields and lower costs.


Workforce enablement and the human-AI collaboration model: The discussion emphasized that AI should serve as a preparation tool while clinicians retain judgment responsibilities, focusing on how to train healthcare workers to effectively leverage AI while maintaining appropriate skepticism and professional oversight.


Data trust, policy considerations, and market dynamics: The conversation addressed critical concerns around medical data privacy, the need for public trust in AI systems, and the evolving landscape between large language models versus smaller, targeted healthcare applications.


Overall Purpose:

The discussion aimed to explore near-term opportunities for AI adoption in healthcare, particularly in India and other emerging markets, while identifying strategies to strengthen healthcare systems for real-world AI implementation over the next 3-5 years.


Overall Tone:

The tone was consistently optimistic and collaborative throughout, with participants expressing genuine excitement about AI’s potential to transform healthcare globally. The conversation maintained a professional yet accessible atmosphere, with speakers acknowledging both opportunities and challenges while emphasizing the importance of responsible AI development. There was notable mutual respect between the technologist, clinician, and policy perspectives, creating a balanced and constructive dialogue that remained forward-looking and solution-oriented from start to finish.


Speakers

Speakers from the provided list:


Dr. Sabine Kapasi: Clinician/surgeon, discussion moderator. Has experience practicing as a surgeon and seeing 200 patients per day in OPD (outpatient department).


Chris Ciauri: Managing Director at Anthropic, leads global expansion across EMEA, APAC, and Latin America. Has over 25 years of experience scaling SaaS and cloud businesses, including senior leadership roles at Salesforce and Google Cloud; he was previously CEO of Unily. Brings expertise in enterprise AI adoption and national technology growth.


Dr. Aditya Yad: India Relations Advisor at Innovaud (the innovation and investment promotion agency of the Canton of Vaud, Switzerland). Based in Lausanne, he is a biotechnologist who works at the intersection of tech, investment, and innovation. He facilitates Switzerland-India collaboration, supports startups, and enables market entry for Indian companies into Swiss and European innovation ecosystems. He also serves as a policymaker and legislator in Switzerland.


Additional speakers:


– No additional speakers were identified beyond those in the provided speakers names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive discussion on AI in healthcare brought together diverse perspectives from technology, clinical practice, and policy-making to explore the transformative potential of artificial intelligence in healthcare systems, particularly focusing on opportunities in India and other emerging markets. The panel featured Dr. Sabine Kapasi, a surgeon with extensive experience in resource-constrained healthcare settings; Chris Ciauri, Managing Director at Anthropic with previous leadership roles at Salesforce and Google Cloud; and Dr. Aditya Yad, a biotechnologist, legislator, and policy advisor facilitating Switzerland-India healthcare collaborations. This session represented the final day of a multi-day event, with the discussion focusing on near-term opportunities over the next 3-5 years.


The Global Healthcare AI Landscape: Divergent Challenges, Convergent Solutions

The conversation began with a striking revelation about the fundamentally different healthcare challenges facing developed and developing nations. Chris Ciauri highlighted that in the United States, clinicians spend only 30% of their time on actual patient care, with the remaining 70% consumed by administrative tasks and paperwork—representing a massive efficiency problem. This administrative burden contrasts sharply with the access challenges in countries like India, where healthcare providers face overwhelming patient volumes. Dr. Kapasi shared her experience of seeing 200 patients per day in her OPD, illustrating the scale of demand that healthcare systems must manage.


This dichotomy established a crucial framework for understanding how AI solutions must be tailored to address region-specific challenges rather than adopting a one-size-fits-all approach. In developed markets, AI’s primary value proposition lies in reducing administrative burden and freeing up clinician time for patient care. In emerging markets, the focus shifts to improving access, extending healthcare reach, and enabling more efficient care delivery within existing resource constraints.


India’s Emergence as a Global AI Healthcare Leader

Perhaps the most surprising revelation of the discussion was India’s position as a global leader in AI adoption. Chris Ciauri disclosed that India has the highest adoption rate of Anthropic’s Claude AI model outside the United States, ranking second globally in usage. More remarkably, usage and revenue from India had doubled in just four months, demonstrating not only high adoption but accelerating growth. This challenges conventional narratives about technology adoption patterns and positions India not as a recipient of Western innovation but as a leader driving global AI implementation.


This leadership position is underpinned by India’s remarkable digital infrastructure achievements. The country has built what Ciauri described as “a digital healthcare system that’s the envy of the world,” providing an excellent foundation for AI implementation. Combined with smartphone ownership rates approaching 90% in urban areas and 75% in rural regions, India possesses the digital infrastructure necessary for widespread AI healthcare deployment.


Recognising this potential, Anthropic has established operations in Bengaluru and specifically trained their Claude model on 12 Indic languages to address multilingual barriers in healthcare delivery. This investment reflects a strategic recognition that successful AI healthcare implementation in India could serve as a model for the entire Global South, potentially influencing how AI-driven healthcare evolves worldwide.


Safety-First Approach: Balancing Innovation with Responsibility

Throughout the discussion, safety emerged as a paramount concern, with Chris Ciauri emphasising that “AI can do a lot of good. It also can create a lot of harm if done carelessly.” This acknowledgement set a responsible tone that permeated the entire conversation, moving beyond typical technology optimism to address real risks and implementation challenges.


Anthropic’s approach to healthcare AI safety centres on several key principles. First, their models are designed to acknowledge uncertainty, freely using language like “I don’t know” and “I’m not certain” rather than providing overconfident but potentially incorrect responses. This approach proved decisive in their partnership with Banner Health in the United States, where the healthcare system specifically chose Claude because they wanted a model that would acknowledge uncertainty rather than display false confidence.


The safety framework also establishes clear boundaries between AI capabilities and human responsibility. As Ciauri articulated, “AI is for preparation. Clinicians are for judgment.” This delineation ensures that AI serves as a support tool for healthcare professionals rather than attempting to replace clinical decision-making. Additionally, Anthropic maintains strict data governance policies, with Chris emphasising that Claude will “never use someone’s patient data to train our models”—a crucial ethical boundary for healthcare applications.
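The principles above, acknowledging uncertainty and keeping judgment with the clinician, can be illustrated with the general shape of a Messages API request. This is a hedged sketch only: the model id is a placeholder, and the system prompt is an invented example of how a deployment might encode "AI is for preparation, clinicians are for judgment," not anything Anthropic or Banner Health actually uses.

```python
# Illustrative sketch (not a real deployment): building a request body in the
# Messages API shape, with a system prompt that asks the model to state
# uncertainty explicitly and leave final judgment to the clinician.
# The model id below is a placeholder, not a confirmed product name.

SYSTEM_PROMPT = (
    "You are a clinical preparation assistant. Summarise records for a "
    "clinician. If you are unsure about any finding, say 'I don't know' or "
    "'I'm not certain' rather than guessing. Never present a diagnosis; "
    "final judgment belongs to the clinician."
)

def build_request(record_text: str, model: str = "claude-sonnet-4-5") -> dict:
    """Assemble a summarisation request body (no network call is made here)."""
    return {
        "model": model,
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [
            {"role": "user",
             "content": f"Summarise this record for the clinician:\n\n{record_text}"}
        ],
    }

req = build_request("Patient presents with fever, rash, low platelet count.")
```

The design point is that the safety posture lives in the system prompt and workflow, not in the model alone: the request is scoped to preparation (summarisation) and explicitly instructs the model to surface uncertainty.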


Transforming Drug Discovery and Manufacturing

The discussion revealed how AI is revolutionising pharmaceutical development and manufacturing processes. Chris Ciauri shared compelling examples from partnerships with major pharmaceutical companies like Novo Nordisk and Sanofi, where AI has dramatically reduced drug development lifecycle times from eight weeks to eight hours for regulatory and administrative tasks.


Dr. Aditya Yad expanded on this theme from a manufacturing perspective, describing how AI is enabling more efficient biomanufacturing processes. Smaller, AI-controlled bioreactors can now produce high-quality pharmaceutical products with better yields than traditional large-scale manufacturing, challenging conventional assumptions about economies of scale. These AI-optimised systems continuously monitor and adjust production parameters, leading to improved quality control and reduced production costs.
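The continuous monitor-and-adjust cycle described for AI-controlled bioreactors can be sketched with a toy closed-loop controller. This is a minimal illustration under invented assumptions (a proportional controller on a single scalar parameter, with the measurement responding directly to the correction), not a model of any real biomanufacturing system.

```python
# Toy closed-loop control sketch: repeatedly measure a process value and
# apply a proportional correction toward a setpoint. Real AI-optimised
# bioreactors handle many coupled parameters; this shows only the loop shape.

def control_step(measured: float, setpoint: float, gain: float = 0.5) -> float:
    """Return a proportional correction driving `measured` toward `setpoint`."""
    return gain * (setpoint - measured)

def run_loop(start: float, setpoint: float, steps: int = 20) -> float:
    """Simulate repeated measure-adjust cycles on a toy process value."""
    value = start
    for _ in range(steps):
        value += control_step(value, setpoint)
    return value

# With gain 0.5, the error halves each cycle, so the value converges
# toward the setpoint (e.g. a culture temperature target of 37.0).
final = run_loop(start=20.0, setpoint=37.0)
```

The point of the sketch is the feedback structure itself: because each cycle measures before it adjusts, the system continually corrects drift, which is what enables smaller reactors to hold tighter quality tolerances than open-loop large-batch runs.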


This transformation is particularly relevant given Switzerland’s position as a global pharmaceutical hub. With approximately 1,700 healthcare and life sciences companies and research institutions operating in a country of just 9 million people, Switzerland has maintained its ranking as the world’s most innovative country for the past 15 years. Dr. Aditya also highlighted India’s national priority focus on biofoundry policy, demonstrating how both countries are positioning themselves at the forefront of AI-driven pharmaceutical innovation.


Economic Implications and Investment Opportunities

The discussion highlighted significant economic opportunities emerging from AI healthcare implementation. The recent Switzerland-India free trade agreement includes a commitment to invest $100 billion in India over the next 15 years, with healthcare identified as a key sector. This investment is expected to create one million direct jobs in India, demonstrating the scale of economic opportunity that AI healthcare represents.


Dr. Aditya Yad noted that in Switzerland, $2.5 billion was invested in startups in the previous year, with many companies successfully raising funds by demonstrating AI integration in their development strategies. This trend reflects how AI has become not just a technological tool but a fundamental component of business strategy and investor appeal in the healthcare sector.


However, the panellists also identified significant economic challenges, particularly around preventive care and screening programmes. Dr. Sabine Kapasi highlighted the fundamental problem that screening precedes felt need for healthcare—people are reluctant to pay for healthcare services when they don’t perceive an immediate problem. This creates a systemic bias towards treatment rather than prevention, despite the well-established principle that prevention is more cost-effective than cure.


Workforce Development and Human-AI Collaboration

The discussion extensively explored how healthcare workforces can be prepared for AI adoption while maintaining appropriate clinical oversight. Dr. Sabine Kapasi raised critical questions about training healthcare professionals to leverage AI tools while preventing over-reliance on AI recommendations for direct patient care.


The panellists agreed that successful AI implementation requires a balanced approach to workforce development. Healthcare professionals need education on AI capabilities and limitations, but this must be coupled with maintaining clinical scepticism and professional judgement. The goal is to enhance rather than replace human expertise, enabling healthcare workers to be more efficient and effective in their roles.


Dr. Aditya Yad described Switzerland’s government-funded programme targeting CEOs and leadership teams of healthcare companies through structured cohorts. This programme focuses on strategic AI implementation from the company’s inception rather than retrofitting AI into existing processes. However, he acknowledged the scale challenge of reaching 40,000 small and medium enterprises in Switzerland alone, highlighting the broader challenge of ensuring widespread AI adoption across diverse healthcare organisations.


Real-World Applications and Use Cases

The conversation was grounded in practical examples that illustrated AI’s potential impact. Dr. Sabine Kapasi shared a compelling clinical anecdote about treating a dengue patient in a remote region where diagnostic tests were more expensive than treatment drugs. This story highlighted how economic constraints often force healthcare providers to make treatment decisions based on clinical judgement rather than optimal diagnostic protocols—a situation where AI could potentially improve both diagnostic accuracy and cost-effectiveness.


Chris Ciauri provided concrete examples of AI applications already showing results. Banner Health’s use of Claude to summarise complex oncology reports demonstrated how AI can dramatically reduce information processing time, allowing clinicians to move from hours of information gathering to immediate clinical decision-making. This represents not just efficiency gains but fundamental workflow transformation that could improve patient outcomes.
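Condensing a report far longer than a single prompt window typically follows a chunk-then-merge pattern. The sketch below is hypothetical and offline: `summarise` is a stand-in for a model call and simply truncates each chunk, so only the pipeline shape is shown, not any actual Banner Health workflow.

```python
# Hypothetical chunk-then-merge summarisation pipeline. `summarise` is a
# placeholder for a model call (e.g. to an LLM); here it truncates so the
# example runs offline. Only the workflow structure is illustrated.

def chunk(text: str, size: int = 1000) -> list[str]:
    """Split text into roughly `size`-character pieces."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarise(piece: str) -> str:
    """Placeholder for a per-chunk model summarisation call."""
    return piece[:80]

def summarise_report(report: str) -> str:
    """Map-reduce style summary: summarise each chunk, then join the partials."""
    partials = [summarise(c) for c in chunk(report)]
    return "\n".join(partials)
```

In a real deployment the join step would itself be a second model pass that merges the partial summaries into one clinician-facing brief; the structure above is what turns hours of reading into a single condensed artifact.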


The discussion also explored how AI could enable healthcare workforce optimisation by allowing general practitioners and frontline workers to handle cases that would traditionally require specialist referral. This capability could significantly improve healthcare access in regions with limited specialist availability while reducing system costs and patient wait times.


Data Governance and Trust Building

A critical theme throughout the discussion was the importance of building public trust in AI healthcare systems. Dr. Aditya Yad emphasised that data trust and privacy concerns remain ongoing debates that must be resolved before widespread AI adoption can occur. People need confidence in how their medical data is collected, stored, used, and protected within AI systems.


This trust-building challenge extends beyond technical data security to encompass broader questions about AI governance and accountability. Healthcare systems must demonstrate not only that AI tools are technically safe and effective but also that they operate within ethical frameworks that respect patient autonomy and privacy rights.


The panellists recognised that different regions may have varying approaches to data governance, but establishing trust remains universally critical. Success in building this trust could determine whether AI healthcare solutions achieve widespread adoption or remain limited to early adopters and specific use cases.


Future Outlook and Strategic Implications

Looking towards the next five years, the panellists expressed optimism about AI’s potential to transform healthcare while acknowledging significant implementation challenges. Chris Ciauri noted that Anthropic releases new, more capable models every 2.5 months, with each iteration being exponentially more intelligent and powerful than the previous version, including the recent Claude 4.6. This rapid advancement suggests that current AI capabilities represent just the beginning of what may be possible.


The discussion revealed an emerging consensus that successful AI healthcare implementation requires coordinated efforts across multiple dimensions: technology development, policy frameworks, workforce training, and economic incentive alignment. The recognition of countries like India as innovation leaders rather than just markets represents a significant shift in global technology discourse, suggesting that solutions developed for emerging market challenges could ultimately benefit healthcare systems worldwide.


Conclusion

This comprehensive discussion demonstrated that AI’s transformation of healthcare is not a distant possibility but a current reality requiring immediate attention to implementation challenges. The conversation successfully moved beyond typical technology hype to address practical concerns about safety, workforce development, economic sustainability, and global equity in healthcare access.


The panellists’ diverse perspectives—combining clinical experience, technology expertise, and policy insight—created a nuanced understanding of both opportunities and challenges. Their collaborative approach and focus on practical implementation over the next 3-5 years suggests a mature and responsible approach to AI healthcare development.


Perhaps most significantly, the discussion reframed the global healthcare AI narrative from one of Western innovation being deployed to emerging markets to one of collaborative development where countries like India serve as leaders and innovators. This shift towards more inclusive and globally representative AI development could ultimately benefit healthcare systems worldwide, ensuring that AI solutions address the full spectrum of global healthcare challenges rather than just those prevalent in developed markets.


The path forward requires continued collaboration between technologists, clinicians, and policymakers, with safety and human-centred design remaining paramount. As the discussion concluded with an invitation to continue these conversations at next year’s AI Summit in Geneva, Switzerland, it became clear that the global healthcare AI community is committed to thoughtful implementation that maintains trust while ensuring equitable access to AI’s transformative potential for global health outcomes.


Session transcript: Complete transcript of the session
Dr. Sabine Kapasi

The last day of the event is a little slow today. You know, the energy of the last three days seems to have gotten to people a little, right? A big week. Yeah, I know. So today, unfortunately, we don't have a couple of people who were supposed to be here, namely Rizwan sir as well as R.S. Sharma sir. Both of them are stalwarts, both in setting context for Indian healthcare systems and in setting up global standards for digital public infrastructure in India. But let's make do without them for today. So today, we are talking about AI in healthcare, right? I'm Dr. Sabine Kapasi. We have Dr. Aditya as well as Chris here with us.

I'll give their intro in a bit. We recognize today that AI will transform healthcare. Given that India and many other low- and medium-income countries have very low levels of digital adoption, though, it's important to determine where AI solutions are likely to have the largest ROI, or rather the largest opportunity, in the next 3 to 5 years. In addition, we also need to ensure that doctors, hospitals, and other healthcare professionals are getting ready to leverage AI as well. So today we are going to focus on identifying near-term opportunities for India, and India as a leader in the LMIC space, that's the low- and medium-income countries space, and discuss strategies to strengthen the healthcare system for adoption of real use cases of AI.

I think that's going to be one of the challenges as well as one of the longest value gains that we are able to deliver as we go. So before we go ahead, I would love to introduce my co-panelists here. Chris is the managing director at Anthropic. He leads global expansion across EMEA, APAC, and Latin America. With over 25 years of experience scaling SaaS and cloud businesses, including senior leadership roles at Salesforce, Google Cloud, and most recently as the CEO of Unily, he brings deep expertise in enterprise AI adoption and national technology growth. He is known for building high-performance global teams and driving transformation through collaborative leadership. Thanks a lot, Chris, for being here.

Chris Ciauri

Thank you for having me.

Dr. Sabine Kapasi

Before we introduce Dr. Aditya, I would love to throw a question to you. So how does Anthropic, as a company now, view opportunities in healthcare AI, not just in the U.S. and Western Europe, but also in countries like India and the global south, in terms of how AI is being adopted, especially in the healthcare industry?

Chris Ciauri

Thank you for having me. I'd say we think healthcare is certainly one of the areas where we're going to be able to do a lot of things. AI can do a lot of good. It also can create a lot of harm if done carelessly. And Anthropic was founded with a mission around safety, and we focus a lot on that. So we like the tension between the capability of AI models and making sure that the safety is right, so that we can deliver on some of the opportunities. I think maybe I'll use two examples just to frame areas where we think big impact can happen. I'll use a U.S. example and an India example.

If you think about certainly one of the biggest challenges in the U.S., India has this too, Sangeeta mentioned some of this, but it's really around the burden of administration. So in the U.S., only 30% of a clinician's time, a doctor's time, is spent on patient care. The rest is on paperwork and administrative tasks. I think in India, one of the biggest challenges is just access. So, you know, there's data over the last decade that says that, you know, the average primary care visit only lasts two minutes. So if you think about where AI can impact those, and we believe it can have a huge impact, you know, if we can decrease the paperwork, decrease the administrative burden, we can have doctors in the U.S.

and other places spending much more time on patient care. Huge outcome. It can have phenomenal ROI. In India, you know, we think we can help. Solutions like ours can help make your healthcare system much more broadly accessible. And the other thing that's uniquely exciting about India is that you've built a digital healthcare system that's the envy of the world. And we look at that with excitement because we think that gives AI a really great place to land when you've got that kind of digital infrastructure. So maybe my last comment, for those that sort of don't know Anthropic and don't know what we're up to: we're so excited about opportunities like this that we announced we launched our operations here recently.

Recently, we’ve opened an office in Bengaluru because we think to address a problem like this, we want to be here on the ground building with you. And we also think, you know, people have talked about the scale of India, leader of the global south. If we can make this work in India, we think we have the possibility to shape how AI -driven healthcare evolves in the rest of the world.

Dr. Sabine Kapasi

No, I think you’re right. Right, and you have worked with every tech company under the sun. which is amazing. Someday you’ll have to tell me what that looks like because, God, I’m a little further away from tech. I’m a clinician by…

Chris Ciauri

I’m as far away from being a clinician as you are from being a technologist.

Dr. Sabine Kapasi

I think that shouldn't be so, right? And coming back, that is where I think it would be great to introduce Dr. Aditya Yad. So Dr. Aditya is the India Relations Advisor at Innovaud, the innovation and investment promotion agency of the Canton of Vaud, Switzerland. Based in Lausanne, he plays a strategic role in strengthening Switzerland-India collaboration by facilitating cross-border partnerships, supporting high-growth startups, and enabling market entry for Indian companies into the Swiss and European innovation ecosystem. He himself is a biotechnologist and has worked at the cross-section of tech, investment, and innovation. He has done a lot of investments, in biotech as well, I believe. With a focus on technology and research-driven enterprises and global expansion pathways, Aditya acts as a key bridge between Indian entrepreneurs, investors, academic institutions, and the vibrant innovation landscape of Switzerland. Though is it as vibrant?

Honestly, I doubt it. India is far more vibrant, to say the least. We can have a debate on that, yeah? Yeah, no, maybe we'll discuss that in a bit. But, you know, as we mentioned, you are as far from being a clinician as I am probably from being a technologist, but we need a middle bridge. And when we are talking about healthcare and AI systems, we need a middle bridge. So you have looked at ecosystems on both fronts, right, and innovation happening on both fronts in India and Switzerland: Switzerland having a deep legacy of research in biotech, and now adoption of new technologies on top of that legacy, versus India, which has leapfrogged into an area of…

of fast growth and fast technology adoption. How do you see these two systems playing out and interacting with each other for a larger good in outcomes, especially when we are looking at health care?

Dr. Aditya Yad

Thank you for the question and for the invitation. So, you know, as you said, Switzerland and India, when you look at the size of the country, the size of the population, of course there will be very different challenges for both countries. On the Swiss side, to continue the debate, you know, Switzerland has been ranked number one in the Global Innovation Index for the past 15 years straight. And a large part of that is thanks to the health care industry: the biotech, the pharma, the life sciences industry. Today we have around 1,700 companies or research institutions based in Switzerland; for a small country like this, that is really what gives us this vibrant ecosystem of innovation that we have. The second point also is that, you know, Switzerland is not a big domestic market, right?

We are 9 million people. So all the products…

Dr. Sabine Kapasi

That’s not even Delhi. Like, that’s not even Delhi.

Dr. Aditya Yad

That's why I usually like to have this scale, you know. Just as a parenthesis: India and Switzerland have signed this free trade agreement now, right? So we are concretely in business between Switzerland and India. As part of that free trade agreement that was signed by both governments, Switzerland and the EFTA countries now have a commitment to invest $100 billion into India in various sectors. That includes healthcare, by the way. And also to create 1 million jobs, direct jobs, in India in the next 15 years. So now there's a concrete engagement between both countries. So when it comes to healthcare, you know, Switzerland is known for two things: a very efficient and highly qualitative healthcare system, but a very expensive healthcare system.

So you get the price and the quality that goes with it. So this is where AI is actually going to play a very big role. If you talk about cost reduction, optimization of all the processes from research to putting medication on the market, using technology, using AI, will tremendously, we believe, bring down the cost of health care. Because I'm also a policymaker in Switzerland, a legislator, I see that in the public debate there's a lot of heat, a lot of pressure from the public, that health care premiums are too high for what people are getting. So this is exactly the case where we can say: okay, now we have these tools that can accelerate drug development, not spending a few billions developing a new drug, but using AI tools to speed up the process, to increase the probability of finding the right targets, and to accelerate clinical validation and market access.

So this is where I think there is a very big potential. And from the industry's perspective, we also see that a lot of companies are now either shifting from traditional pathways into AI initiatives, or, for the smaller companies, the startup companies with which we work, they embed AI strategy within the development of their company overall. So it's become completely normal that AI has to be included from the very start. And this is also what startup companies with new innovative products are using in their own pitches in order to convince investors and secure these investments. I just published a report on last year: we had $2.5 billion of investment going into Swiss startups just for last year.

Many of them are using and have been able to raise funds because they’ve been integrating AI tools in their development in that sense.

Dr. Sabine Kapasi

Thank you so much, Aditya. Chris, back to you. So first of all, I'm really glad that we have people who have been building for healthcare systems but are not native to healthcare. Because… sometimes when we think about healthcare, we only think about doctors, right? Or we think about hospitals. But thankfully, we know that healthcare is so much more. It's not just about doctors or hospitals. And you, of course, as I said, have worked across several different domains, so please feel free to share your experience there, and have worked with several technology companies that were not native to healthcare but now see healthcare as a huge opportunity as well.

So which are the specific use cases that companies like Anthropic are targeting for the healthcare problems they are looking to solve? And how do you test for the risks, especially when you're building LLMs, which are quite generalized? Because healthcare has very immediate risk outcomes, and that's something that needs to be tested for or at least covered for. So how do you guys look at it?

Chris Ciauri

Maybe I'll do the risk first, and then I'll talk about a few use cases. And by the way, thank you for the comments that basically date me, because, you know, with all the companies, I've been around a long time. But I think I'm privileged to be part of Anthropic and what's going on right now in AI, because I think by far this has the opportunity to transform health care more than any other technology transformation that we've seen over the last three decades or so. But coming back to the risk point, I think I made the point up front that AI can do a lot of good in health care, and it can do a lot of harm if you're not very careful about the way you use it.

I think because we've been so focused on safety, Claude uses language like "I don't know" and "I'm not certain" quite freely. And we think that's critical in an industry where the stakes are so high. And I'll give you one example. One of our customers is Banner Health in the United States. They've used us to summarize 100-page oncology reports, where previously a clinician comes in and they're getting information spread across multiple appointments and specialists, and it took them eight hours just to get to the point where they could start to provide an opinion, care, judgment. That is now summarized concisely. So all of that time, all that administrative time or information retrieval time, is now quickly moving into judgment and delivering care for patients.

So that's both a use case, but it's also, I think, a demonstration of why they chose us. Ultimately, the final decision came down to this: they wanted a model that was so based in safety that it said "I don't know" or "I'm not certain." What they didn't want was a model that was confident, or felt confident, but was wrong. So I think safety is paramount for us. We think it's table stakes in health care in everything we do. Maybe use cases: I briefly hit on this before, but I think certainly administrative burden is one, and we think that's pervasive everywhere. That speaks to your take-costs-out-of-the-system issue.

The 70% of time that I talked about doctors in the U.S. spending on admin, that's a $1 trillion problem, or opportunity. So the magnitude, if we could start to address things like that with AI globally, is huge. A second one: drug discovery. In some of our work with customers like Novo Nordisk and Sanofi, we've been able to reduce drug development life cycle times, the heavy paperwork, the heavy regulation, from eight weeks to eight hours. Just a phenomenal difference in terms of how quickly we could get amazing drugs and medicines to market if that becomes pervasive. The last one, I'll be really India-specific. You know, I mentioned access here is a challenge, and certainly if the country can get to the point where the health care system serves all Indians, that would be game-changing for India and I think for the global south.

One of the big barriers is multilingual support. So you can't use a model that's good in English but not good in other languages. As part of our entry into India over the last six months, we've trained Claude on 12 Indic languages. That's not to say it's done and over; there are more languages, there are dialects. But I think those are the types of things where AI can improve access to health care.

Dr. Sabine Kapasi

No, I think just about 15 days ago when you guys launched your Pro model, Pro Max, I think that was the one, right?

Chris Ciauri

It was a couple of weeks ago; it was our version Claude 4.6.

Dr. Sabine Kapasi

Yes, and I had a friend call me up, having seen the stock market move very heavily when you guys launched your recent version, and they're like, you know, there are jobs that are in serious danger. And I remember… I remember telling my team to go back and start playing around with… with the new version, because the 11 attachments that have come through have been fascinatingly interesting in the way they are going to be adopted in new workflows. But one of the things that AI is adding a lot of value in is the B2B space. So before we go back to that, we'll talk a little bit more about how AI is shifting biotech as an industry, because healthcare is not just about patient delivery but also about how we get there, as you touched upon in drug discovery.

And I think AlphaFold was a phenomenal change in the way new protein discoveries are being done today. And I mean, I could not believe that in my lifetime I would see such a jump in technology, and there is so much more to come. It could not have been possible without large-scale models. But that being said, when we look at countries like India: when I was practicing as a surgeon, my OPD used to consist of 200 people a day. So the amount of time we spend per patient in the lower half of the world, let's put it that way, is very different. The workflows are very different. And even though the clinical logic might remain the same, the clinical skills deployed are extremely different in terms of action items.

So, I'll circle back to you on that, but we would love to discuss how you are adapting those kinds of use cases to deliver value, not just in countries which have optimized for outcome, especially optimized time to outcome, but using the same principles for a global-scale outcome difference as well. But before we do that, I would love to have your thoughts about how, from your perspective, you know, drug discovery, clinical trials, regulatory sciences, as well as manufacturing, are being affected by the new innovations in AI, and how you see that happening in the next 10 years. Or five years, I think. AI is moving fast enough. We can't predict 10 years at this point.

Dr. Aditya Yad

Hello, hello. One thing: so we touched upon drug discovery and the impact of AI in the healthcare system in general, so with hospitals, with clinics, and so on. Manufacturing is actually very interesting, and it's also very relevant to India because there's a new policy in India to have this biofoundry. So biomanufacturing in general in India has become a national priority. And the inclusion of AI tools into this is very, very relevant. We see some companies already shifting into that in Switzerland too, also smaller companies. If you really talk about the manufacturing of different products in a controlled environment, where parameters are being monitored over time and the system is self-learning to optimize those parameters in order to increase yields, for instance, or increase

The ROIs on these production systems, this is becoming a very interesting trend. We have a bunch of companies. We have the large companies, Novartis, Roche, Lonza, producing massively. But you now have these bioreactors, as they're called, that are smaller in size but more qualitative, because they can use the AI tools, and so you are able to produce very highly qualitative products that could be very expensive. But because AI is being used in a small controlled setup with a better yield, the prices and the production costs are being limited. So this is typically one example where, from the industry's own perspective, there is a big interest and a big potential over the next few years. Because at some point these drugs will have to be manufactured, they will have to be distributed, they will have to reach the patients at the end of the day; this is the main goal, right? So how do we streamline all the processes from R&D to drug delivery at the patient level? AI being infused into all of these different steps, that will be the challenge. But on manufacturing, I think there is a lot of potential in the next five years.

Dr. Sabine Kapasi

Thank you so much. I think one of the things that also has massive potential is screening, and also health from an insurance and finance perspective. One of the things that is shifting a lot of use cases and creating a lot of different frameworks is the way diagnostic technology now evolves so fast that we can take it to people's homes directly, and make sure that the signals we are capturing, or the biomarkers we are discovering at pace, reduce the costs of testing drastically, so that screening as a solution becomes available across the world and is not just confined to areas where large capex in diagnostic capacity building is being deployed. So I think that is one thing I would love to have some thoughts on.

I'm so sorry, I was off there for a bit. Okay. As we spoke about a bit ago, you know, you said countries like India and other low-income countries, or rather other global south countries. I mean, I would never say low-income anymore. Look at us right now. But other global south countries are shifting now in terms of adoption. And at least countries like India have massive digital adoption coming through as well. In urban areas, there's close to 90% smartphone ownership. In rural areas also, it's touching 75%, which is just fabulous and mind-boggling altogether. But in such ecosystems, how do you see the potential of such markets actually developing solutions within healthcare that shift perspectives in the low-wage markets, or rather in the developed markets as well?

Chris Ciauri

You know, I'll share some statistics that you might find surprising based on what you just said. India has the highest adoption of Claude outside the U.S.; it's second in the world for Claude adoption. So, yes, it's the Global South, but your country is consuming AI. In probably all of my travels, I've not seen or felt a country that's more optimistic about the potential of this technology. It's also, by the way, second in usage but the fastest growing: in the last four months, the usage of Claude and the revenue for Anthropic have doubled in India. So I wanted to make that point. But if we think about what that means, I think it means, given the context of the amazing digital health record system that you've built, which is not just unprecedented in the Global South.

It's one of the few globally. And it really does give you and us, I think, the ability to do something quite special across the largest democracy in the world, a massive population that also has the additional challenge of being multilingual. And it's why I said at the beginning: if we can make this work in a place like India, not only does that give the Global South a model, I think it gives the whole world a model of how we could really see healthcare transform.

Dr. Sabine Kapasi

Exactly. I think, for example, I'll take a case from my own stories. We had a dengue patient; we knew clinically it was a dengue case. We were working in a very, very remote region, so we had no access to diagnostic tests at that time, and we just started treating them. The reason is that the drugs were cheaper than the diagnostic tests, and the patient couldn't afford any. So for the system at that time it was a trial-and-error problem, but the clinical values were stark enough for us to know that this was not actually a risky call. This was almost certainly a dengue case, even though we didn't have the data to back it up.

So I think taking that clinical validation and that clinical intelligence, combining them with the new discoveries and biomarkers that new technologies are bringing in every day, and making diagnosis and drugs affordable across the world: that, I think, is going to be the next big leap in the healthcare ecosystem, and it is going to make a fundamental shift within that ecosystem, at least in my understanding. And you guys are at the forefront.

Chris Ciauri

I completely agree. If you think about that situation you described, and the one before when you said you used to see 200 patients a day: I've been very lucky. I think those of us who work in technology companies, who get to help businesses in different sectors do things better, are lucky, because normally I'd get to spend 30 minutes with you and say: tell me what's going on. If you could fix one thing, what would mean that you'd be so efficient in the care you gave that you could extend the time with patients, and that you could provide a way for a hundred of them to self-serve?

I mean, I think those are the kinds of conversations that we get to have as this technology hits a sector like healthcare.

Dr. Sabine Kapasi

No, of course. We'll take that offstage later, but to your note, you have been a biotechnologist and a researcher yourself. We all know that in healthcare prevention is better than cure; that's been said, but as a system we have never really adopted it at scale. So how do you see diagnostic technology evolving through the use of AI in the next few years? And how do you see that playing out for countries?

Dr. Aditya Yad

…diagnosed and prevented before the treatment. So the cost of having a diagnostic tool that costs two or three or four times more than the current one was not enough to convince insurance companies to pay for it in order to avoid a $20,000 or $50,000 therapy for cancer, for instance. So the narrative and the whole system have been built around that, around treatment. If we can use AI and prove that it can have a significant impact both on the quality of diagnostics and on the cost of developing these diagnostics and bringing them to market, then I think we can really make a change in how we see healthcare as a system as well.

Dr. Sabine Kapasi

No, I think that's true. So we have been talking to medical device companies who are now targeting new-age diagnostic tools, and even legacy companies like GE and Philips. One of the larger problems they face is that adoption of diagnostics is an issue, especially in screening, because if you feel there is a problem with your body, you will certainly want to figure out a way to solve it, be it through self-pay models, which are prevalent in this part of the world, or through insurance models. But screening precedes the need for healthcare. It precedes when you feel you have a problem with your body.

So how do you make people pay when they don't see the need for it? So, throwing that back to you, two questions. One point is that in healthcare as an industry, we understand that the risks of using any tool, be it a pharma tool or an AI tool, are quite immediate and sometimes life-threatening. So we are always quite skeptical in how we position the tool. How do you train the healthcare workforce to adopt it while also keeping a layer of difference within the tool, as well as among the people adopting it? Because you don't want people acting on health advice from an LLM today.

No, of course. That's one. So how do you create education for the healthcare workforce that is strong enough, while also ensuring that people are not directly acting on that advice? And secondly, how do you educate the ecosystem better so that processes like screening, which precede the need, especially the felt need, in healthcare, actually get taken up? How do you execute those kinds of strategies? These are two questions I would love for you to give me a thought on.

Chris Ciauri

Maybe on the second one, I might see if you have an opinion, because I'm an AI person and not on the clinician side. On the first one, I think you have to be clear, and we certainly have clarity on this in dealing with healthcare systems and customers in the health and pharma sectors: AI is for preparation; clinicians are for judgment. So we have no intention of Claude being a doctor or a nurse. Thank God. Doctors are really scared. I think you have to be clear about that. And it goes back to what I said before, because if these models are really going to serve healthcare, Claude's going to know when to say: I don't know.

I'm not certain of that. This is where you need to go, and we're going to have this conversation together with the clinician. And we think that's table stakes. The same thing with patient data: Claude will never use someone's patient data to train our models. And I think that's the key. If you don't make those your non-negotiables, then AI will not get to have the impact that it should have.

Dr. Sabine Kapasi

No, that’s true. And if you can answer that.

Dr. Aditya Yad

Maybe just to weigh in on that, because what you just said is true: the importance of AI being with the companies, with the innovation segment, and not necessarily only at the end. Because, as you said, there can also be pros and cons to that usage. So, for instance, I'll give you one practical example from our state government. We launched a program just one year ago because we noticed that smaller companies in particular, small and mid-sized companies developing these new technologies, struggle with how to embrace AI. There is still a little bit of caution about how to use it, how not to make a mistake at the beginning of the process and then go in a direction that was not anticipated.

So we launched a program a year ago where we have cohorts of companies funded by the state government, where we talk directly with the CEOs of these companies, and we have a leadership program to train CEOs to think about how they can implement AI from the very start of the process. The challenge we have is that in our state, for instance, we have 40,000 SMEs. How do we convince them to embrace AI, and how do we sell them the benefit of using AI, because there is also resistance to change at that level. As long as we don't know who is really going to take the lead on using AI tools, everybody will be using more or less of it, and there will not be a homogeneous application of AI.

So here we said, okay, the industry needs to take charge of doing that, and then we're going to train people, as you said, on the use of AI, not just as a technological tool but as a strategic roadmap for the company going forward.

Dr. Sabine Kapasi

So, see, we are in 2026 today, four years away from 2030, and a lot of countries, India included, have made plans for AI adoption by 2030. There are a couple of areas like workforce enablement for AI adoption, especially when we are talking about healthcare. We are looking at solving B2B cases, workflow management, time savings for the skilled workforce, and also creating bots or other agentic systems which can, in some proportion, aid people who are not as credentialed, let's put it that way: enabling a frontline workforce, or enabling a GP to solve some cases which might otherwise be referred to a higher center. So: reduce and distribute the burden in healthcare systems.

I think that is one use case, and that is something I would love for you to throw some light on, Chris. And I think also the non-sexy use cases, like the drug discovery cases, which are phenomenal not just for business but also for changing the world as we know it, and diagnostics. So if you can throw light on these three verticals and how you see them panning out in the next five years, that would be really, really helpful.

Chris Ciauri

I mean, I might frame it with this, and you talked a little bit about it. You've kind of seen what Claude, sort of the latest version of Claude, can do. Anthropic is a five-year-old company. As a frontier lab, we've had a commercial model on the market for coming up on three years, and each model, and we're now at a rate of a model release every two and a half months, is exponentially more intelligent, more powerful than the last one, and safe. I think that's really important, and we don't see that stopping. So what makes me extremely optimistic about the ability to really transform healthcare on the many dimensions you talked about is that this technology will keep getting better and more enabling.

As long as we do it incredibly safely, the benefits are probably hard for us to imagine given how fast it's moving, but I think it's a tremendous opportunity.

Dr. Sabine Kapasi

Just before we close this, one more thing. LLMs versus small language models, right? Targeted use cases. How do you see the industry evolving in the next five years in health care?

Chris Ciauri

Yeah, I think what you will likely see is that smaller, targeted models will have a place, maybe out on the edge in specific things, and open source will have a place in that. I think as a frontier lab, we have one model. It's Claude. It comes in a few versions so that it can be scaled up and down for different use cases. Our position in the market is: let's make Claude the most capable and safe model that we possibly can. Let's keep that exponential innovation going, because our place in the market is going to be to drive the greatest amount of innovation and transformation. And there will definitely be a place for more edge use cases with smaller models.

I just think it'll play out differently.

Dr. Sabine Kapasi

And do you see countries like India playing an interesting role in that? Sorry? How do you see countries like India playing an interesting role in that?

Chris Ciauri

Yeah, I mean, I think we'll see many countries probably playing more on the small-language-model, edge use case side. Today, the frontier is in a couple of countries, but I think there will be opportunities that we can't see yet.

Dr. Sabine Kapasi

Thank you so much. And Aditya, as a policymaker, can you throw a very quick light on where you see the next five years panning out, and what are the things we will need to be careful about that we don't see today?

Dr. Aditya Yad

I think in general, the trust around data, personal data, medical data, is still a debate. So this is ongoing. There's a lot of awareness building to be done. We have to gain people's trust in the systems: what is happening with the data, where the data is flowing, and how they see the ultimate benefit from using this data. From that point onwards, we can really build different systems and think about new things, but that is still something we have to work on.

Dr. Sabine Kapasi

Thank you so much, Aditya. Thanks, Chris, for joining us. Essentially, creating equity in AI that is useful for all, especially in something like healthcare, is what we all strive for, and we hope it is going to change the world in the next five years. Thank you so much for joining us for this chat. Namaste. And see you all in Geneva next year, because the AI Summit will be in Switzerland next year; you are all welcome there as well. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Chris Ciauri
9 arguments · 148 words per minute · 2076 words · 837 seconds
Argument 1
Healthcare AI can reduce administrative burden and improve patient care time – Administrative Burden Reduction
EXPLANATION
Chris argues that AI can significantly reduce the administrative workload that currently consumes 70% of clinicians’ time in the U.S., allowing doctors to spend more time on actual patient care. This represents a massive opportunity to improve healthcare efficiency and outcomes.
EVIDENCE
In the U.S., only 30% of a clinician’s time is spent on patient care, with the rest on paperwork and administrative tasks. Banner Health used Claude to summarize 100-page oncology reports, reducing the time from 8 hours to quickly accessible summaries.
MAJOR DISCUSSION POINT
AI can transform healthcare by reducing administrative burden
AGREED WITH
Dr. Aditya Yad
Argument 2
AI can improve healthcare access in countries with limited consultation time – Healthcare Access Improvement
EXPLANATION
Chris highlights that AI can address the access challenge in countries like India where the average primary care visit lasts only two minutes. AI solutions can help make healthcare systems more broadly accessible to larger populations.
EVIDENCE
Data shows that the average primary care visit in India only lasts two minutes, indicating severe access constraints.
MAJOR DISCUSSION POINT
AI can improve healthcare access in resource-constrained environments
Argument 3
AI safety is paramount in healthcare applications, models must acknowledge uncertainty – AI Safety Priority
EXPLANATION
Chris emphasizes that AI can do tremendous good in healthcare but can also cause significant harm if not implemented carefully. Anthropic’s focus on safety means their models freely use language like ‘I don’t know’ and ‘I’m not certain’ rather than providing confident but potentially wrong answers.
EVIDENCE
Banner Health chose Claude specifically because they wanted a model that would say ‘I don’t know’ or ‘I’m not certain’ rather than one that was confident but potentially wrong in high-stakes healthcare situations.
MAJOR DISCUSSION POINT
Safety considerations are critical for AI in healthcare
AGREED WITH
Dr. Sabine Kapasi
Argument 4
India’s digital healthcare infrastructure provides excellent foundation for AI implementation – Digital Infrastructure Advantage
EXPLANATION
Chris praises India’s digital healthcare system as being among the best globally, not just in the Global South. This infrastructure provides an excellent foundation for AI implementation and gives India the potential to serve as a model for AI-driven healthcare transformation worldwide.
EVIDENCE
India has built a digital healthcare system that’s ‘the envy of the world’ and is ‘one of the few globally’ with such comprehensive digital health record systems.
MAJOR DISCUSSION POINT
Digital infrastructure enables AI adoption in healthcare
AGREED WITH
Dr. Sabine Kapasi
Argument 5
AI can reduce drug development timelines from eight weeks to eight hours – Drug Development Acceleration
EXPLANATION
Chris describes how AI can dramatically accelerate drug discovery and development processes. Working with pharmaceutical companies like Novo Nordisk and Sanofi, AI has reduced certain drug development lifecycle tasks from eight weeks to eight hours.
EVIDENCE
Collaboration with customers like Novo Nordisk and Sanofi has demonstrated the ability to reduce drug development lifecycle times from eight weeks to eight hours.
MAJOR DISCUSSION POINT
AI can revolutionize pharmaceutical development timelines
Argument 6
Indian AI Adoption Leadership
EXPLANATION
Chris reveals that India has the highest adoption of Claude outside the U.S., making it second globally for Claude adoption. The country shows remarkable optimism about AI technology potential, with usage and revenue doubling in just four months.
EVIDENCE
India is second in the world for Claude adoption, with the fastest growth rate. In the last four months, usage and revenue for Anthropic have doubled in India.
MAJOR DISCUSSION POINT
India leads in AI adoption and growth
Argument 7
Multilingual AI capabilities are essential for healthcare access in diverse populations – Language Accessibility
EXPLANATION
Chris explains that multilingual barriers prevent effective healthcare AI deployment in diverse countries like India. To address this, Anthropic has trained Claude on 12 Indic languages as part of their entry into the Indian market.
EVIDENCE
As part of Anthropic’s entry into India over the last six months, they’ve trained Claude on 12 Indic languages, though more languages and dialects still need to be addressed.
MAJOR DISCUSSION POINT
Language barriers must be addressed for inclusive AI healthcare
Argument 8
Human-AI Collaboration
EXPLANATION
Chris establishes clear boundaries for AI in healthcare, stating that AI should be used for preparation while clinicians maintain authority over judgment and decision-making. He emphasizes that Claude will never replace doctors or nurses and will acknowledge when it doesn’t know something.
EVIDENCE
Claude is designed to say ‘I don’t know’ or ‘I’m not certain’ and will never use patient data to train models. The principle is ‘AI is for preparation, clinicians are for judgment.’
MAJOR DISCUSSION POINT
Clear roles needed between AI and healthcare professionals
DISAGREED WITH
Dr. Aditya Yad
Argument 9
AI Model Evolution
EXPLANATION
Chris describes the rapid evolution of AI models, with Anthropic releasing new versions every two and a half months, each exponentially more intelligent and powerful than the last. He predicts that frontier labs will continue advancing large models while smaller, targeted models will serve edge cases.
EVIDENCE
Anthropic releases a new model every two and a half months, with each being exponentially more intelligent than the previous version. The company has been commercial for nearly three years.
MAJOR DISCUSSION POINT
AI technology continues rapid advancement
Dr. Aditya Yad
8 arguments · 174 words per minute · 1469 words · 503 seconds
Argument 1
AI can optimize healthcare costs while maintaining quality in expensive systems – Cost Optimization
EXPLANATION
Dr. Aditya explains that Switzerland has a highly efficient and qualitative healthcare system but it’s very expensive, creating public pressure over high premiums. AI can play a significant role in cost reduction and process optimization from research to market, potentially reducing the billions spent on drug development.
EVIDENCE
Switzerland is known for efficient, high-quality but expensive healthcare. There’s public pressure about high healthcare premiums. AI can accelerate drug development and increase probability of finding right targets, reducing the billions typically spent on new drug development.
MAJOR DISCUSSION POINT
AI can reduce healthcare costs while maintaining quality
AGREED WITH
Chris Ciauri
Argument 2
Swiss Innovation Leadership
EXPLANATION
Dr. Aditya highlights Switzerland’s consistent ranking as number one in the Global Innovation Index for 15 consecutive years, largely due to its healthcare, biotech, and pharmaceutical industries. Despite being a small country with 9 million people, Switzerland hosts around 1,700 companies and research institutions in the life sciences sector.
EVIDENCE
Switzerland has been ranked number one in the Global Innovation Index for the past 15 years straight, with around 1,700 companies or research institutions in healthcare/biotech/pharma for a population of 9 million people.
MAJOR DISCUSSION POINT
Switzerland’s innovation ecosystem in healthcare and biotech
Argument 3
Bilateral Healthcare Investment
EXPLANATION
Dr. Aditya describes the recently signed Switzerland-India free trade agreement that includes a commitment from Switzerland and EFTA countries to invest $100 billion in India across various sectors, including healthcare, and create 1 million direct jobs over the next 15 years.
EVIDENCE
The free trade agreement signed between Switzerland/EFTA and India includes a commitment to invest $100 billion into India in various sectors including healthcare and create 1 million direct jobs in the next 15 years.
MAJOR DISCUSSION POINT
International partnerships can drive healthcare innovation investment
Argument 4
Manufacturing Optimization
EXPLANATION
Dr. Aditya discusses how AI is being integrated into biomanufacturing, particularly relevant to India’s new biofoundry policy. AI can monitor and optimize production parameters in controlled environments to increase yields and ROI, making smaller bioreactors more efficient and cost-effective.
EVIDENCE
India has a new biofoundry policy making biomanufacturing a national priority. Companies like Novartis, Roche, and Lonza are using AI-controlled bioreactors that can produce high-quality products with better yields at lower costs.
MAJOR DISCUSSION POINT
AI optimization in pharmaceutical manufacturing
Argument 5
Production Efficiency
EXPLANATION
Dr. Aditya explains how smaller AI-controlled bioreactors can be more qualitative than large-scale production because they use AI tools for better control and optimization. This results in highly qualitative products that, despite being expensive to produce traditionally, become more affordable due to AI-driven efficiency gains.
EVIDENCE
Smaller bioreactors using AI tools can produce very highly qualitative products with better yields, making expensive drugs more affordable through improved production costs and efficiency.
MAJOR DISCUSSION POINT
AI enables efficient small-scale high-quality production
Argument 6
Diagnostic Economics
EXPLANATION
Dr. Aditya argues that the healthcare system has been built around treatment rather than prevention because the cost-benefit analysis hasn’t favored expensive diagnostics over treatments. However, if AI can significantly impact both the quality and cost of developing diagnostics, it could shift the system toward prevention.
EVIDENCE
Insurance companies historically haven’t been convinced to pay 2-4 times more for diagnostics to avoid $20,000-$50,000 cancer therapies, leading to a treatment-focused rather than prevention-focused system.
MAJOR DISCUSSION POINT
Economic incentives need to shift toward prevention-focused healthcare
AGREED WITH
Dr. Sabine Kapasi
Argument 7
Strategic AI Implementation
EXPLANATION
Dr. Aditya describes a government program launched to train CEOs of small and mid-sized companies on how to strategically implement AI from the start of their processes. The challenge is convincing 40,000 SMEs to embrace AI and ensuring homogeneous application across the industry.
EVIDENCE
Switzerland launched a program with cohorts of state-funded companies, providing leadership training to CEOs on AI implementation. The state has 40,000 SMEs that need to be convinced to embrace AI tools.
MAJOR DISCUSSION POINT
Leadership training needed for strategic AI adoption
DISAGREED WITH
Chris Ciauri
Argument 8
Data Privacy Trust
EXPLANATION
Dr. Aditya identifies data trust and privacy, particularly around medical data, as an ongoing debate and challenge. Building public trust about data usage, flow, and ultimate benefits is essential before developing new AI systems in healthcare.
EVIDENCE
There’s ongoing debate about trust around personal and medical data, requiring awareness building to gain public trust about data systems, data flow, and how people see ultimate benefits from using their data.
MAJOR DISCUSSION POINT
Public trust in data governance is essential for AI healthcare adoption
Dr. Sabine Kapasi
3 arguments · 152 words per minute · 2766 words · 1089 seconds
Argument 1
Reverse Innovation Potential
EXPLANATION
Dr. Sabine suggests that countries like India and other Global South nations, with their massive digital adoption and unique healthcare challenges, have the potential to develop innovative solutions that could influence and benefit developed markets as well.
EVIDENCE
India has close to 90% smartphone ownership in urban areas and 75% in rural areas, demonstrating massive digital adoption that could enable innovative healthcare solutions.
MAJOR DISCUSSION POINT
Global South countries can lead innovation that benefits developed markets
AGREED WITH
Chris Ciauri
Argument 2
Workforce Training Balance
EXPLANATION
Dr. Sabine raises the critical question of how to train healthcare workforce to adopt AI tools while maintaining appropriate skepticism and ensuring people don’t act directly on health advice from LLMs. She emphasizes the need for education that enables adoption while preserving safety boundaries.
EVIDENCE
Healthcare professionals need to understand AI tools while maintaining skepticism, and systems must prevent people from directly acting on health advice from AI without professional oversight.
MAJOR DISCUSSION POINT
Healthcare workforce needs balanced AI training
AGREED WITH
Chris Ciauri
Argument 3
Preventive Care Economics
EXPLANATION
Dr. Sabine highlights the challenge of implementing screening and preventive care because it requires people to pay for healthcare before they feel they need it. This creates a fundamental economic challenge in shifting from treatment-focused to prevention-focused healthcare systems.
EVIDENCE
Screening precedes the felt need for healthcare, making it difficult to convince people to pay when they don’t perceive a problem with their health, unlike treatment where people are motivated by existing symptoms.
MAJOR DISCUSSION POINT
Economic models must address pre-symptomatic healthcare payment
AGREED WITH
Dr. Aditya Yad
Agreements
Agreement Points
AI can significantly reduce healthcare costs while improving efficiency
Speakers: Chris Ciauri, Dr. Aditya Yad
Healthcare AI can reduce administrative burden and improve patient care time – Administrative Burden Reduction
AI can optimize healthcare costs while maintaining quality in expensive systems – Cost Optimization
Both speakers agree that AI has tremendous potential to reduce costs in healthcare systems – Chris focuses on reducing the $1 trillion administrative burden in the US, while Aditya emphasizes cost reduction in expensive systems like Switzerland’s healthcare
AI safety and appropriate human-AI boundaries are critical in healthcare
Speakers: Chris Ciauri, Dr. Sabine Kapasi
AI safety is paramount in healthcare applications, models must acknowledge uncertainty – AI Safety Priority
Workforce Training Balance
Both speakers emphasize the critical importance of maintaining safety boundaries in healthcare AI, with Chris advocating for models that acknowledge uncertainty and Sabine stressing the need for balanced workforce training that prevents direct reliance on AI medical advice
Digital infrastructure enables transformative AI adoption in healthcare
Speakers: Chris Ciauri, Dr. Sabine Kapasi
India's digital healthcare infrastructure provides excellent foundation for AI implementation – Digital Infrastructure Advantage
Reverse Innovation Potential
Both speakers recognize that strong digital infrastructure, particularly India’s digital healthcare system and high smartphone adoption rates, creates an excellent foundation for AI implementation and innovation that could benefit global healthcare
Prevention-focused healthcare faces economic and systemic challenges
Speakers: Dr. Aditya Yad, Dr. Sabine Kapasi
Diagnostic Economics
Preventive Care Economics
Both speakers identify the fundamental economic challenge in shifting healthcare systems from treatment-focused to prevention-focused approaches, noting that current incentive structures don’t favor expensive diagnostics over treatments, and people are reluctant to pay for healthcare before feeling they need it
Similar Viewpoints
Both speakers see AI as transformative for pharmaceutical development and manufacturing, with Chris highlighting dramatic timeline reductions in drug development and Aditya focusing on AI-optimized biomanufacturing processes
Speakers: Chris Ciauri, Dr. Aditya Yad
AI can reduce drug development timelines from eight weeks to eight hours – Drug Development Acceleration
Manufacturing Optimization
Both speakers emphasize the importance of strategic, thoughtful AI implementation rather than haphazard adoption, with Chris focusing on clear role boundaries and Aditya highlighting the need for leadership training and homogeneous application across industries
Speakers: Chris Ciauri, Dr. Aditya Yad
Strategic AI Implementation
Human-AI Collaboration
Both speakers recognize that successful AI adoption in healthcare requires building trust and proper education – Aditya focuses on public trust in data governance while Sabine emphasizes balanced workforce training
Speakers: Dr. Aditya Yad, Dr. Sabine Kapasi
Data Privacy Trust
Workforce Training Balance
Unexpected Consensus
India as a global leader in AI adoption and innovation
Speakers: Chris Ciauri, Dr. Sabine Kapasi
Indian AI Adoption Leadership
Reverse Innovation Potential
It’s unexpected that a representative from a major US AI company would position India not just as a market but as the second-highest adopter globally and a potential model for the world. This consensus suggests a shift from viewing Global South countries as recipients to recognizing them as innovation leaders
Small-scale, AI-optimized production can compete with large-scale manufacturing
Speakers: Dr. Aditya Yad, Chris Ciauri
Production Efficiency
AI Model Evolution
The consensus that smaller, AI-controlled systems can be more efficient than traditional large-scale operations challenges conventional manufacturing wisdom and suggests a fundamental shift in how pharmaceutical production might evolve
Overall Assessment

The speakers demonstrate strong consensus on AI’s transformative potential in healthcare, the critical importance of safety and proper implementation, the economic challenges of prevention-focused care, and India’s leadership role in global AI adoption

High level of consensus with complementary perspectives rather than disagreements. The implications suggest that successful AI healthcare implementation requires coordinated efforts across technology development, policy frameworks, workforce training, and economic incentive alignment. The recognition of Global South countries as innovation leaders rather than just markets represents a significant shift in global technology discourse

Differences
Different Viewpoints
Scale and approach to AI model development
Speakers: Chris Ciauri
AI Model Evolution
Chris advocates for frontier labs focusing on large, general-purpose models like Claude that are continuously improved every 2.5 months, while acknowledging that smaller, targeted models will serve edge cases. However, there’s an implicit tension about whether the future lies in large general models or specialized smaller models for healthcare applications.
Implementation strategy for AI adoption
Speakers: Chris Ciauri, Dr. Aditya Yad
Human-AI Collaboration Strategic AI Implementation
Chris emphasizes clear boundaries where AI handles preparation and clinicians handle judgment, while Dr. Aditya focuses on training CEOs and leadership for strategic AI implementation from the start. These represent different approaches to AI adoption – one focused on maintaining professional boundaries, the other on comprehensive organizational transformation.
Unexpected Differences
Data governance priorities
Speakers: Chris Ciauri, Dr. Aditya Yad
AI Safety Priority Data Privacy Trust
While both speakers acknowledge data and safety concerns, they emphasize different aspects. Chris focuses on AI model safety and uncertainty acknowledgment, while Dr. Aditya emphasizes public trust in data governance. This represents different priorities in addressing AI safety – technical model behavior versus public acceptance and trust.
Overall Assessment

The discussion shows remarkably high consensus among speakers on the potential of AI in healthcare, with disagreements mainly centered on implementation approaches rather than fundamental goals. The main areas of difference involve technical strategies (large vs. small models), organizational implementation approaches (boundary-setting vs. comprehensive transformation), and priorities in addressing safety concerns (technical safety vs. public trust).

Low to moderate disagreement level. The speakers demonstrate strong alignment on core objectives of improving healthcare through AI, reducing costs, and ensuring safety. Disagreements are primarily tactical and complementary rather than conflicting, suggesting different but potentially compatible approaches to achieving shared goals. This level of agreement is positive for advancing AI in healthcare, as it indicates broad consensus on direction while allowing for diverse implementation strategies.

Partial Agreements
Both speakers agree that AI can reduce healthcare costs and improve efficiency, but they focus on different mechanisms. Chris emphasizes reducing administrative burden to free up clinician time, while Dr. Aditya focuses on optimizing expensive healthcare systems through process improvements and drug development acceleration.
Speakers: Chris Ciauri, Dr. Aditya Yad
Healthcare AI can reduce administrative burden and improve patient care time – Administrative Burden Reduction
AI can optimize healthcare costs while maintaining quality in expensive systems – Cost Optimization
Both agree on the need for careful healthcare workforce training for AI adoption, but approach it differently. Chris focuses on maintaining clear role boundaries between AI and clinicians, while Dr. Sabine emphasizes the challenge of balancing adoption with appropriate skepticism and safety.
Speakers: Chris Ciauri, Dr. Sabine Kapasi
Workforce Training Balance Human-AI Collaboration
Both recognize the economic challenges in shifting healthcare from treatment to prevention, but focus on different aspects. Dr. Aditya discusses how insurance economics haven’t favored expensive diagnostics over treatments, while Dr. Sabine highlights the fundamental challenge of making people pay for healthcare before they feel they need it.
Speakers: Dr. Aditya Yad, Dr. Sabine Kapasi
Diagnostic Economics Preventive Care Economics
Takeaways
Key takeaways
AI has transformative potential in healthcare through administrative burden reduction, improved access, and accelerated drug discovery, but safety and human oversight remain paramount
India’s digital healthcare infrastructure and high AI adoption rates position it as a leader for global healthcare AI implementation, with potential to influence worldwide standards
The Switzerland-India partnership, including a $100 billion investment commitment, creates significant opportunities for cross-border healthcare innovation collaboration
AI applications span from reducing drug development timelines from weeks to hours, to enabling multilingual healthcare access, to optimizing manufacturing processes
Healthcare workforce education must balance AI adoption with appropriate clinical skepticism, maintaining the principle that AI supports preparation while clinicians retain judgment authority
Prevention-focused healthcare and screening programs face economic challenges that AI cost reduction could help address, potentially shifting from treatment-centered to prevention-centered systems
Resolutions and action items
Anthropic has opened operations in Bengaluru and trained Claude on 12 Indic languages to address multilingual healthcare access challenges
Switzerland launched a CEO leadership program to train companies on strategic AI implementation from the start of their development process
The discussion established that AI models in healthcare must acknowledge uncertainty and never use patient data for training purposes as non-negotiable safety standards
Unresolved issues
How to effectively educate healthcare ecosystems to adopt screening and preventive care when patients don’t feel an immediate need for healthcare services
How to scale AI training and adoption across thousands of small and medium enterprises in healthcare (Switzerland faces the challenge of reaching 40,000 SMEs)
Data trust and privacy concerns remain ongoing debates that need resolution before widespread AI healthcare adoption
The balance between frontier AI models and smaller, targeted language models for specific healthcare use cases needs further development
How to ensure homogeneous and strategic AI application across healthcare systems when adoption rates vary significantly
Suggested compromises
AI should be positioned for preparation and support while clinicians maintain final judgment and decision-making authority
A hybrid approach in which frontier AI models handle complex cases while smaller models serve edge cases and specific applications
Industry-led training programs combined with government support to bridge the gap between AI capability and workforce readiness
Gradual implementation starting with administrative and workflow optimization before moving to more complex clinical applications
Thought Provoking Comments
AI can do a lot of good. It also can create a lot of harm if done carelessly… we like the tension between capability of AI models but also making sure that the safety is right so that we can deliver on some of the opportunities.
This comment immediately established the critical balance between AI’s transformative potential and its risks in healthcare, setting a responsible tone for the entire discussion. It moved beyond typical tech optimism to acknowledge real dangers.
This framed the entire conversation around responsible AI development rather than just capabilities. It led to deeper discussions about safety protocols, the importance of models saying ‘I don’t know,’ and established trust as a foundational requirement for healthcare AI adoption.
Speaker: Chris Ciauri
In the U.S., only 30% of a clinician’s time is spent on patient care. The rest is on paperwork and administrative tasks… In India, the average primary care visit only lasts two minutes.
This stark comparison revealed fundamentally different healthcare challenges between developed and developing nations – administrative burden vs. access issues. It demonstrated that AI solutions cannot be one-size-fits-all.
This shifted the discussion from generic AI applications to region-specific solutions. It led to exploration of how the same technology (AI) must address completely different problems – efficiency in the US versus accessibility in India – and influenced later discussions about multilingual capabilities and scalable solutions.
Speaker: Chris Ciauri
Switzerland has been ranked number one in the Global Innovation Index for the past 15 years straight… but Switzerland is not a big domestic market, right? We are 9 million people… That’s not even Delhi.
This exchange highlighted the paradox of innovation leadership coming from small markets and the necessity of global thinking from day one. It challenged assumptions about where innovation originates and scales.
This led to a deeper exploration of how different market sizes drive different innovation strategies. It connected to discussions about the Switzerland-India partnership, the $100 billion investment commitment, and how small, efficient systems can complement large, scalable markets.
Speaker: Dr. Aditya Yad and Dr. Sabine Kapasi
We had a patient who was a dengue patient… the drugs were cheaper than the diagnostic tests, and the patient could afford only those, so for the system at that time it was a trial-and-error problem, but the clinical values were stark enough for us to know that this was not actually a risky case.
This real-world clinical story illustrated the harsh economic realities of healthcare in resource-constrained settings, where treatment decisions are driven by cost rather than optimal diagnostics. It humanized the abstract discussion of AI applications.
This personal anecdote grounded the theoretical AI discussion in practical reality. It led to conversations about how AI could make diagnostics more affordable and accessible, and how clinical intelligence combined with new biomarkers could transform care delivery in underserved regions.
Speaker: Dr. Sabine Kapasi
AI is for preparation. Clinicians are for judgment… Claude’s going to know when to say, I don’t know. I’m not certain of that… Claude will never use someone’s patient data to train our models.
This clearly delineated the boundaries between AI capabilities and human responsibility in healthcare, addressing fears about AI replacing doctors while establishing crucial ethical boundaries around data use.
This comment provided a framework for responsible AI deployment that other participants could build upon. It led to discussions about training healthcare workers, building trust in AI systems, and the importance of maintaining human oversight in clinical decision-making.
Speaker: Chris Ciauri
India has the highest adoption of Claude outside the U.S. It’s second in the world for Claude adoption… in the last four months, the usage of Claude and the revenue for Anthropic have doubled in India.
This challenged the framing of India as a ‘low-income’ country and revealed it as a leading AI adopter, contradicting assumptions about technology adoption patterns in the Global South.
This reframed India’s position from a recipient of technology to a leader in adoption and potentially innovation. It led to discussions about India as a model for other countries and the potential for reverse innovation flowing from Global South to developed markets.
Speaker: Chris Ciauri
Overall Assessment

These key comments transformed what could have been a typical ‘AI will revolutionize healthcare’ discussion into a nuanced exploration of regional differences, ethical responsibilities, and practical implementation challenges. The conversation evolved from broad promises to specific use cases, from technological capabilities to human-centered design, and from Western-centric assumptions to a more globally inclusive perspective. The interplay between the clinician’s real-world experience, the technologist’s safety-first approach, and the policy expert’s cross-cultural insights created a rich dialogue that addressed both the transformative potential and the complex realities of implementing AI in healthcare across different economic and cultural contexts.

Follow-up Questions
How to effectively train the healthcare workforce to adopt AI while maintaining appropriate skepticism and preventing direct patient action based on AI advice alone
This addresses the critical challenge of balancing AI adoption with patient safety, ensuring healthcare professionals can leverage AI tools while maintaining clinical judgment and avoiding over-reliance on AI recommendations
Speaker: Dr. Sabine Kapasi
How to educate healthcare ecosystems to implement preventive screening strategies that precede a felt need for healthcare
This explores the fundamental challenge of shifting healthcare from reactive treatment to proactive prevention, particularly important for population health management and cost reduction
Speaker: Dr. Sabine Kapasi
How AI solutions can be adapted for high-volume, time-constrained healthcare environments typical in Global South countries (like 200 patients per day with 2-minute consultations)
This addresses the need to customize AI healthcare solutions for resource-constrained settings with different workflow patterns than developed countries
Speaker: Dr. Sabine Kapasi
How to build trust around medical data usage and personal data privacy in AI healthcare systems
This is fundamental to widespread AI adoption in healthcare, as patient trust and data security concerns remain major barriers to implementation
Speaker: Dr. Aditya Yad
How to achieve homogeneous AI application across diverse healthcare organizations and convince smaller companies/SMEs to embrace AI adoption
This addresses the challenge of ensuring consistent AI implementation across varied organizational sizes and capabilities in the healthcare ecosystem
Speaker: Dr. Aditya Yad
Evolution of Large Language Models versus Small Language Models for targeted healthcare use cases over the next five years
This explores the technical architecture decisions that will shape how AI is deployed in healthcare, affecting everything from cost to performance to accessibility
Speaker: Dr. Sabine Kapasi
Role of countries like India in developing edge use cases and smaller AI models for healthcare applications
This examines how emerging markets might contribute to AI innovation in healthcare, potentially developing solutions that could benefit global health
Speaker: Dr. Sabine Kapasi
How AI can enable frontline workers and general practitioners to handle cases that would otherwise require specialist referral
This addresses healthcare workforce optimization and access improvement by using AI to enhance capabilities of less specialized healthcare providers
Speaker: Dr. Sabine Kapasi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.