Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT

Session at a glance
Summary, key points, and speakers overview

Summary

This discussion between Amish Devagon and Ankush Sabharwal, co-founder of a company developing Bharat GPT, focuses on India’s AI development strategy and its potential impact on jobs and society. Sabharwal emphasizes that AI should ultimately benefit humans and citizens, arguing that technology works best when it becomes invisible and integrated into daily life. When addressing concerns about AI eliminating jobs, he contends that AI will redefine rather than eliminate opportunities, suggesting that automation will enable faster problem-solving and allow businesses to create more solutions for enterprises.


A key challenge Sabharwal identifies is the need to shift from effort-based pricing models in Indian IT services to value-based pricing, as AI enables fewer people to accomplish the same work. The discussion highlights the importance of developing AI models that work in Indian languages, noting that only 10% of Indians speak English while the majority communicate in regional languages. Sabharwal presents Bharat GPT as a collective national asset built from data contributed by all Indians across different languages.


The conversation explores the concept of sovereign AI, which Sabharwal views as crucial for India’s independence from foreign technology dependence. He argues that India has natural advantages in AI development due to its large population generating vast amounts of data and Indians’ aspirational nature driving technology adoption. Sabharwal praises the Indian government’s AI policies, particularly highlighting free GPU access and funding for model development. He envisions India becoming a global hub for AI solutions and believes the country is well-positioned to lead in practical AI applications that solve real-world problems.


Key points

Major Discussion Points:


AI’s Impact on Employment and Business Models: Discussion of how AI will transform rather than eliminate jobs, with emphasis on shifting from effort-based pricing (per hour/day) to value-based pricing models in IT services, and the need for businesses to adapt their approaches.


Indian Language AI and Digital Inclusion: Focus on Bharat GPT’s capability to work in Indian languages rather than just English, addressing the fact that only 10% of Indians speak English, and the importance of making AI accessible to the broader Indian population.


Sovereign AI and India’s Self-Reliance: Extensive discussion about India’s need for technological independence through sovereign AI, leveraging India’s massive data generation from its 1.4-1.5 billion population, and reducing dependence on foreign AI systems.


India’s AI Infrastructure and Government Policy: Coverage of India’s AI development strategy, including government provision of free GPUs, funding for AI models, energy infrastructure needs, and the overall policy framework supporting AI innovation.


India’s Position in Global AI Leadership: Discussion of India’s potential to become a global AI hub, the success of the AI Summit in Delhi, and India’s competitive advantages in talent, data, and market adoption.


Overall Purpose:


The discussion aimed to explore India’s AI landscape, strategy, and future potential, with particular focus on how Indian companies like Bharat GPT are contributing to the country’s technological sovereignty and addressing uniquely Indian challenges through AI solutions.


Overall Tone:


The conversation maintained an optimistic and patriotic tone throughout, with both participants expressing strong confidence in India’s AI capabilities and future. The tone was collaborative and supportive, with the interviewer occasionally pushing for more critical perspectives, though the interviewee remained consistently positive about government policies and India’s prospects. The discussion ended on a particularly upbeat note with a rapid-fire round that reinforced the optimistic outlook.


Speakers

Amish Devagon: Role/Title not explicitly mentioned, appears to be an interviewer or journalist conducting the discussion


Ankush Sabharwal: Co-founder of a company working on AI technology, specifically mentioned as co-founder along with Manav Gandotra, involved in developing Bharat GPT (an AI model that works in Indian languages)


Additional speakers:


Manav Gandotra: Mentioned as co-founder of Ankush Sabharwal’s company, but did not directly participate in the recorded discussion


Full session report
Comprehensive analysis and detailed insights

This interview between journalist Amish Devagon and Ankush Sabharwal, who is involved with Bharat GPT development, took place at an AI Summit in Delhi and explores India’s artificial intelligence development, business transformation, and future prospects. The discussion covers practical challenges and opportunities as India positions itself in the global AI landscape.


Note: Some portions of this interview contain repetitive or unclear transcription that affects the completeness of certain responses.


AI’s Impact on Employment and Business Transformation


Sabharwal addresses concerns about AI-driven job displacement, arguing that AI will redefine rather than eliminate opportunities. His central premise is that AI enables faster problem-solving, which should lead to creating more solutions and consequently more opportunities for human contribution. He emphasizes that technology should ultimately benefit humans and citizens.


The conversation focuses significantly on the transformation required in India’s IT services sector. Sabharwal identifies a critical challenge: if AI enables two people to accomplish work that previously required 100, traditional hourly or daily rate pricing models become unsustainable. He advocates for a shift toward value-based pricing, where companies charge based on outcomes delivered rather than time invested. This approach would allow businesses to maintain profitability while providing enhanced value to clients, leveraging AI’s efficiency gains.
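The arithmetic behind this pricing shift can be sketched in a few lines. The numbers below are hypothetical: only the $20–40/hour range and the 100-to-2 headcount drop come from the discussion; the blended rate and monthly hours are assumptions for illustration.

```python
# Illustrative comparison of effort-based vs value-based pricing under
# AI-driven efficiency gains. All figures are hypothetical.

HOURLY_RATE = 30        # assumed blended rate in USD, within the $20-40 range cited
HOURS_PER_MONTH = 160   # assumed billable hours per person per month

def effort_based_revenue(headcount: int) -> int:
    """Revenue billed per person-hour: it shrinks as AI cuts the team size."""
    return headcount * HOURS_PER_MONTH * HOURLY_RATE

# Before AI: 100 people deliver the project.
before = effort_based_revenue(100)   # 100 * 160 * 30 = 480,000
# After AI: 2 people deliver the same outcome.
after = effort_based_revenue(2)      # 2 * 160 * 30 = 9,600

# Under value-based pricing the fee tracks the outcome delivered, not hours,
# so revenue stays anchored to the original value while delivery cost falls.
value_based_fee = before

print(f"effort-based, 100 people: ${before:,}")
print(f"effort-based,   2 people: ${after:,}")
print(f"value-based,    2 people: ${value_based_fee:,}")
```

The 50x revenue collapse under hourly billing is the gap Sabharwal argues value-based pricing closes.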


Indian Language AI and Accessibility


A key theme is the importance of developing AI systems that function in Indian languages rather than exclusively in English. Sabharwal provides rough estimates that only 10% of India’s population speaks English, while approximately 40-45% understand Hindi, highlighting the potential exclusion of vast population segments from English-centric AI systems.


He gives practical examples of real-time multilingual capabilities, such as the Prime Minister’s speeches being simultaneously translated into Tamil, Bengali, and other regional languages. Sabharwal positions Bharat GPT as belonging to all of India, built from collective contributions across the country’s linguistic diversity, framing it as a shared national resource rather than a corporate product.


Sovereign AI and Data Advantages


The discussion explores sovereign AI as central to India’s technological strategy. Sabharwal argues that India has natural advantages in developing independent AI capabilities, primarily due to data generation from its large population of 1.4-1.5 billion people. He positions data as the fundamental raw material for AI development.


Rather than viewing sovereign AI defensively, Sabharwal frames it as an opportunity for India to provide sovereign AI solutions to other countries, transforming India from a consumer of foreign AI technology into a potential global provider. He notes that Indians generate vast amounts of data through daily digital activities, which could provide valuable insights when processed through sovereign AI systems.


Government Support and Policy Framework


Sabharwal expresses strong appreciation for government AI initiatives, particularly highlighting the provision of free GPUs to developers and funding for AI model development. He characterizes the government’s approach as forward-thinking and proactive. When pressed by Devagon for critical feedback, Sabharwal initially offers only praise, leading to an exchange where the interviewer explicitly requests more critical analysis.


The conversation addresses infrastructure requirements, with Sabharwal noting the energy demands of AI systems – contrasting the 20 watts required by the human brain with the 1000 watts needed by GPU-based AI models. He also mentions the need for AI skilling initiatives across various sectors and suggests the government should “fund them who use our applications.”
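The scale of the energy gap quoted above is easy to quantify. A toy comparison follows; the wattage figures are the ones cited in the interview, while the 24-hour runtime is a hypothetical assumption for illustration.

```python
# Energy comparison quoted in the interview: a human brain runs on roughly
# 20 W, while a GPU serving an AI model draws on the order of 1000 W.
BRAIN_WATTS = 20
GPU_WATTS = 1000

# The GPU draws 50x the power of the brain.
ratio = GPU_WATTS / BRAIN_WATTS

# Hypothetical illustration: energy used over 24 hours of continuous
# operation, in kilowatt-hours (power in kW multiplied by hours).
hours = 24
brain_kwh = BRAIN_WATTS / 1000 * hours   # 0.48 kWh
gpu_kwh = GPU_WATTS / 1000 * hours       # 24.0 kWh

print(f"GPU/brain power ratio: {ratio:.0f}x")
print(f"Brain, 24h: {brain_kwh:.2f} kWh; GPU, 24h: {gpu_kwh:.1f} kWh")
```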


When asked about establishing an AI Ministry, Sabharwal indicates this isn’t necessary, suggesting existing support mechanisms are adequate.


India’s Global AI Positioning


Both participants express confidence in India’s potential for global AI leadership. Sabharwal suggests ambitious timelines, indicating India could become a global AI hub relatively quickly. He references meeting the Prime Minister at “JIPA” and notes the presence of AI ministers from the UK, Canada, and France at the Delhi summit as evidence of India’s growing international prominence in AI discussions.


Sabharwal argues that India’s focus on solving real-world problems through AI applications, rather than merely developing platforms, will create sustainable competitive advantages globally. To critics, his advice is direct: “start making yourself a fool.”


Rapid-Fire Insights


The interview concludes with quick responses from Sabharwal:


– He believes founders and entrepreneurs will remain central to AI evolution because “AI was created by human intelligence, so human intelligence will control”


– Predicts Bangalore will become India’s AI capital


– Claims India already has the most AI users globally


– Expresses confidence in job security for those adapting to AI


Conclusion


This interview reveals an optimistic vision for India’s AI development that emphasizes technological sovereignty, multilingual accessibility, and practical problem-solving. The discussion demonstrates confidence in government support and India’s natural advantages in AI development, particularly its large population and data generation capacity.


Sabharwal’s perspective presents AI as a transformative tool that can enhance human capabilities and solve social challenges while positioning India as a global leader. The emphasis on inclusive, multilingual AI development and value-based business models reflects an approach that prioritizes broad accessibility and real-world applications over purely technological advancement.


Session transcript
Complete transcript of the session
Ankush Sabharwal

Absolutely, whatever we do, it should be for humans only. If we are doing something for businesses, ultimately those businesses are helping humans, which means we are benefiting citizens. And safety and inclusivity should remain, and I think technology should be the way it should be, which is not visible. And we keep saying AI again and again; we are not afraid of it. Some people ask whether jobs will go away, or what the risk is. We are already using AI, knowingly or unknowingly. If you look at any app, it has AI intervention. Now, any product at home, TV, fridge, eventually we are bringing AI there too. So technology is for everyone. And Manav is our co-founder.

Manav Gandotra, it’s a big coincidence for us.

Amish Devagon

Yes, Manav is your co-founder. But there is a big question that I want to ask on behalf of Indians. AI will not eliminate opportunity; it will redefine opportunity. But there is a question being raised again and again: AI will come, jobs will go, a mass exodus will happen in corporates. What do you think about it?

Ankush Sabharwal

Sir, everyone is saying that AI is automating work, and that's why we think jobs will go. But if it is automating, the problems we are solving with technology are getting solved quickly. So why don't we think that we will solve problems quickly with AI, and solve more problems? If we provide solutions for businesses, those solutions will be made quickly and will be good, and those businesses will make more solutions so they can give more benefit to enterprises. So I think the work will be done, and maybe the business model will have to change. I think that the effort-based IT services in India…

Amish Devagon

What do you mean by the business model will have to be changed?

Ankush Sabharwal

I think most IT services in India are rated per man-day or per hour. Rates are there, right? $20 per hour, $40 per hour. So if I do the same work, where earlier 100 people used to work, now maybe 10 people will do it, even 2 people will do it. So that business model… I am giving the same value. If my rate is fixed per man-day, then I will get less money as a company, but I am giving the same value to my client. Right? So you have to discuss this with them: I will provide you more value; don't give me per-hour or per-day pricing, we will do value-based pricing. And what will happen with that?

Our clients will get more solutions and more benefits, and eventually we will be able to make more money.

Amish Devagon

An interesting fact is that most AI models in the world work in English, but your AI model works in Indian languages. This is very important for India, because we have a lot of languages here: two languages in Bihar, three languages in UP, two languages in Tamil Nadu. How do you see this? Do you think this is very important to grow the Indian AI story? What's your view?

Ankush Sabharwal

Absolutely. In India, I think only 10% of people know English or speak it. Of the other 90%, Hindi is known by around 40-45%. [The remainder of this response is repetitive and unclear in the transcription.]

Amish Devagon

Very great, very very. Now you see, the Prime Minister is speaking in Hindi. The person in Tamil Nadu wants to hear, and it is coming in their language. The person in Assam wants to hear; it is coming in their language. The person in Gujarat wants to hear; it is coming in their language. So this is a kind of convenience.

Ankush Sabharwal

Absolutely. When we met the Honorable Prime Minister at JIPA, I was speaking in Hindi, and what I was saying was coming out in Tamil and Bengali at the same time. So if we want to give the benefit of technology to everyone, it has to become a collective product. I think that Bharat GPT is not ours; it belongs to the whole of India. Everyone has contributed by giving their language and voice, so we are helping them back.

Amish Devagon

Do you think AI will enable people to do their daily work and add convenience to their lives? We all know that when the mobile phone came, it was a communication machine; today it is a convenience, with all the apps and everything. So will AI also be a convenience in the next five, six years?

Ankush Sabharwal

I think it will happen in five, six years; in fact, I think it will happen from today. So many companies have come here, and you are all seeing which is the better technology, which is the better platform. I think your next session should be just about use cases. Now we have technology and talent, and people are also ready to use AI products. So from today, I think people will have to start making daily-use apps and daily-use products, so that everyone will benefit.

Amish Devagon

Sovereign AI means we are not dependent on any other country. Prime Minister Modi was also talking about this again and again. In such a situation, sovereign AI is very important for us to understand. Why has this issue become so big? Everyone is talking about sovereign AI.

Ankush Sabharwal

Absolutely. And let me tell you the truth: we would be better off even without investing that much money.

Amish Devagon

Okay. Why do you think so?

Ankush Sabharwal

What raw material is needed for AI?

Amish Devagon

Data.

Ankush Sabharwal

Yes. Data. And right now, we are producing data. Even audiences are producing data. You produce content on your channel, and it's not just you speaking and creating content; the people who are listening are also creating content. By virtue of this beautiful number of people, a population of 1.4-1.5 billion, we are all producing data just by living life. So we have this much data. Will it make a model? It will create a platform. And we Indians are very aspirational. We want to grow fast; we want to use every technology. For most foreign apps, most of the users are in India. So if we create a platform or app, it will be used in India too.

Amish Devagon

So what direction do you see the AI story of India going in? And in the next two years, where do you see it standing? What is your view? Be very very rational.

Ankush Sabharwal

First of all, I think the whole world has to adopt AI. Let's be users of AI. Enough of the platforms; ask the platform where it is solving a real-world problem. If we solve real-world problems with AI, I think India will have a huge contribution in creating real-world applications with AI. The AI applications and products we make will of course be used in India. I think India will be considered a hub for AI solutions. If someone wants AI solutions anywhere in the world, I think India will be the preferred choice in the next few months.

Amish Devagon

Right now in Delhi, all the CEO heads of big tech companies are here. Do you think AI Summit has been successful?

Ankush Sabharwal

Yes, absolutely. When I was coming here, a foreigner was telling me that he has attended many summits in the US and UK, but he has never seen a better summit than this. This is big, quite big. We leaders in India meet daily. The AI Minister of the UK is here; we met him. The AI Minister of Canada is here; we met him. The AI Minister of France is here. They are all here. So I think India is now a focal point.

Amish Devagon

So, there should be an AI Ministry in India too.

Ankush Sabharwal

I think it should come soon. There are ministers, Ashwini Vaishnaw sir; we could add one more role to them, but they already have a lot of roles.

Amish Devagon

But, in the next 3 -5 years, what are the main targets for India to become the first AI country? What’s your view on this?

Ankush Sabharwal

I think, first of all, energy. Our brain is very efficient; it runs on only 20 watts. But the GPU doesn't run on 20 watts; it runs on 1000 watts. So to run any AI model, a lot of infra and energy are needed. The Honorable Prime Minister's vision is working well there; it is visible and it is working. In energy, infrastructure, and compute, as you saw in the IndiaAI Mission, many GPUs are coming. After that, foundational models have also been launched, and applications are being launched. So these are 4-5 things. Talent: I think there should be a sector for AI skilling, and I think they are also doing it in the education department and MSME.

So all the other factors that are important for AI are, I think, being focused on in India.

Amish Devagon

Please answer this question a bit critically. What would you say about the AI policy of the Government of India? Are they on the right track, or is there a change they should make? What is your take on this? And be critical; give the government some advice.

Ankush Sabharwal

Sir, may I ask: I don't think there is any country in the world whose government has given its citizens… In India's context, yes. First, I am saying that no country has given its technologists, innovators, and entrepreneurs free GPUs. And on top of that, after GPUs were given, money was given to make models, and doors were opened for adoption of our applications. I think that what we entrepreneurs and techies want, the government is already giving. And I think they are thinking ahead. Whatever policy they launch, I find I was thinking of it but hadn't articulated it yet; before the request is even made, they launch it. The Indian government is already ahead.

Amish Devagon

This is a politically correct answer. I told you to give advice, and you are saying they are doing everything right.

Ankush Sabharwal

Okay, I will give advice. Now stop scolding us. Fund those who use our applications.

Amish Devagon

That's a good one. That's a very, very good one. If India is successful in making sovereign AI, will India get a lot of benefit in the long term?

Ankush Sabharwal

Absolutely. If sovereign AI comes to India, we'll have the control; we'll get the benefit. But see it from a business-model angle: we can provide sovereign AI to other countries as well, and that work has started. We're making our own sovereign AI for ourselves, and we're making it for others as well.

Amish Devagon

We've come to the end of the conversation, so let's do a rapid-fire round. Quickly, a few questions. Which country has the most AI users in the world?

Ankush Sabharwal

India.

Amish Devagon

India. Which city do you think will be the AI capital? We have an IT capital; which city will be the AI capital?

Ankush Sabharwal

I think it will be Bangalore. I can be biased.

Amish Devagon

You are saying Bangalore. Okay. Now, a personal question. Durandhar film or cricket match?

Ankush Sabharwal

Neither.

Amish Devagon

Neither? So, even after AI, which job will not end? Any job. One word. Which will not end?

Ankush Sabharwal

I think it will be founders.

Amish Devagon

Founders. Wow. So who will control AI in the world after 50 years?

Ankush Sabharwal

I think the fear is that AI will control, but we will control. I believe that AI was created by human intelligence, so human intelligence will control.

Amish Devagon

Well said. After AI comes, whose job will be easier? Doctor's or engineer's?

Ankush Sabharwal

What do you think?

Amish Devagon

The engineer's will be easier, but I am biased.

Ankush Sabharwal

Yes, he will be biased because he is making it himself.

Amish Devagon

Last question. What do you want to say about Bharat GPT? Do you think that the time has come for Bharat GPT?

Ankush Sabharwal

Absolutely. I am saying that it is not ours; it is of the whole country. All the data in it is of the whole country. All the languages that have contributed to it are of the whole country. We have not given any money to anyone to procure data. So it is… now we are giving it free. It is on Hugging Face… so all of us can use it freely.

Amish Devagon

What would you like to say to the critics?

Ankush Sabharwal

Sir, we don't have time to think about them. We don't have time.

Amish Devagon

You don't want to waste your time? One line. What would be one line for them?

Ankush Sabharwal

I am saying: start making yourself a fool.

Amish Devagon

Okay, that's a good one. A big round of applause. Thank you so much. You came here and spoke for yourself. Thank you. Pleasure talking to you.

Ankush Sabharwal

Thank you so much. Thank you. Amish ji, thank you very much. And thank you for keeping my words.

Amish Devagon

You actually bombarded him.

Speakers Analysis
Detailed breakdown of each speaker's arguments and positions

Ankush Sabharwal
10 arguments · 155 words per minute · 1645 words · 633 seconds
Argument 1
AI will redefine rather than eliminate opportunities, allowing faster problem-solving and more solutions for businesses
EXPLANATION
Sabharwal argues that AI automation will enable faster problem-solving and allow businesses to create more solutions rather than eliminating jobs. He believes this will lead to better and quicker solutions that provide more benefits to enterprises.
EVIDENCE
He explains that if AI automates work and solves problems quickly, companies can solve more problems and provide more solutions to businesses, ultimately benefiting enterprises more effectively.
MAJOR DISCUSSION POINT
AI’s Impact on Employment and Business Models
AGREED WITH
Amish Devagon
Argument 2
Indian IT services must shift from effort-based pricing to value-based pricing as AI reduces manpower requirements
EXPLANATION
Sabharwal contends that the traditional hourly pricing model ($20-40 per hour) in Indian IT services will become obsolete as AI reduces workforce needs from 100 people to potentially 2-10 people. He advocates for value-based pricing where companies charge based on the value delivered rather than time spent.
EVIDENCE
He provides specific examples of current hourly rates ($20-40 per hour) and explains how the same value can now be delivered with significantly fewer people (from 100 to 2-10 people), necessitating a pricing model change to maintain profitability while providing equal or greater value to clients.
MAJOR DISCUSSION POINT
AI’s Impact on Employment and Business Models
Argument 3
Only 10% of Indians speak English, making local language AI essential for reaching 90% of the population
EXPLANATION
Sabharwal emphasizes that English proficiency is limited to only 10% of India’s population, with Hindi known by around 40-45%. This makes Indian language AI crucial for serving the vast majority of the population who cannot access English-based AI systems.
EVIDENCE
He provides specific statistics: only 10% of Indians speak English, and around 40-45% know Hindi, highlighting the need for multilingual AI to serve the remaining population.
MAJOR DISCUSSION POINT
Indian Language AI and Inclusivity
AGREED WITH
Amish Devagon
Argument 4
India has advantages in creating sovereign AI due to its massive data generation from 1.4-1.5 billion population
EXPLANATION
Sabharwal argues that India’s large population of 1.4-1.5 billion people naturally generates massive amounts of data, which is the raw material needed for AI development. This gives India a significant advantage in creating its own AI systems without heavy investment.
EVIDENCE
He explains that data is the raw material for AI, and India’s population of 1.4-1.5 billion people produces data simply by living their lives, creating content, and using technology. He notes that most foreign apps have their largest user base in India.
MAJOR DISCUSSION POINT
Sovereign AI and India’s Independence
AGREED WITH
Amish Devagon
Argument 5
India can provide sovereign AI solutions to other countries as a business opportunity
EXPLANATION
Sabharwal sees sovereign AI not just as a tool for India’s independence but as a business model where India can offer sovereign AI solutions to other countries. This work has already begun according to him.
EVIDENCE
He mentions that the work of providing sovereign AI solutions to other countries has already started, indicating practical implementation of this business model.
MAJOR DISCUSSION POINT
Sovereign AI and India’s Independence
Argument 6
India should focus on creating real-world AI applications and solutions rather than just platforms
EXPLANATION
Sabharwal believes India should prioritize developing AI applications that solve real-world problems rather than focusing solely on creating platforms. He sees this as India’s path to becoming a global leader in AI solutions.
EVIDENCE
He emphasizes the need to move beyond platform creation to solving actual real-world problems with AI, suggesting this approach will give India a significant contribution to global AI development.
MAJOR DISCUSSION POINT
India’s AI Future and Global Position
Argument 7
India has potential to become the global hub for AI solutions within the next few months
EXPLANATION
Sabharwal expresses confidence that India will become the preferred choice globally for AI solutions in the near future. He believes India’s focus on real-world applications will establish it as a hub for AI solutions worldwide.
EVIDENCE
He states that the AI applications and products made in India will be used domestically and internationally, positioning India as the preferred choice for AI solutions globally.
MAJOR DISCUSSION POINT
India’s AI Future and Global Position
AGREED WITH
Amish Devagon
Argument 8
The Indian government’s AI policy and support mechanisms are effective and forward-thinking
EXPLANATION
Sabharwal praises the Indian government’s AI initiatives, stating that no other country provides free GPUs to its citizens, along with funding for model development and support for application adoption. He believes the government is thinking ahead of entrepreneurs and technologists.
EVIDENCE
He cites specific government support: free GPUs provided to citizens, funding for model development, and opening doors for application adoption. He mentions the DIA mission and the launch of foundational models and applications as examples of government initiatives.
MAJOR DISCUSSION POINT
Government AI Policy and Support
DISAGREED WITH
Amish Devagon
Argument 9
The government should also fund organizations that use AI applications, not just those who create them
EXPLANATION
When pressed for critical feedback, Sabharwal suggests that the government should extend funding support to organizations that implement and use AI applications, in addition to supporting those who develop them.
EVIDENCE
This was his specific advice when asked to be critical of government policy, suggesting a gap in funding for AI application users versus creators.
MAJOR DISCUSSION POINT
Government AI Policy and Support
Argument 10
Bharat GPT belongs to the entire country as it uses data and languages contributed by all Indians
EXPLANATION
Sabharwal emphasizes that Bharat GPT is not owned by his company but belongs to all of India, as it was built using data and linguistic contributions from the entire population. He sees it as a collective national asset that should benefit everyone.
EVIDENCE
He explains that no money was paid to procure data, all languages contributed are from across the country, and the platform is being made available for free use, demonstrating its nature as a public good.
MAJOR DISCUSSION POINT
Bharat GPT as National Asset
AGREED WITH
Amish Devagon
Amish Devagon
7 arguments, 167 words per minute, 905 words, 323 seconds
Argument 1
AI automation raises concerns about mass job losses in corporates
EXPLANATION
Devagon raises the widespread concern that AI implementation will lead to significant job displacement and mass exodus from corporate environments. He presents this as a major question being raised repeatedly by Indians about AI’s impact on employment.
EVIDENCE
He frames this as a question being asked ‘again and again’ on behalf of Indians, indicating widespread public concern about AI-driven job losses.
MAJOR DISCUSSION POINT
AI’s Impact on Employment and Business Models
Argument 2
Most global AI models work in English, but Indian language AI is crucial for India’s diverse linguistic landscape
EXPLANATION
Devagon highlights that while most AI models globally operate in English, India’s linguistic diversity requires AI systems that can work in Indian languages. He emphasizes this as very important for India’s AI development story.
EVIDENCE
He points out India’s linguistic complexity, mentioning that Bihar has two languages, UP has three languages, and Tamil Nadu has two languages, demonstrating the need for multilingual AI systems.
MAJOR DISCUSSION POINT
Indian Language AI and Inclusivity
AGREED WITH
Ankush Sabharwal
Argument 3
AI language translation enables real-time communication across different Indian languages, as demonstrated with the Prime Minister’s speeches
EXPLANATION
Devagon illustrates how AI language translation can provide convenience by allowing people from different linguistic regions to hear content in their native languages. He uses the example of the Prime Minister’s speeches being translated in real-time.
EVIDENCE
He provides specific examples of how a person from Tamil Nadu, Assam, or Gujarat can hear the Prime Minister’s Hindi speech translated into their respective languages simultaneously, demonstrating practical AI language translation benefits.
MAJOR DISCUSSION POINT
Indian Language AI and Inclusivity
AGREED WITH
Ankush Sabharwal
Argument 4
Sovereign AI is important to ensure India doesn’t depend on other countries for AI technology
EXPLANATION
Devagon emphasizes the strategic importance of sovereign AI for India’s technological independence, referencing Prime Minister Modi’s repeated emphasis on this issue. He sees this as crucial for national self-reliance in AI technology.
EVIDENCE
He mentions that Prime Minister Modi has been talking about sovereign AI ‘again and again,’ indicating high-level government priority for AI independence.
MAJOR DISCUSSION POINT
Sovereign AI and India’s Independence
AGREED WITH
Ankush Sabharwal
Argument 5
India could become the first AI country in 3-5 years with proper focus on energy, infrastructure, and talent development
EXPLANATION
Devagon inquires about India’s potential to become the leading AI nation within 3-5 years, focusing on the main targets and requirements for achieving this goal, particularly in terms of infrastructure and human resources.
EVIDENCE
He specifically mentions energy, infrastructure, and talent development as key areas that need focus for India to achieve AI leadership status.
MAJOR DISCUSSION POINT
India’s AI Future and Global Position
AGREED WITH
Ankush Sabharwal
Argument 6
India needs a dedicated AI ministry similar to other countries
EXPLANATION
Devagon suggests that India should establish a dedicated AI ministry, noting that other countries have AI ministers and ministries. He sees this as necessary for proper AI governance and development coordination.
EVIDENCE
He mentions meeting AI ministers from UK, Canada, and France, indicating that these countries have dedicated AI ministries, and suggests India should follow suit.
MAJOR DISCUSSION POINT
Government AI Policy and Support
Argument 7
The time has come for Bharat GPT to serve India’s AI needs
EXPLANATION
Devagon affirms that the timing is right for Bharat GPT to play a significant role in meeting India’s AI requirements, positioning it as a crucial national AI asset.
MAJOR DISCUSSION POINT
Bharat GPT as National Asset
AGREED WITH
Ankush Sabharwal
Agreements
Agreement Points
Indian language AI is crucial for India’s diverse population
Speakers: Ankush Sabharwal, Amish Devagon
Only 10% of Indians speak English, making local language AI essential for reaching 90% of the population
Most global AI models work in English, but Indian language AI is crucial for India’s diverse linguistic landscape
Both speakers agree that multilingual AI systems are essential for India given that only 10% of the population speaks English and the country has significant linguistic diversity across different states
AI will provide convenience and enable daily work efficiency
Speakers: Ankush Sabharwal, Amish Devagon
AI will redefine rather than eliminate opportunities, allowing faster problem-solving and more solutions for businesses
AI language translation enables real-time communication across different Indian languages, as demonstrated with the Prime Minister’s speeches
Both speakers view AI as a tool that will enhance convenience and efficiency in daily life and work, similar to how mobile phones evolved from communication devices to comprehensive convenience platforms
Sovereign AI is strategically important for India
Speakers: Ankush Sabharwal, Amish Devagon
India has advantages in creating sovereign AI due to its massive data generation from 1.4-1.5 billion population
Sovereign AI is important to ensure India doesn’t depend on other countries for AI technology
Both speakers emphasize the strategic importance of developing indigenous AI capabilities to ensure technological independence and leverage India’s natural advantages
Bharat GPT represents a national AI asset
Speakers: Ankush Sabharwal, Amish Devagon
Bharat GPT belongs to the entire country as it uses data and languages contributed by all Indians
The time has come for Bharat GPT to serve India’s AI needs
Both speakers view Bharat GPT as a collective national resource that belongs to all Indians and is ready to serve the country’s AI requirements
India has potential to become a global AI leader
Speakers: Ankush Sabharwal, Amish Devagon
India has potential to become the global hub for AI solutions within the next few months
India could become the first AI country in 3-5 years with proper focus on energy, infrastructure, and talent development
Both speakers express confidence in India’s potential to achieve global AI leadership, with Sabharwal focusing on near-term solutions hub status and Devagon on medium-term comprehensive AI leadership
Similar Viewpoints
While Devagon raises concerns about job displacement, both speakers ultimately agree that AI will transform rather than simply eliminate employment opportunities, requiring adaptation rather than causing mass unemployment
Speakers: Ankush Sabharwal, Amish Devagon
AI will redefine rather than eliminate opportunities, allowing faster problem-solving and more solutions for businesses
AI automation raises concerns about mass job losses in corporates
Both speakers support strong government involvement in AI development, with Sabharwal praising current efforts and Devagon suggesting institutional strengthening through dedicated AI ministry
Speakers: Ankush Sabharwal, Amish Devagon
The Indian government’s AI policy and support mechanisms are effective and forward-thinking
India needs a dedicated AI ministry similar to other countries
Unexpected Consensus
Government AI policy effectiveness
Speakers: Ankush Sabharwal, Amish Devagon
The Indian government’s AI policy and support mechanisms are effective and forward-thinking
The government should also fund organizations that use AI applications, not just those who create them
Despite Devagon’s role as an interviewer pressing for critical assessment, there was unexpected consensus on the government’s effectiveness. Even when pushed to be critical, Sabharwal’s only suggestion was to extend funding to AI users, indicating strong satisfaction with current policies
Rapid timeline for AI adoption and implementation
Speakers: Ankush Sabharwal, Amish Devagon
India should focus on creating real-world AI applications and solutions rather than just platforms
India has potential to become the global hub for AI solutions within the next few months
Both speakers show unexpected consensus on very aggressive timelines for AI implementation and India’s global positioning, with Sabharwal suggesting India could become a global AI hub within months rather than years
Overall Assessment

The discussion shows remarkably high consensus between the interviewer and interviewee across all major AI-related topics, including the importance of multilingual AI, sovereign AI development, government policy effectiveness, and India’s potential for global AI leadership

Very high consensus level with minimal disagreement. This strong alignment suggests broad stakeholder agreement on India’s AI strategy and priorities, which could facilitate rapid policy implementation and coordinated national AI development efforts. The consensus spans technical, policy, and strategic dimensions of AI development.

Differences
Different Viewpoints
Government policy criticism and transparency
Speakers: Amish Devagon, Ankush Sabharwal
The Indian government’s AI policy and support mechanisms are effective and forward-thinking
Request for critical assessment of government AI policy
Devagon repeatedly pressed Sabharwal to provide critical feedback on government AI policy, calling his initial response ‘politically correct’ and asking him to ‘give advice’ and ‘be critical.’ Sabharwal initially provided only praise for government initiatives, showing reluctance to offer substantive criticism until pressed multiple times.
Unexpected Differences
Interviewer’s persistent challenge to provide government criticism
Speakers: Amish Devagon, Ankush Sabharwal
Request for critical assessment of government AI policy
The Indian government’s AI policy and support mechanisms are effective and forward-thinking
It was unexpected that the interviewer would so persistently challenge the interviewee to criticize the government, even calling his response ‘politically correct’ and demanding he ‘be critical.’ This created an unusual dynamic where the interviewer was pushing for negative commentary rather than accepting the positive assessment, which is uncommon in typical interview formats.
Overall Assessment

The discussion showed minimal direct disagreement, with most tension arising from the interviewer’s attempts to elicit critical commentary about government policy and a different framing of AI’s employment impact

Low level of substantive disagreement. The format was primarily interview-based rather than debate-oriented, with disagreements mainly around communication style and framing rather than fundamental policy positions. The main implication is that there appears to be broad consensus on India’s AI development direction, with differences mainly in how optimistically or critically to assess current progress and government support.

Partial Agreements
Both speakers acknowledge that AI will significantly impact employment, but they approach it differently. Devagon raises concerns about job losses and mass exodus from corporates, while Sabharwal reframes this as opportunity redefinition rather than elimination. They agree AI will change the employment landscape but disagree on whether this should be viewed as primarily concerning or optimistic.
Speakers: Amish Devagon, Ankush Sabharwal
AI automation raises concerns about mass job losses in corporates
AI will redefine rather than eliminate opportunities, allowing faster problem-solving and more solutions for businesses
Both speakers agree on the importance of strong government involvement in AI development. Devagon suggests India needs a dedicated AI ministry like other countries, while Sabharwal believes current government support is already adequate and forward-thinking. They agree on the goal of strong institutional support but disagree on whether current structures are sufficient.
Speakers: Amish Devagon, Ankush Sabharwal
India needs a dedicated AI ministry similar to other countries
The Indian government’s AI policy and support mechanisms are effective and forward-thinking
Takeaways
Key takeaways
AI will redefine rather than eliminate job opportunities, enabling faster problem-solving and creation of more solutions
Indian IT services must transition from effort-based pricing (per hour/day) to value-based pricing models to adapt to AI efficiency gains
Indian language AI is critical for reaching 90% of India’s population who don’t speak English, making technology truly inclusive
India has significant advantages in developing sovereign AI due to its massive data generation from 1.4-1.5 billion population
India should focus on creating real-world AI applications rather than just platforms to become a global AI solutions hub
The Indian government’s AI policy and support mechanisms are effective and forward-thinking, providing free GPUs and funding for model development
Bharat GPT represents a national asset built with contributions from all Indians through their data and languages
India has potential to become the first AI country within 3-5 years with proper focus on energy, infrastructure, compute, and talent development
Resolutions and action items
People should start making daily use AI apps and products immediately to benefit everyone
Indian companies should adopt value-based pricing models instead of hourly rates when implementing AI solutions
Focus should be placed on AI skilling initiatives in education and MSME sectors
India should continue developing sovereign AI solutions that can also be provided to other countries as a business opportunity
Unresolved issues
Whether India should establish a dedicated AI Ministry (discussed but no concrete decision mentioned)
Specific timeline and mechanisms for transitioning IT services from effort-based to value-based pricing
Detailed implementation strategy for AI skilling across different sectors
How to balance rapid AI adoption with addressing legitimate concerns about job displacement
Suggested compromises
Government should fund both AI creators and organizations that use AI applications, not just developers
Existing ministers could take on additional AI-related roles rather than creating entirely new ministries immediately
Focus on redefining jobs and creating new opportunities rather than dismissing concerns about AI’s impact on employment
Thought Provoking Comments
I think that the maximum IT services in India are rated per man-day, per hour… So, if I do the same work, if earlier 100 people used to work, now maybe 10 people will do it, even 2 people will do it… Don’t give me per hour or per day basis pricing. We will do value based pricing.
This comment provides a concrete, practical solution to the widespread fear about AI eliminating jobs. Instead of dismissing concerns, Sabharwal reframes the entire business model discussion, showing how AI can actually increase profitability while delivering more value to clients.
This shifted the conversation from abstract fears about job displacement to concrete business strategy discussions. It demonstrated how AI adoption requires fundamental changes in how services are priced and delivered, moving the dialogue toward actionable insights rather than theoretical concerns.
Speaker: Ankush Sabharwal
In India, I think only 10% of people know English… India has more knowledge than India has in English. So if we create AI in Indian languages, then we can give benefit to 90% of people.
This comment highlights a critical gap in AI accessibility that most global AI discussions overlook. It emphasizes that language barriers create a massive untapped market and democratization opportunity, making AI truly inclusive rather than elite-focused.
This comment redirected the conversation toward India’s unique competitive advantage and social responsibility in AI development. It elevated the discussion from technical capabilities to social impact and market opportunity, showing how local solutions can have global significance.
Speaker: Ankush Sabharwal
What raw material is needed for AI? Data. And right now, we are producing data… With the virtue of this beautiful number of people, a population of 1.4, 1.5 billion, we are all producing data.
This insight reframes India’s large population from a perceived burden to a strategic asset in the AI race. It’s a profound shift in perspective that positions India’s demographic dividend as a competitive advantage in data generation and AI development.
This comment fundamentally changed the framing of the sovereign AI discussion. Instead of focusing on India catching up to other countries, it positioned India as naturally advantaged in the AI race, shifting the tone from defensive to confident and strategic.
Speaker: Ankush Sabharwal
You fund them who use our applications… Now stop scolding us.
When pressed for critical feedback on government policy, this response reveals a key insight about the AI ecosystem – that government support should extend beyond just building technology to creating demand and adoption incentives. The humor also shows the entrepreneur’s confidence in current government support.
This moment created a turning point where the interviewer acknowledged getting a ‘politically correct’ answer, leading to a more candid exchange. It also introduced the important concept that successful AI policy requires demand-side interventions, not just supply-side support.
Speaker: Ankush Sabharwal
I think it will be founders… AI was created by human intelligence. So that’s why human intelligence will control.
This response to questions about job displacement and AI control demonstrates deep confidence in human agency and entrepreneurship. It suggests that the ability to envision, create, and direct remains fundamentally human, even in an AI-dominated future.
These rapid-fire responses reinforced the overall optimistic and human-centric tone of the discussion. They positioned entrepreneurs and human creativity as the ultimate controllers of AI development, providing a reassuring counternarrative to fears about AI dominance.
Speaker: Ankush Sabharwal
Overall Assessment

These key comments transformed what could have been a typical AI hype discussion into a nuanced exploration of practical business transformation, social inclusion, and strategic positioning. Sabharwal’s insights consistently reframed challenges as opportunities – job displacement became business model evolution, language diversity became competitive advantage, and large population became data wealth. The discussion evolved from defensive responses about AI threats to confident assertions about India’s natural advantages in the AI economy. The interviewer’s probing questions, particularly around critical assessment of government policy, created moments of authentic exchange that revealed deeper strategic thinking. Overall, these comments shaped a narrative of India not just participating in the global AI race, but potentially leading it through unique advantages in data, languages, and human capital.

Follow-up Questions
How can India develop better energy infrastructure and compute resources to support AI models that require significantly more power than human brains?
This is important because AI models require substantial energy (roughly 1,000 watts versus 20 watts for the human brain), and infrastructure investment is crucial for India’s AI ambitions
Speaker: Ankush Sabharwal
What specific mechanisms should be implemented for AI skilling across different sectors in India?
This addresses the need for workforce preparation and skill development to support India’s AI transformation across various industries
Speaker: Ankush Sabharwal
How can the government fund organizations that use AI applications developed by Indian companies?
This was suggested as advice to the government to support the AI ecosystem by funding adoption rather than just development
Speaker: Ankush Sabharwal
What are the specific real-world problems that AI should prioritize solving in the Indian context?
This is important to ensure AI development focuses on practical applications that benefit citizens rather than just technological advancement
Speaker: Ankush Sabharwal
How can India effectively provide sovereign AI solutions to other countries as a business model?
This explores the potential for India to become a global provider of sovereign AI solutions, which could be a significant economic opportunity
Speaker: Ankush Sabharwal
Should India establish a dedicated AI Ministry to coordinate AI initiatives?
This addresses governance and coordination needs for India’s AI strategy, given that other countries have dedicated AI ministers
Speaker: Amish Devagon

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote Addresses at India AI Impact Summit 2026

Keynote Addresses at India AI Impact Summit 2026

Session at a glance: summary, keypoints, and speakers overview

Summary

The India AI Impact Forum brought together senior leaders from the United States and India to announce the signing of the Pax Silica Declaration, a partnership aimed at securing a resilient technology and supply-chain ecosystem [33][81].


Google CEO Sundar Pichai opened the session by noting that the world is on the cusp of a hyper-progressive AI era and that the U.S.-India partnership is essential to ensure AI benefits are widely shared [4-6]. He highlighted Google’s role as a connective platform, citing cross-border product teams and innovations such as Google Pay that originated in India and now serve global users [9-10]. Pichai detailed three pillars of Google’s commitment: launching AI-powered products for Indian consumers, scaling the AI Coach with 22 Gemini models and new language support, and expanding voice and visual search tools that are among the highest-adopted worldwide [12-13][15-18]. He also announced a $15 billion investment in Indian infrastructure, including a gigawatt-scale AI hub in Vizag and the India-America Connect subsea cable network that will physically link the two economies [22-26]. Pichai stressed that these initiatives depend on secure, trusted supply chains, referencing Pax Silica’s role in safeguarding component flows across borders [27-30].


A subsequent speaker framed Pax Silica as a forward-looking partnership to strengthen secure and resilient technology ecosystems amid the rapid rise of emerging technologies [33-35].


Sanjay Mehrotra, CEO of Micron, emphasized that memory and storage are critical for AI performance and that Micron is the only Western company designing and manufacturing such components [57-60]. He noted Micron’s long-standing presence in India with R&D centers employing about 4,000 staff, a portfolio of 60,000 patents, and a new $2.75 billion advanced packaging and assembly facility in Gujarat that will span 500,000 sq ft, featuring a clean room the size of ten cricket fields and massive steel and concrete usage [62-66][70-74]. Mehrotra credited the Indian government, particularly the Gujarat administration, for policy support that enabled the pioneering semiconductor project [75-76].


Jacob Helberg, U.S. Undersecretary for Economic Affairs, described the Pax Silica Declaration as a roadmap that rejects weaponized dependency, links economic security to national security, and commits both nations to a pro-innovation AI approach [81-84][92-97][104-108]. He warned that over-concentrated global supply chains expose allies to coercion and called for building a diversified, sovereign supply chain for minerals, wafers, and AI models [93-98][106-108].


Ambassador Sergio Gore highlighted the recent U.S.-India interim trade agreement and positioned Pax Silica as a strategic coalition that secures the entire silicon stack, from mines to fabs to data centers, thereby replacing coercive dependencies with trusted industrial bases [133-138][141-146]. Gore further asserted that India’s deep engineering talent, emerging critical-mineral processing capacity, and commitment to strong borders make its participation essential for a resilient, free-society-driven technology order [147-152][155-160].


Minister Ashwini Vaishnav emphasized India’s trusted status and the symbolic yet strategic importance of its entry into Pax Silica, after which the declaration was formally signed and a photo-op commemorated the historic milestone [178-184][190-198].


Keypoints

Major discussion points


AI collaboration and investment between the United States and India – Sundar Pichai outlined Google’s “full-stack commitment” to AI in India, covering product development, developer skilling, and massive infrastructure such as the $15 billion AI Hub in Vizag and new subsea cable routes that will link the two economies [5-7][10-13][22-27].


Launch of the Pax Silica Declaration – The ceremony marked the signing of a new coalition aimed at building a secure, resilient “silicon stack” supply chain, protecting critical minerals, chips, and AI models from coercive dependencies and reinforcing economic security as a facet of national security [33][48-51][81-88][96-100][106-110][141-148][155-168].


Micron’s strategic role and investments in India – Sanjay Mehrotra highlighted Micron’s R&D presence, its patent portfolio, and a $2.75 billion advanced-packaging and test facility in Gujarat, underscoring the company’s contribution to India’s semiconductor ecosystem [56-66][70-74][75-78].


Shared democratic values and strategic partnership – Both Jacob Helberg and Ambassador Sergio Gore stressed the historical ties, mutual commitment to “free people” and “free markets,” and the need for a coalition of like-minded democracies to shape the future of AI, space, and advanced semiconductors [81-92][94-100][112-115][140-146][152-166].


Overall purpose / goal of the discussion


The gathering served to publicly cement a deepening U.S.-India technology partnership: announcing concrete AI initiatives and infrastructure investments, formally signing the Pax Silica Declaration to create a trusted, end-to-end silicon and AI supply chain, and showcasing industry commitments (e.g., Micron) that together aim to secure economic prosperity, technological leadership, and democratic values for both nations.


Overall tone and its evolution


Opening segment (Pichai & early remarks): Optimistic, celebratory, and forward-looking, emphasizing collaboration and shared opportunity.


Mid-session (Pax Silica introduction & Helberg’s speech): Shifts to a more solemn, strategic tone, stressing geopolitical risks, economic coercion, and the necessity of a resilient supply chain.


Later remarks (Gore, Micron, and ceremony): Returns to a confident, affirming tone that blends pride in concrete investments with reaffirmation of shared democratic ideals.


Overall, the tone moves from enthusiastic partnership-building to a serious, security-focused narrative, then settles into a resolute, hopeful conclusion about the joint future of U.S. and Indian technology ecosystems.


Speakers

Participant – Role/Title: Moderator / Event host (facilitates introductions and transitions).


Ashwini Vaishnav – Role/Title: Honorable Minister (Minister for Electronics and Information Technology, Government of India)[S4]. Area of expertise: Technology policy, semiconductor ecosystem, electronics and information technology.


Sundar Pichai – Role/Title: CEO of Google (Alphabet Inc.)[S7]. Area of expertise: Artificial intelligence, cloud services, consumer products, global technology strategy.


Sanjay Mehrotra – Role/Title: CEO of Micron Technology[S9]. Area of expertise: Semiconductor memory and storage technologies, AI infrastructure.


Jacob Helberg – Role/Title: Undersecretary of State for Economic Affairs, United States[S12]. Area of expertise: U.S. economic policy, trade, technology cooperation.


Sergio Gore – Role/Title: U.S. Ambassador to India[S15]. Area of expertise: Diplomacy, U.S.-India strategic partnership, technology and economic collaboration.


Additional speakers:


Michael Kratsios – Role/Title: Director, Office of Science and Technology Policy (OSTP), United States[S15]. Area of expertise: Science and technology policy, AI governance.


Randhir Thakur – Role/Title: CEO, Tata Electronics[S11]. Area of expertise: Electronics manufacturing, semiconductor design and production.


Mike Krohn – Role/Title: CEO (company not specified in transcript). Area of expertise: Not specified.


Sajiv Garb – Role/Title: Senior official (referred to as “His Excellency” in the transcript). Area of expertise: Not specified.


S. Krishnan – Role/Title: Secretary (of an Indian government department, as indicated in the signing ceremony). Area of expertise: Not specified.


Jay Shankar – Role/Title: Minister (Indian government, mentioned alongside Minister Vaishnav). Area of expertise: Not specified.


Full session report: comprehensive analysis and detailed insights

Sundar Pichai (CEO, Google) opened the India AI Impact Forum by thanking Director Kratsios and noting the occasion’s significance for U.S.-India relations [1-2]. He reminded the audience that the world stands on the “cusp of an era of hyper-progress and new discoveries” but warned that the best outcomes are not guaranteed [4-5]. Emphasising the centrality of the bilateral partnership, he described Google as both a figurative and literal “connection point” between the two nations [7-8] and highlighted the seamless collaboration of teams across the United States and India on key initiatives [9-10].


Pichai then outlined Google’s “full-stack commitment” to AI in India across three pillars: (1) the rollout of AI-powered products for Indian consumers and businesses, including 22 Gemini models on the AI Coach platform and collaborations with the government on monsoon forecasts for farmers, diabetic-retinopathy screening for health workers, and multilingual information services [12-14][13-14]; (2) the AI Skill House programme, which aims to equip ten million future Indian leaders with AI tools, complemented by a new Google AI certificate delivered with Badwani AI [20-22]; and (3) a US$15 billion investment in Indian infrastructure, centred on a gigawatt-scale AI Hub in Vizag and the India-America Connect subsea-cable network that will physically link the two economies and expand digital trade routes [22-26]. He concluded that all of these initiatives depend on “stable supply chains built on a foundation of shared trust” and cited Pax Silica’s role in securing component flows across borders [27-30].


A senior participant framed the signing of the Pax Silica Declaration as a milestone that “strengthens secure and resilient technology ecosystems” at a time when emerging technologies are reshaping global competitiveness and economic security [32-35], setting the stage for the formal introduction of the coalition.


Sanjay Mehrotra (CEO, Micron) stressed that memory and storage are the “fuel” for AI, noting that Micron is the only Western company that designs and manufactures these components [57-60]. He highlighted Micron’s long-standing Indian footprint, including R&D centres in Bangalore and Hyderabad employing roughly 4,000 staff, and its patent portfolio of 60,000 worldwide, of which about 2,000 stem from roughly 300 Indian inventors [62-65][64-65]. Mehrotra announced a US$2.75 billion investment in an advanced-packaging, assembly and test facility in Sanand, Gujarat, describing the 500,000 sq ft plant as “the size of ten cricket fields” with steel usage “three and a half times the Eiffel Tower” and concrete equivalent to “100 Olympic-sized swimming pools” [66-74]. He credited the Gujarat government and Prime Minister Modi’s policies for enabling this pioneering semiconductor project [75-76].


Jacob Helberg (U.S. Under-Secretary of State for Economic Affairs) presented the Pax Silica Declaration as a concrete roadmap for a shared future, invoking an Alexander-the-Great anecdote to illustrate both democracies’ historic defiance in claiming self-determination [81-89]. He warned that “over-concentrated” global supply chains enable weaponised economic coercion, citing examples such as a city’s lights being extinguished by a keystroke and the denial of essential minerals to outspoken leaders [93-96]. Helberg declared that the declaration “says no to weaponised dependency” and frames economic security as national security, while pledging a “pro-innovation” AI approach that builds a diversified, full-stack supply chain, from minerals to wafers to models, to unleash human potential [96-110]. (The transcript contains the phrase “Packed Silica is our declaration,” a transcription error; the official name is Pax Silica.)


Ambassador Sergio Gore contextualised Pax Silica within the broader U.S.-India strategic framework, noting the recent interim trade agreement that “shapes the economic contours of the Indo-Pacific” [133-136]. He described the coalition as a “strategic coalition … securing the entire silicon stack” – from critical-mineral extraction to chip fabrication and data-centre deployment – thereby replacing coercive dependencies with a trusted industrial base [141-145]. Gore highlighted India’s deep engineering talent, emerging critical-mineral processing capacity, and commitment to strong borders as essential assets that amplify the coalition’s strength [146-152][155-160]. He underscored that the partnership aims to ensure that advanced technologies serve free societies rather than surveillance states [165-170].


Minister Ashwini Vaishnav (Minister of Electronics & Information Technology) reinforced India’s “trusted” status, noting that Indian students have access to 315 EDA design tools where barely 20 can be counted elsewhere in the world [176], and invoking the cultural gravitas of a 5,000-year civilisation that commands global respect [174-183]. He linked this trust to India’s role in Pax Silica, thanked the U.S. dignitaries for their participation, and formally invited the signing ceremony to commence [184-188].


The declaration was then signed by Under-Secretary Jacob Helberg, Ambassador Sergio Gore and Secretary S. Krishnan, after which the signatories were asked to hold up the document for an official photograph [190-196]. Additional industry leaders, including Micron’s CEO Sanjay Mehrotra and Tata Electronics CEO Randhir Thakur, joined the photo-op, underscoring the multi-stakeholder nature of the agreement [197-202].


A brief interlude announced the transition to a fireside conversation, with Helberg slated to moderate a panel including Ambassador Sergio Gore (rendered “Sajiv Garb” in the transcript), Secretary Krishnan, Mr. Sanjay Mehrotra, a “CEO Mike Krohn” (name garbled in the transcript), Dr. Randhir Thakur (CEO, Tata Electronics) and the CEO of General Catalyst [203-210].


Overall, the forum cemented a deepening U.S.-India technology partnership. The speakers converged on three core themes: (1) the necessity of trusted, diversified supply chains for AI hardware and critical minerals; (2) the strategic importance of large-scale infrastructure (Google’s Vizag AI Hub, the India-America subsea cables, and Micron’s Gujarat facility) to underpin AI growth; and (3) the pivotal role of talent development, illustrated by Google’s AI Skill House, India’s EDA ecosystem and Micron’s Indian R&D contributions. While consensus was strong on the goals, nuanced differences emerged over the current stability of supply chains (Pichai portrayed them as “stable … built on shared trust”, whereas Helberg warned of “over-concentration” and weaponised dependency [93-96]) and over the framing of Pax Silica (partnership-building versus policy roadmap versus full-stack coalition) [33-36][81-84][141-145]. The ceremony emphasised renewal over resistance, positioning the U.S.-India alliance as a democratic bulwark against coercive technological dominance.


Session transcript: Complete transcript of the session
Sundar Pichai

Thank you, Director Kratzios. Thank you for the opportunity to return to this stage and to mark this important occasion in U.S.-India relations. Yesterday, at the opening session, I shared some thoughts on this profound moment with AI. I said we are on the cusp of an era of hyper-progress and new discoveries, but the best outcomes are not guaranteed. We must work together to ensure the benefits of AI are available to everyone and everywhere. The U.S.-India partnership has a critical role to play. Google is proud to serve as a connection point between them, both figuratively and literally. More on this later. We have teams across both countries working seamlessly together on some of our most important initiatives.

Thank you. Innovations that start in India, like Google Pay, are making products better for people all over the world. I believe India is going to have an extraordinary trajectory with AI, and we are supporting with a full-stack commitment, including products, scaling, and infrastructure. First, products. We are working on building AI products and solutions for Indian consumers and businesses. To empower India’s incredible developer community, we have already contributed 22 Gemma models to AI Coach, and we are working closely with the government to bring AI applications with real-world impact, be it through delivering timely monsoon forecasts to farmers, helping healthcare workers screen for diseases like diabetic retinopathy, or making information and services accessible in more languages.

Our commitment extends to reimagining the products people use every day. As one example, AI is changing the way people use search. Indian users are amongst the highest adopters of voice and visual search globally. Our scan detection features with Circle to Search and Lens are used in India more than anywhere else. The Gemini app is growing rapidly across the world, and it’s available in 10 languages spoken in India. And YouTube supports a vibrant ecosystem of Indian content creators sharing music, arts, and culture with the world. Second, skilling. Through the AI Skill House, we are working to equip 10 million future Indian leaders with the tools to drive global progress. We are also partnering with Badwani AI to reach students and early career professionals with a Google AI certificate, which we announced earlier this week.

Third, infrastructure. Last year, we announced a $15 billion investment in Indian infrastructure with the AI Hub in Vizag at the center. This hub will house gigawatt-scale computing. When finished, it will bring jobs and the benefits of cutting-edge AI to people and businesses across India. Building on this, we recently announced the India-America Connect Initiative, which will deliver new subsea cable routes to connect the U.S., India, and multiple locations across the Southern Hemisphere. Combined with our existing cable systems, this initiative will significantly expand the digital trade routes and serve as a literal bridge between our two countries. Of course, none of this would be possible without stable supply chains built on a foundation of shared trust.

Products, subsea cables, AI hubs are all dependent on a complex flow of goods and components across borders. Axilica focuses on making sure that the supply chains are safe and secure and encourages greater commercial partnerships across key technologies. So let me congratulate the U.S. and India on this historic moment. Alongside the recent trade agreement, this will lay a strong foundation for a robust U.S.-India tech

Participant

Thank you so much, Mr. Sundar Pichai, for all those motivating and inspiring words. And ladies and gentlemen, today marks an important milestone as India formally joins Pax Silica, a forward-looking partnership aimed at strengthening secure and resilient technology ecosystems at a time when emerging technologies are reshaping global competitiveness and economic security. Trusted partnerships are essential. This declaration reflects a shared commitment by India and the United States to advance responsible innovation and resilient infrastructure. We are honored to have with us senior leadership from both the governments, alongside distinguished representatives from industry and also the diplomatic community. Without any further ado, may I now respectfully invite our distinguished dignitaries to please join us on stage. Ladies and gentlemen, please join me in extending a warm welcome as they make their way to the stage.

It’s an honor to have such distinguished leadership this morning, Excellencies. Thank you so much for joining us. We’ll proceed with brief remarks ahead of the signing ceremony. May I please invite Honorable Jacob Helberg, Undersecretary of State for Economic Affairs, the United States, to deliver his remarks. Thank you. I request Honourable Jacob Helberg, Under-Secretary of State for Economic Affairs, to please present his address. Ladies and gentlemen, please welcome. Ladies and gentlemen, we would like to wait for a couple of minutes for Under-Secretary Mr. Jacob Helberg. He is on his way and he would be here with us very soon. It’s an important occasion, especially when we talk of Pax Silica. It’s a historic agreement between the two governments, between the two biggest and the oldest democracies of the world.

And so we are here to listen to our distinguished guests as they present their views, their remarks on Pax Silica. This is one agreement which would change the way both the countries work in this particular domain. Ladies and gentlemen, we have distinguished speakers who are going to join us. And then a very, very important signing agreement procedure, the protocols that need to be followed. We are also going to have a photo op session after this. Ladies and gentlemen, in the meantime, may I please request Mr. Sanjay Mehrotra, the CEO of Micron, to kindly come on the stage and present his keynote address. Mr. Sanjay Mehrotra.

Sanjay Mehrotra

Good morning. On behalf of Micron Technology, I want to say we are super excited to be here participating in this phenomenal AI Summit. Micron is a semiconductor technology leader, leader in memory and storage. Memory and storage are critical to driving AI. As contextual processing becomes larger and as real-time demands on performance are placed on AI systems, they need more and more memory. I’m very proud to say that Micron is the only company in the Western Hemisphere that develops and manufactures memory and storage, and we have had successive generations of leadership in DRAM technology as well as NAND technology. But I’m also very proud today, later, with this PAC-SILICA initiative that will be signed here, bringing the technology collaboration closer between U.S.

and India. Micron, since 2019, has had large presence here in India with R&D centers in Bangalore, in Hyderabad, employing nearly 4,000 employees today. What I’m proud of is that Micron has 60,000 patents worldwide, one of the most innovative companies, but also a manufacturing powerhouse. Some of our most advanced DRAM products are being designed right here in India in collaboration with our teams in the U.S. In fact, we have now, in this short period since 2019, we now have 300 inventors with number of patents approaching nearly 2,000 that have been contributed by the innovative, phenomenal team here in India. Very proud. We are proud also of Micron’s investment in bringing advanced packaging, assembly, and test technologies here to Sanand, Gujarat.

In fact, Micron is making an investment of $2.75 billion here in Gujarat. We’ll talk more about it in the fireside chat a little bit later. And those investments now are going to be bringing a grand opening coming up soon where packaging and assembly will be done of advanced memory wafers produced worldwide. So this is a pioneering project here in India. The size of this facility that has been built is 500,000 square feet. So imagine that clean room is the size of 10 cricket fields. The amount of steel that has been used in that is about three and a half times of Eiffel Tower. The amount of concrete that is used in that is size of 100 Olympic-sized swimming pools.

This is the pioneering project of semiconductor manufacturing here in India, and Micron is proud to have partnered with the central government as well as the government in Gujarat bringing this project to Sanand. Modi Ji’s government has provided tremendous support and really policy that encourages investment here in India. So without further ado, having shared some of the importance of memory and storage in terms of driving AI infrastructure worldwide and importance of Micron here in India in R&D as well as in manufacturing, I would now like to pass it back to our host. Our host here in continuing with the regularly scheduled program. Thank you very much.

Participant

Thank you so much, Mr. Mehrotra. Ladies and gentlemen, please join me now in inviting Honorable Mr. Jacob Helberg, Undersecretary of State for Economic Affairs, to deliver his remarks.

Jacob Helberg

Good morning. It’s a profound honor to be here in Delhi at the India AI Impact Forum to mark a historic milestone in the partnership between the United States and India. Today, we sign the Pax Silica Declaration, a document that’s not merely an agreement on paper, but a roadmap for a shared future. There’s a line from antiquity attributed to Alexander the Great that famously said that the people of India are the ones who are the most important people in the world. The peoples of Asia were slaves because they had not yet learned to pronounce the word no. Alexander viewed himself as a conqueror speaking to a world of subjects, and after traveling 11,000 miles for eight years, it was in India that Alexander finally met his match and turned around.

He did not know India, and India said no. The truth is, both of our nations were forged by that very word. Both of our nations claimed their freedom by learning to say no. We are the people who looked at a king oceans away and refused to quietly acquiesce. We rejected the counsel of polite society and broke centuries of colonial rule to take our destiny into our own hands. That spirit of defiance, that insistence on self-determination, is the fire that burns at the heart of both of our democracies, and today we are called upon to summon that spirit once again. For too long, we have allowed the foundation of our democracy and the foundations of our economic security to drift.

We find ourselves grappling with a global supply chain that is massively over-concentrated. We watch as our friends and allies face daily threats of economic coercion and blackmail, forced to choose between their sovereignty and their prosperity. We have seen the lights of a great Indian city extinguished by a keystroke from across the border, and we’ve seen our friends denied essential minerals simply because a leader dared to speak her mind. So today, as we sign the Pax Silica Declaration, we say no to weaponized dependency, and we say no to blackmail, and together we say that economic security is national security. But we must be precise about what that word means. There are some who use words like global governance and sovereignty in the same breath, just like Orwell used.

There are some who use freedom and slavery interchangeably. America and India are not deceived. Sovereignty does not come from a global bureaucracy. It comes from builders, from the very builders present in this room today. It comes from the builders of smelters and oil wells, airplanes and expressways. And it comes from the hardworking people who physically build the rails of the future. And through the joint statement that we’re signing today, the United States and India are affirming our embrace of a pro-innovation approach to AI against those who would constrain us to set us back. But our fundamental mission is not resistance, it’s renewal. We are forging a supply chain that is the foundation for prosperity.

We are building a new architecture that diffuses intelligence, placing the awesome power of AI into the palm of our people’s hands and unleashing a wave of unprecedented possibility. From the mines to the models, we are securing the foundation, the full stack of the future: the minerals deep in the earth, the silicon wafers in our labs and fabs, and the intelligence that will unleash human potential. Packed Silica is our declaration that the future belongs to those who build. And when free people join forces, we do not wait for the future to be given to us. We build it ourselves. I want to end by thanking my good friend and colleague, Ambassador Sergio Gore. Sergio and his leadership has been the bridge for this very moment.

His work to bring our nations closer together is a testament to the vital importance that the United States places on this friendship. Sergio, thank you

Sergio Gore

for your service and your energy. Will you please all join me in giving Ambassador Gore a very warm welcome? Thank you. Good morning. Namaste. It is great to be here with you all. Thank you, Jacob. I want to just say a quick word about Jacob. Jacob’s an incredible friend, but Jacob also cares deeply about this relationship. This initiative, Pax Silica, would not be happening if it’s not for Jacob Helberg. So a round of applause to him. What an honor to stand before all of you here today here in New Delhi at this historic moment as we welcome India into Pax Silica. Just over a month ago, I arrived in this extraordinary nation as the U.S.

ambassador. In my first weeks, I’ve walked the halls of South Block, met with innovators in Bangalore, and broke bread with entrepreneurs who are building the future. What struck me most was the fact that I was able to be here today. It wasn’t just India’s scale, although that is breathtaking. It’s India’s resolve, the determination to chart your own course. I keep talking about the limitless potential between our two nations, and I truly mean it. From the trade deal, to Pax Silica, to defense cooperation, the potential for our two nations to work together is truly limitless. And I aim to fulfill that over the next three years that I’m here. Earlier this month, we concluded the Interim Trade Agreement, a deal that shapes the economic contours of the Indo-Pacific.

We overcame friction points that had held us back for far too long. That agreement wasn’t just about trade flows or tariff schedules. It was about two great democracies saying we will build together, not just buy from one another. And now today, we take the next step. India joins Pax Silica, the coalition that will define the 21st century economic and technological order. I’m delighted to welcome Jacob. Jacob here. I’m also delighted to welcome the OSTP Director, Michael Kratios, who’s the head of our delegation at this very important summit. The U.S. leads in a strategic coalition which is designed to secure an entire silicon stack. From the mines where we extract critical minerals, to the fabs where we manufacture chips, to the data centers where we deploy frontier AI.

It’s a coalition of capabilities that replaces coercive dependencies with a positive-sum alliance of trusted industrial bases. Pax Silica will be a group of nations that believe technology should empower free people and free markets. India’s entry into Pax Silica isn’t just symbolic, it’s strategic, it’s essential. India is a nation with deep talent, deep enough to rival challengers. India’s engineering depth offers critical capabilities for this vital coalition. In addition to talent, India has made important strides towards critical mineral processing capacity, and that’s something that we’re fully engaged on also. Policies that will reinforce U.S.-India tech cooperation will power AI innovation and adoption for years to come. We can share trusted AI technology with the world and especially with partners like India.

And critically, India brings strength. Peace doesn’t come from hoping adversaries will play fair. We all know they won’t. Peace comes through strength. India understands this. India understands strong borders. India understands this part of the world. That strength, that sovereignty, is exactly what Pax Silica amplifies. Because here’s the truth. Strength multiplies when it’s connected. When Minister Vaishnav and Minister Jay Shankar traveled to Washington, in recent weeks, they came as partners, forging the future. Their discussions on critical minerals were about interdependency among strong actors, about building supply chains that will not be held hostage. America is building coalitions of the capable and the willing. We’re ensuring the technologies that will define the next century. AI, space, and advanced semiconductors are developed, deployed, and controlled by free nations.

And we’re doing it in a partnership with the world’s largest democracy, a nation of 1.4 billion people that share our values and our vision. We welcome India joining to co-found the future. Pax Silica is about whether free societies will control the commanding heights of the global economy. It’s about whether innovation happens in Bengaluru and Silicon Valley or in surveillance states that use technology to monitor and control their

Participant

Thank you, His Excellency Ambassador Sergio Gore, for re-strengthening and highlighting the enduring ties between our two nations and also for the shared vision that underpins today’s milestone. May I now request Honourable Minister Sri Ashwini Vaishnav to address the august gathering.

Ashwini Vaishnav

all the design EDA tools, students have available. Counting. Not able to count more than 20 in the whole world. India has 315. This capability we have to develop. This scale we have to develop. And in the world India today is seen as a trusted country. India is a trusted country. And that’s because our Prime Minister Narendra Modi ji has conducted the foreign policy in a way where the trust and respect, the respect of a 5,000-year-old civilization, that gravitas that India’s civilization’s stability, that stability that world believes in. That gravitas that world believes in. And that’s why India has trust. Because of that trust, today that trust is becoming part of the Pax Silica. I welcome you all and especially those who worked on the US side.

My biggest gratitude to all the three honorable guests from the US for taking out time to be part of this Pax Silica signing. And I’ll now request the Pax Silica signing ceremony to be done. Thank you, friends. Bharat Mata Ki. Bharat Mata Ki. Thank you. Thank you.

Participant

Ladies and gentlemen, and now the Pax Silica Declaration is being signed between India and the United States of America. The Pax Silica Declaration is being signed by Honorable Undersecretary Jacob Helberg, His Excellency Ambassador Sergio Gore, and the Secretary, Mr. S. Krishnan. And now, once the declaration has been signed by the respected signatories, the declaration will be exchanged. I request the distinguished guests to kindly hold up the signed declaration for the official photograph. I request our distinguished guests to kindly proceed to the photo point on the right of the stage, in front of the flags, for the official photograph. We are going to have an official photograph, so may I please request our distinguished guests to kindly proceed to the point in front of the flags on your right; that will give us the right picture for this photo. So once again, we are going to have this photo. I would like to now also request CEO of Micron, Mr.

Sanjay Mehrotra and Mr. Randhir Thakur, CEO of Tata Electronics, to please join us for a photo op on the stage. I also invite CEO of General Catalyst to come on the stage, please. I thank our distinguished guests for that photo op. It’s a great moment when the Pax Silica Declaration has been signed between India and the United States of America. The photo op to commemorate this special moment. This is another historic milestone in the relationship between India and the United States of America. I thank all our distinguished guests for this photo op. I thank Honorable Minister and Mr. Michael Kratios for being with us on this wonderful and historic occasion. Ladies and gentlemen, we are waiting for the furniture to be rearranged and very soon we will now continue with the Fireside Conversation.

Ladies and gentlemen, now we would proceed to the Fireside Conversation. I invite our distinguished guests to please join us for this conversation. Undersecretary Jacob Helberg is going to moderate this discussion. His Excellency Sajiv Garb, Secretary Krishnan, Mr. Sanjay Mehrotra, CEO Mike Krohn, Dr. Randhir Thakur, CEO Tata Electronics. I request our distinguished guests to please take your seats as we begin the Fireside Conversation. Please stand by for the Fireside Conversation.

Related Resources: Knowledge base sources related to the discussion topics (19)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Sundar Pichai thanked Director Kratzios and noted the occasion’s significance for U.S.–India relations.”

The transcript excerpt includes a direct thank-you to Director Kratzios and frames the event as an important moment in U.S.-India relations, matching the report’s statement [S17].

Confirmed (high)

“Pichai reminded the audience that the world stands on the “cusp of an era of hyper‑progress and new discoveries” but warned that the best outcomes are not guaranteed.”

Both S10 and S17 contain the same language about a cusp of hyper‑progress and the need for deliberate cooperation to ensure good outcomes, confirming the claim.

Confirmed (medium)

“Axilica’s role is to secure component flows across borders, ensuring safe and reliable supply chains.”

S6 explicitly states that Axilica focuses on making supply chains safe and secure and encourages commercial partnerships across key technologies, which aligns with the report’s description.

Additional Context (medium)

“Google is rolling out 22 Gemini models on the AI Coach platform and collaborating with the Indian government on monsoon forecasts, diabetic‑retinopathy screening, and multilingual information services.”

S67 confirms that Gemini models are being integrated into Google products for India, but it does not specify the number of models, the AI Coach platform, or the particular government collaborations mentioned, providing partial support and additional nuance.

Additional Context (low)

“The signing of the Pax Silica Declaration was presented as a milestone that strengthens secure and resilient technology ecosystems.”

S11 records Jacob’s remarks congratulating the Paxilica (Pax Silica) signing and highlighting its importance for technology collaboration, adding context to the report’s framing of the declaration.

External Sources (68)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S2
Keynote Address_Revanth Reddy_Chief Minister Telangana — -Participant: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or organizer…
S3
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S4
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S5
Announcement of New Delhi Frontier AI Commitments — -Shri Ashwini Vaishnaw: Role/Title: Honorable Minister for Electronics and Information Technology, Area of expertise: El…
S6
Keynote Adresses at India AI Impact Summit 2026 — -Ashwini Vaishnav- Minister (India) Multiple speakers emphasised India’s unique combination of technological capabiliti…
S7
Keynote-Sundar Pichai — -Moderator: Role/Title: Event Moderator; Area of Expertise: Not mentioned -Mr. Dario Amote: Role/Title: Not mentioned; …
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — -Sundar Pichai: Role/Title: Not specified in transcript; Area of expertise: Technology (implied)
S9
Keynote Adresses at India AI Impact Summit 2026 — -Sanjay Mehrotra- CEO of Micron Technology And so we are here to listen to our distinguished guests as they present the…
S10
Keynote Adresses at India AI Impact Summit 2026 — Ladies and gentlemen, and now the Pax Silica Declaration is being signed between India and the United States of America….
S11
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Agreed with:Ambassador Sergio Gor, Secretary S. Krishnan, Sanjay Mehrotra, Dr. Randhir Thakur — U.S.-India partnership i…
S12
Keynote Adresses at India AI Impact Summit 2026 — -Jacob Helberg- Undersecretary of State for Economic Affairs, United States I invite our distinguished guests to please…
S13
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Agreed with:Ambassador Sergio Gor, Secretary S. Krishnan, Dr. Randhir Thakur, Jacob Helberg — U.S.-India partnership is …
S14
Keynote Adresses at India AI Impact Summit 2026 — Ladies and gentlemen, and now the Pax Silica Declaration is being signed between India and the United States of America….
S15
Keynote Adresses at India AI Impact Summit 2026 — -Sergio Gore- U.S. Ambassador to India Ambassador Sergio Gore explained that Pax Silica creates “a coalition of capabil…
S16
Keynote Adresses at India AI Impact Summit 2026 — Good morning. It’s a profound honor to be here in Delhi at the India AI Impact Forum to mark a historic milestone in the…
S17
https://app.faicon.ai/ai-impact-summit-2026/keynote-adresses-at-india-ai-impact-summit-2026 — We are building a new architecture that diffuses intelligence, placing the awesome power of AI into the palm of our peop…
S18
Keynote-Sundar Pichai — Namaste. Thank you. Thank you. Prime Minister Modi and distinguished leaders. It’s wonderful to be back in India. Every …
S19
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — That’s our passion. I think Sundar mentioned all the investments we’re making into the industry and the India ecosystem….
S20
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Evidence:He explains that India already has one nation, one grid with unified frequency, and the Energy Stack creates in…
S21
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — By utilizing a mix of tools and methods, it is possible to effectively address identified issues. Stakeholder cooperatio…
S22
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S23
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S24
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S25
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S26
Keynote Adresses at India AI Impact Summit 2026 — Summary:The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India…
S27
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S28
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The government R&D and enter…
S29
Skilling and Education in AI — Infrastructure development emerged as crucial, with investments in data centers, subsea cables, and compute capacity to …
S30
Press Conference: Closing the AI Access Gap — To effectively harness the potential of AI, countries need reliable infrastructure, compute capacity, and partnerships. …
S31
The Innovation Beneath AI: The US-India Partnership powering the AI Era — The panel opened with Kumar’s observation that whilst AI models receive significant attention, the underlying infrastruc…
S32
Skilling and Education in AI — This discussion focused on leveraging artificial intelligence as a tool for development and equality in India, examining…
S33
World Economic Forum® — Understanding the social and cultural contexts that may contribute to epidemics, such as burial practices or misconcepti…
S34
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — So today, the type of things that we need to do for each one of these actions, the type of inferencing, type of response…
S35
ASYMMETRY OF CULTURAL STYLES AND THE UNINTENDED CONSEQUENCES OF CRISIS PUBLIC DIPLOMACY — Collectivist cultures tend to value goals of the collective over goals of the individual. Triandis et al. observed that …
S36
How Trust and Safety Drive Innovation and Sustainable Growth — All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won’t use A…
S37
Supply Chain Fortification: Safeguarding the Cyber Resilience of the Global Supply Chain — Consequently, there is an increasing need to stay alert to more sophisticated attacks resulting from AI. The analysis al…
S38
Secure Finance Risk-Based AI Policy for the Banking Sector — “Three dominate cloud capacity and a handful command foundation models threatening financial stability and economic sove…
S39
The Global Power Shift India’s Rise in AI & Semiconductors — Sovereignty involves ensuring that data and applications remain resident within the country and relevant to national con…
S40
Agents of Change AI for Government Services & Climate Resilience — Srinivas Tallapragada introduced an important distinction between strategic sovereignty and technical sovereignty that p…
S41
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Agreed with:Ambassador Sergio Gor, Secretary S. Krishnan, Dr. Randhir Thakur, Jacob Helberg — U.S.-India partnership is …
S42
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — All speakers agree that the U.S.-India partnership represents a natural, mutually beneficial collaboration based on comp…
S43
Parallel Session D3: Supply Chain Disruptions – The Role and Response of NTFCs — In summary, the analysis accentuated TFAs as catalysts for managing and enhancing supply chain efficiency. It also under…
S44
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — IDB Invest is playing a pivotal role in tackling the financial obstacles facing supply chain financing and business augm…
S45
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — In conclusion, the analysis highlights the potential challenges posed by geopolitical issues and emerging sustainability…
S46
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Thank you everyone We are up against Jan, we are up against her boss. So, but, let’s have fun in this panel. And the bro…
S47
Keynote Adresses at India AI Impact Summit 2026 — -Sundar Pichai- CEO of Google The discussion revealed significant financial commitments underpinning the partnership. G…
S48
Keynote-Sundar Pichai — Evidence:Google is making a $15 billion infrastructure investment in India, establishing a full-stack AI hub in Vizag wi…
S49
Keynote Adresses at India AI Impact Summit 2026 — The discussion revealed significant financial commitments underpinning the partnership. Google announced substantial inv…
S50
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you and a special thank to you, Prime Minister Modi, for the vision you shared this morning and for this event. SP…
S51
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Sanjay Mehrotra from Micron detailed the company’s $2.75 billion investment in assembly and test operations in Sanand, G…
S52
Pursuing a metaverse based on democratic values | IGF 2023 Day 0 Event #207 — In conclusion, the discourse offered a multitude of perspectives on the potential trajectories of the metaverse, outlini…
S53
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S54
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S55
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S56
Opening of the session — Enhanced collaboration and data sharing are emphasised as pivotal for the forward momentum of the group. Argentina spotl…
S57
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S58
Session — Marilia Maciel: Thank you, Jovan. I’ll do that, but I’ll do that by going back to your question about what predominates,…
S59
Parallel Session A5: Achieving Sustainable and Resilient Transport and Logistics including inSIDS — Attention was drawn to the need for closer examination of supply chain ethics, focusing on inclusivity and the World Eco…
S60
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S61
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S62
Welcome Address — Overall Tone:The tone is consistently optimistic, visionary, and confident throughout the speech. Modi maintains an insp…
S63
Keynote-Ankur Vora — Overall Tone:The tone is consistently optimistic, inspirational, and mission-driven throughout. The speaker maintains a …
S64
Keynote-Jeet Adani — Overall Tone:The tone was consistently aspirational, patriotic, and strategic throughout. Jeet Adani maintained a confid…
S65
UNSC meeting: Regional arrangements for peace — Mozambique:Mr. President, Mozambique warmly Commends Brazil for convening this important open debate. We thank the disti…
S66
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — Digital public infrastructure and open networks are an important part of making this possible. They provide the coordina…
S67
AI-powered Gemini Live to enhance voice commands for millions in India — Google is ramping up its AIeffortsin India, aiming to integrate itsGemini AImodel across various products to cater to th…
S68
Google boosts AI in coding and cloud growth — More than 30% of all code at Googleis now writtenwith the help of AI, according to CEO Sundar Pichai during Alphabet’s Q…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sundar Pichai
5 arguments · 137 words per minute · 574 words · 250 seconds
Argument 1
AI‑driven consumer products and services for Indian users (Sundar Pichai)
EXPLANATION
Google is rolling out AI‑enhanced products that cater specifically to Indian consumers, such as voice and visual search, multilingual Gemini app, and YouTube content tools. These offerings aim to improve everyday digital experiences and make information more accessible across India.
EVIDENCE
Pichai highlighted that AI is changing how people use search, noting that Indian users are among the highest adopters of voice and visual search and that features like scan detection, Circle to Search, and Lens are used more in India than elsewhere [15-18]. He also mentioned the Gemini app’s rapid growth and its availability in ten Indian languages, as well as YouTube’s support for Indian creators [19]. Additionally, he cited AI applications delivering timely monsoon forecasts to farmers, helping healthcare workers screen for diseases, and providing information in more languages [13].
MAJOR DISCUSSION POINT
AI consumer products for India
Argument 2
AI Skill House program to train 10 million future Indian leaders (Sundar Pichai)
EXPLANATION
Google’s AI Skill House initiative is designed to equip ten million Indian students and professionals with AI knowledge and tools, creating a large talent pool to drive global progress. The program includes partnerships to deliver certifications and practical training.
EVIDENCE
Pichai announced the AI Skill House, stating it will equip ten million future Indian leaders with the tools to drive global progress [20]. He also referenced a partnership with Badwani AI to provide a Google AI certificate to students and early-career professionals [21].
MAJOR DISCUSSION POINT
Large‑scale AI skilling
AGREED WITH
Ashwini Vaishnav, Sanjay Mehrotra
Argument 3
AI Hub in Vizag and India‑America Connect subsea cable initiative to boost infrastructure (Sundar Pichai)
EXPLANATION
Google announced a $15 billion investment in an AI Hub in Vizag that will host gigawatt‑scale computing, and a new India‑America Connect subsea cable system to expand digital trade routes between the U.S., India, and the Southern Hemisphere. These projects aim to strengthen AI infrastructure and connectivity.
EVIDENCE
Pichai detailed a $15 billion investment in Indian infrastructure centered on an AI Hub in Vizag that will house gigawatt-scale computing and create jobs [22-24]. He then described the India-America Connect Initiative, which will deliver new subsea cable routes linking the U.S., India, and multiple Southern Hemisphere locations, expanding digital trade routes and acting as a literal bridge between the two countries [25-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Pichai announced a $15 billion AI Hub in Vizag with gigawatt-scale computing and a new India-America subsea cable, as detailed in the summit keynotes [S6][S10][S18].
MAJOR DISCUSSION POINT
AI infrastructure and connectivity
AGREED WITH
Sanjay Mehrotra, Sergio Gor
Argument 4
Need for stable, trusted supply‑chains underpinning products, cables, and AI hubs (Sundar Pichai)
EXPLANATION
Pichai emphasized that the success of AI products, subsea cables, and AI hubs depends on reliable, secure supply chains across borders. He highlighted the role of frameworks like Pax Silica in safeguarding these flows.
EVIDENCE
He noted that products, subsea cables, and AI hubs rely on a complex flow of goods and components across borders, and that stable supply chains built on shared trust are essential [27-28]. He mentioned Pax Silica’s focus on ensuring safe and secure supply chains and encouraging commercial partnerships across key technologies [29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasized that products, subsea cables and AI hubs depend on complex, trusted supply-chain flows across borders, a point reiterated in the summit remarks [S6][S10].
MAJOR DISCUSSION POINT
Supply‑chain stability for AI ecosystem
AGREED WITH
Jacob Helberg, Sergio Gor
DISAGREED WITH
Jacob Helberg
Argument 5
AI Skill House and Google AI Certificate to equip Indian students and professionals (Sundar Pichai)
EXPLANATION
Beyond the broader AI Skill House, Google is providing a specific AI certification in partnership with Badwani AI to help Indian students and early‑career professionals gain recognized credentials in AI, further strengthening the talent pipeline.
EVIDENCE
Pichai referenced a partnership with Badwani AI to reach students and early-career professionals with a Google AI certificate, announced earlier in the week [21].
MAJOR DISCUSSION POINT
AI certification for Indian talent
Jacob Helberg
2 arguments · 159 words per minute · 668 words · 250 seconds
Argument 1
Declaration as a concrete roadmap for shared AI and economic security (Jacob Helberg)
EXPLANATION
Helberg described the Pax Silica Declaration as more than a symbolic agreement; it is a practical roadmap that aligns U.S. and Indian efforts on AI and economic security, reinforcing the link between economic stability and national security.
EVIDENCE
He stated that the Pax Silica Declaration is “not merely an agreement on paper, but a roadmap for a shared future” and linked it to economic security being national security [82-84][96-97].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Helberg described the Pax Silica Declaration as a practical roadmap aligning U.S. and Indian AI and economic security efforts [S6][S10].
MAJOR DISCUSSION POINT
Pax Silica as AI‑economic roadmap
AGREED WITH
Sergio Gor, Sanjay Mehrotra
DISAGREED WITH
Participant, Sergio Gor
Argument 2
Over‑concentration of global supply chains and the risk of weaponized dependency (Jacob Helberg)
EXPLANATION
Helberg warned that global supply chains are overly concentrated, creating vulnerabilities that can be exploited for economic coercion. He called for a collective stance against such weaponized dependencies.
EVIDENCE
He highlighted that the world faces an over-concentrated global supply chain, with allies threatened by economic coercion and blackmail, citing examples such as a city’s lights being extinguished by a keystroke and denial of essential minerals due to political dissent [93-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He warned that global supply chains are overly concentrated and can be weaponised for economic coercion, a concern highlighted in the keynote summaries [S6][S10].
MAJOR DISCUSSION POINT
Supply‑chain concentration and coercion risk
AGREED WITH
Sundar Pichai, Sergio Gor
DISAGREED WITH
Sundar Pichai
Sergio Gor
2 arguments · 133 words per minute · 715 words · 320 seconds
Argument 1
Coalition to secure the full silicon stack and ensure technology serves free societies (Sergio Gor)
EXPLANATION
Gor outlined a strategic coalition that covers the entire silicon stack—from mineral extraction to chip fabrication and data‑center deployment—aimed at replacing coercive dependencies with a trusted, free‑society‑focused technology ecosystem.
EVIDENCE
He described the U.S. leading a coalition that secures the entire silicon stack, from mines to fabs to data centers, replacing coercive dependencies with a positive-sum alliance of trusted industrial bases, and emphasized that technology should empower free people and markets [141-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gor outlined a coalition that secures the entire silicon stack, from mineral extraction to data-center deployment, aimed at replacing coercive dependencies with trusted industrial bases [S6][S10].
MAJOR DISCUSSION POINT
Full‑stack silicon coalition
AGREED WITH
Jacob Helberg, Sanjay Mehrotra
DISAGREED WITH
Participant, Jacob Helberg
Argument 2
Cooperation on critical minerals, smelters, and manufacturing to reduce coercive dependencies (Sergio Gor)
EXPLANATION
Gor stressed the importance of joint work on critical mineral processing, smelters, and manufacturing to build supply chains that cannot be held hostage, thereby reducing the leverage of adversarial actors.
EVIDENCE
He noted India’s strides in critical mineral processing capacity and the U.S.-India cooperation on these issues, stating that discussions on critical minerals focus on interdependency among strong actors and building supply chains not held hostage [148-152][161-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He highlighted U.S.-India collaboration on critical mineral processing, smelting and manufacturing to build supply chains that cannot be held hostage [S6][S10].
MAJOR DISCUSSION POINT
Critical minerals collaboration
AGREED WITH
Sundar Pichai, Jacob Helberg
Participant
1 argument · 49 words per minute · 851 words · 1023 seconds
Argument 1
Forward‑looking partnership to build a secure, resilient technology ecosystem (Participant)
EXPLANATION
The participant framed Pax Silica as a forward‑looking partnership that aims to create a secure and resilient technology ecosystem, emphasizing trusted partnerships and shared responsibility between the U.S. and India.
EVIDENCE
The participant announced that India formally joins Pax Silica, a partnership aimed at strengthening secure and resilient technology ecosystems, and highlighted that trusted partnerships are essential and reflect a shared commitment to responsible innovation and resilient infrastructure [33-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The participant announced India’s formal joining of Pax Silica as a forward-looking partnership to strengthen a secure and resilient technology ecosystem [S10].
MAJOR DISCUSSION POINT
Secure, resilient tech ecosystem
AGREED WITH
Sundar Pichai, Jacob Helberg, Sergio Gor
DISAGREED WITH
Jacob Helberg, Sergio Gor
Ashwini Vaishnav
2 arguments · 93 words per minute · 185 words · 118 seconds
Argument 1
India’s trusted status and deep talent pool as essential to Pax Silica (Ashwini Vaishnav)
EXPLANATION
Vaishnav argued that India’s reputation as a trusted nation, bolstered by its deep talent pool and strategic policy, makes it a vital contributor to the Pax Silica coalition.
EVIDENCE
He stated that India is seen as a trusted country because of its 5,000-year-old civilization, stable governance, and the trust earned under Prime Minister Narendra Modi’s foreign policy, and that this trust is now becoming part of Pax Silica [179-184]. He also highlighted India’s deep talent and engineering depth as critical for the coalition [146-148].
MAJOR DISCUSSION POINT
India’s trust and talent for Pax Silica
AGREED WITH
Sundar Pichai, Sanjay Mehrotra
Argument 2
Expansion of EDA tools and development of Indian design talent (Ashwini Vaishnav)
EXPLANATION
Vaishnav emphasized the need to expand access to electronic design automation (EDA) tools and to develop Indian design talent, noting that India currently has a large number of such tools and capabilities compared with the rest of the world.
EVIDENCE
He mentioned that while only a few countries have access to design EDA tools, India has 315, underscoring the scale that must be developed, and stressed that this capability positions India as a trusted country in the global tech landscape [174-180].
MAJOR DISCUSSION POINT
EDA tools and design talent expansion
Sanjay Mehrotra
3 arguments · 129 words per minute · 496 words · 229 seconds
Argument 1
Memory and storage as foundational to AI performance and growth (Sanjay Mehrotra)
EXPLANATION
Mehrotra explained that memory and storage are critical components for AI, as increasing model size and real‑time performance demands require ever‑greater memory capacity.
EVIDENCE
He stated that memory and storage are critical to driving AI, and that as contextual processing grows and real-time performance demands increase, AI systems need more memory [58-60].
MAJOR DISCUSSION POINT
Memory/storage importance for AI
AGREED WITH
Jacob Helberg, Sergio Gor
Argument 2
$2.75 billion advanced packaging, assembly, and test facility in Gujarat (Sanjay Mehrotra)
EXPLANATION
Mehrotra announced a $2.75 billion investment by Micron in Gujarat to build an advanced packaging, assembly, and test facility, describing its massive scale and the resources required for construction.
EVIDENCE
He detailed Micron’s $2.75 billion investment in Gujarat, describing a 500,000-square-foot facility whose clean room equals ten cricket fields, uses steel three and a half times the Eiffel Tower, and concrete equivalent to 100 Olympic-size swimming pools [66-73].
MAJOR DISCUSSION POINT
Large‑scale semiconductor manufacturing investment
AGREED WITH
Sundar Pichai, Sergio Gor
Argument 3
Significant Indian R&D contributions: 300 inventors, ~2,000 patents (Sanjay Mehrotra)
EXPLANATION
Mehrotra highlighted the substantial R&D output from Micron’s Indian teams, noting hundreds of inventors and thousands of patents generated locally, underscoring India’s role in advanced semiconductor innovation.
EVIDENCE
He reported that Micron’s Indian R&D teams include 300 inventors who have contributed nearly 2,000 patents [64].
MAJOR DISCUSSION POINT
Indian R&D impact on Micron
AGREED WITH
Sundar Pichai, Ashwini Vaishnav
Agreements
Agreement Points
Trusted, resilient supply chains are essential for the AI ecosystem and to avoid coercive dependencies.
Speakers: Sundar Pichai, Jacob Helberg, Sergio Gor
Need for stable, trusted supply‑chains underpinning products, cables, and AI hubs (Sundar Pichai); Over‑concentration of global supply chains and the risk of weaponized dependency (Jacob Helberg); Cooperation on critical minerals, smelters, and manufacturing to reduce coercive dependencies (Sergio Gor)
All three speakers stress that secure, diversified supply chains across minerals, components and infrastructure are vital for AI products, subsea cables and hubs, and that concentration creates vulnerability that must be mitigated [27-28][93-96][148-152][161-162].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the emphasis on supply-chain resilience for AI highlighted in the India AI Impact Summit 2026 keynote and in policy discussions on cyber-resilience and public-private partnerships [S26][S31][S37][S45].
Strengthening the U.S.–India partnership is a cornerstone for advancing AI, secure technology ecosystems and shared economic security.
Speakers: Sundar Pichai, Jacob Helberg, Participant, Sergio Gor
U.S.-India partnership has a critical role to play (Sundar Pichai); Declaration as a concrete roadmap for shared AI and economic security (Jacob Helberg); Forward‑looking partnership to build a secure, resilient technology ecosystem (Participant); India’s entry into Pax Silica is strategic and essential (Sergio Gor)
Each speaker highlights the bilateral relationship as the engine for AI collaboration, policy alignment and a resilient tech ecosystem, framing the Pax Silica signing as a historic step forward [6-7][81-84][33-36][138-146].
Developing India’s talent and capacity is critical for AI and semiconductor innovation.
Speakers: Sundar Pichai, Ashwini Vaishnav, Sanjay Mehrotra
AI Skill House program to train 10 million future Indian leaders (Sundar Pichai); India’s trusted status and deep talent pool as essential to Pax Silica (Ashwini Vaishnav); Significant Indian R&D contributions: 300 inventors, ~2,000 patents (Sanjay Mehrotra)
The speakers converge on the importance of scaling skills, leveraging a deep engineering talent base and recognizing substantial R&D output from Indian teams to fuel AI and chip advances [20-21][146-148][64].
POLICY CONTEXT (KNOWLEDGE BASE)
Skilling and education sessions at the summit stressed building AI talent in India as essential for semiconductor and AI innovation, echoing broader policy calls to develop a skilled workforce [S29][S39].
Large‑scale infrastructure investments are needed to underpin AI capabilities and reduce dependence.
Speakers: Sundar Pichai, Sanjay Mehrotra, Sergio Gor
AI Hub in Vizag and India‑America Connect subsea cable initiative to boost infrastructure (Sundar Pichai); $2.75 billion advanced packaging, assembly, and test facility in Gujarat (Sanjay Mehrotra); Coalition to secure the full silicon stack and ensure technology serves free societies (Sergio Gor)
All three underline the necessity of massive physical investments, from computing hubs and subsea cables to semiconductor fabs and packaging lines, to create a trusted, end-to-end AI supply chain [22-26][66-73][141-145].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple speakers at the summit called for large-scale investments in data centres, subsea cables and compute capacity to reduce external dependence, aligning with infrastructure priorities outlined in the summit agenda [S26][S28][S30][S31].
AI hardware and supply‑chain resilience are strategic assets for economic and national security.
Speakers: Jacob Helberg, Sergio Gor, Sanjay Mehrotra
Declaration as a concrete roadmap for shared AI and economic security (Jacob Helberg); Coalition to secure the full silicon stack and ensure technology serves free societies (Sergio Gor); Memory and storage as foundational to AI performance and growth (Sanjay Mehrotra)
The speakers agree that robust AI hardware, from memory/storage to the full silicon stack, underpins economic sovereignty and national security, making supply-chain independence a security imperative [58-60][96-97][144-146].
POLICY CONTEXT (KNOWLEDGE BASE)
AI hardware and supply-chain resilience were framed as strategic economic and national-security assets in both the summit discussions and supply-chain fortification reports [S31][S37][S45].
Similar Viewpoints
Both stress that supply‑chain concentration creates strategic vulnerability and that trusted, diversified flows are essential for AI and related infrastructure [27-28][93-96].
Speakers: Sundar Pichai, Jacob Helberg
Need for stable, trusted supply‑chains underpinning products, cables, and AI hubs (Sundar Pichai); Over‑concentration of global supply chains and the risk of weaponized dependency (Jacob Helberg)
Both frame the bilateral relationship as a forward‑looking, strategic partnership that will drive secure technology development [6-7][33-36].
Speakers: Sundar Pichai, Participant
U.S.-India partnership has a critical role to play (Sundar Pichai); Forward‑looking partnership to build a secure, resilient technology ecosystem (Participant)
Both highlight India’s strong developer talent and trusted status as a foundation for AI collaboration [13][146-148].
Speakers: Sundar Pichai, Ashwini Vaishnav
To empower India’s incredible developer community (Sundar Pichai); India’s trusted status and deep talent pool as essential to Pax Silica (Ashwini Vaishnav)
Both argue that diversifying critical mineral and manufacturing supply chains is necessary to prevent coercive economic leverage [93-96][148-152][161-162].
Speakers: Jacob Helberg, Sergio Gor
Over‑concentration of global supply chains and the risk of weaponized dependency (Jacob Helberg); Cooperation on critical minerals, smelters, and manufacturing to reduce coercive dependencies (Sergio Gor)
Both emphasize large‑scale semiconductor infrastructure as a pillar of a trusted, free‑society technology ecosystem [66-73][141-145].
Speakers: Sanjay Mehrotra, Sergio Gor
$2.75 billion advanced packaging, assembly, and test facility in Gujarat (Sanjay Mehrotra); Coalition to secure the full silicon stack and ensure technology serves free societies (Sergio Gor)
Both see AI hardware (memory/storage) as a strategic component of economic security and national resilience [58-60][96-97].
Speakers: Jacob Helberg, Sanjay Mehrotra
Declaration as a concrete roadmap for shared AI and economic security (Jacob Helberg); Memory and storage as foundational to AI performance and growth (Sanjay Mehrotra)
Unexpected Consensus
Alignment between a government official and a private‑sector executive on AI hardware as a national security asset.
Speakers: Jacob Helberg, Sanjay Mehrotra
Declaration as a concrete roadmap for shared AI and economic security (Jacob Helberg); Memory and storage as foundational to AI performance and growth (Sanjay Mehrotra)
It is notable that a senior U.S. Under-Secretary and the CEO of a semiconductor firm converge on the view that AI-related hardware (memory, storage, and broader silicon stack) is not merely commercial but a cornerstone of national economic security, linking policy and industry perspectives [58-60][96-97].
POLICY CONTEXT (KNOWLEDGE BASE)
The fireside chat highlighted alignment between government R&D leaders and private-sector executives on treating AI hardware as a national-security priority, reflecting the public-private coordination emphasized in supply-chain policy dialogues [S28][S37].
Overall Assessment

The speakers display a high degree of consensus around three core themes: (1) the necessity of a trusted, diversified supply chain for AI hardware and minerals; (2) the strategic importance of deepening U.S.–India collaboration through the Pax Silica framework; and (3) the critical role of large‑scale infrastructure and talent development in sustaining AI growth. These shared positions cut across government, corporate and diplomatic voices, indicating a unified strategic narrative.

Strong consensus – the convergence of multiple actors (government, industry, and diplomatic) on supply‑chain resilience, partnership depth, and infrastructure investment suggests a coordinated policy and commercial agenda that will likely accelerate implementation of the Pax Silica objectives and shape future AI governance.

Differences
Different Viewpoints
Assessment of current supply‑chain stability and the need for diversification
Speakers: Sundar Pichai, Jacob Helberg
Need for stable, trusted supply‑chains underpinning products, cables, and AI hubs (Sundar Pichai); Over‑concentration of global supply chains and the risk of weaponized dependency (Jacob Helberg)
Pichai stresses that products, subsea cables and AI hubs rely on a “complex flow of goods and components across borders” and that “stable supply chains built on a foundation of shared trust” are essential [27-28]. Helberg, by contrast, warns that the world faces an “over-concentrated” global supply chain that can be weaponised for economic coercion, citing examples such as a city’s lights being extinguished by a keystroke and denial of essential minerals [93-96]. Thus they differ on whether existing supply chains are already trustworthy or fundamentally vulnerable and in need of restructuring.
POLICY CONTEXT (KNOWLEDGE BASE)
The Parallel Session D3 on supply-chain disruptions examined current stability and advocated diversification through regional collaboration and trade-facilitation agreements [S43].
Framing of the Pax Silica initiative’s primary purpose
Speakers: Participant, Jacob Helberg, Sergio Gor
Forward‑looking partnership to build a secure, resilient technology ecosystem (Participant); Declaration as a concrete roadmap for shared AI and economic security (Jacob Helberg); Coalition to secure the full silicon stack and ensure technology serves free societies (Sergio Gor)
The Participant describes Pax Silica primarily as a “forward-looking partnership aimed at strengthening secure and resilient technology ecosystems,” emphasizing trusted partnerships [33-36]. Helberg presents the same declaration as a “roadmap for a shared future” that links AI cooperation with economic security, positioning it as a policy instrument [81-84][96-97]. Gor expands the framing to a strategic coalition securing the entire silicon stack to empower free societies, focusing on industrial and security dimensions [141-145]. While all endorse the initiative, they diverge on its central emphasis: partnership building, policy roadmap, or full-stack industrial coalition.
Unexpected Differences
Cultural‑based trust versus security‑based trust in the partnership
Speakers: Ashwini Vaishnav, Jacob Helberg
India’s trusted status and deep talent pool as essential to Pax Silica (Ashwini Vaishnav); Over‑concentration of global supply chains and the risk of weaponized dependency (Jacob Helberg)
Vaishnav attributes India’s value in Pax Silica to its ancient civilizational heritage, stable governance and the trust earned through “5000-year-old civilization” and Modi’s foreign policy [179-184]. Helberg, however, frames trust in terms of strategic security, warning that existing supply chains are vulnerable to coercion and that economic security must be protected [93-96]. The shift from a cultural-trust narrative to a security-risk narrative was not anticipated and reveals an unexpected divergence in how trust is conceptualised within the same coalition.
POLICY CONTEXT (KNOWLEDGE BASE)
Research on cultural trust dynamics and the multidimensional nature of trust provides context for the debate between cultural-based and security-based trust in the US-India partnership [S35][S36].
Overall Assessment

The discussion shows broad consensus on the necessity of a US‑India AI and technology partnership, but the speakers diverge on how to achieve a secure ecosystem. The most salient disagreements concern the perceived stability of current supply chains versus the need for diversification, and the framing of Pax Silica—whether as a partnership‑building effort, a policy roadmap linking AI to economic security, or a full‑stack silicon coalition. An unexpected tension emerges between cultural notions of trust and security‑focused risk assessments.

Moderate. While all participants share the overarching goal of a resilient, inclusive AI ecosystem, the differing diagnoses of supply‑chain health and the varied strategic emphases (product investment, policy roadmap, industrial coalition, cultural trust) indicate substantive but not irreconcilable disagreements. These gaps could affect implementation timelines and the allocation of resources, requiring further coordination to align the technical, policy, and strategic dimensions of the partnership.

Partial Agreements
All speakers concur that a secure, inclusive AI ecosystem between the United States and India is essential. However, they propose different pathways: Pichai focuses on product‑level investments such as AI‑enhanced consumer tools, AI hubs and subsea cables [15-26]; Helberg stresses policy coordination and a declaration that ties AI cooperation to economic security [81-84][96-97]; Gore advocates a broader industrial coalition covering the entire silicon supply chain from minerals to data centres [141-145]; the Participant highlights the partnership’s role in building trusted, resilient infrastructure [33-36]. The shared goal is evident, but the means—product investment, policy roadmap, full‑stack coalition, or partnership framework—are distinct.
Speakers: Sundar Pichai, Jacob Helberg, Sergio Gore, Participant
AI‑driven consumer products and services for Indian users (Sundar Pichai) Declaration as a concrete roadmap for shared AI and economic security (Jacob Helberg) Coalition to secure the full silicon stack and ensure technology serves free societies (Sergio Gore) Forward‑looking partnership to build a secure, resilient technology ecosystem (Participant)
Takeaways
Key takeaways
The United States and India reaffirmed a deepening partnership in AI, emphasizing joint product development, AI‑driven services for Indian consumers, and large‑scale infrastructure investments. Google announced a $15 billion AI Hub in Vizag, a new India‑America subsea cable system, and the AI Skill House program to train 10 million Indian leaders, alongside contributions of Gemini models for local AI applications. The Pax Silica Declaration was signed, establishing a coalition to secure the full silicon stack, protect economic security, and ensure that advanced technologies serve free societies. Micron highlighted its $2.75 billion advanced packaging, assembly, and test facility in Gujarat and the significant R&D contributions from Indian engineers (~300 inventors, ~2,000 patents). Both governments stressed the importance of resilient, trusted supply chains for chips, critical minerals, and AI infrastructure, positioning India’s talent pool and emerging mineral processing capacity as strategic assets.
Resolutions and action items
Formal signing of the Pax Silica Declaration between the United States and India. Google’s commitment to deploy the AI Hub in Vizag and launch the India‑America Connect subsea cable network. Launch of the AI Skill House initiative targeting 10 million Indian AI‑skill trainees and rollout of the Google AI Certificate in partnership with Badwani AI. Micron’s $2.75 billion investment in an advanced packaging, assembly, and test facility in Sanand, Gujarat, with plans for a grand opening and operational ramp‑up. Ongoing collaboration with Indian government agencies on AI applications for agriculture (monsoon forecasts), healthcare (disease screening), and multilingual services. Joint work on critical mineral sourcing, smelting, and semiconductor manufacturing to reduce over‑concentration and weaponized dependency.
Unresolved issues
Detailed governance structure, decision‑making processes, and accountability mechanisms for the Pax Silica coalition were not defined. Specific timelines, milestones, and performance metrics for the AI Skill House program and the AI Hub deployment were not disclosed. How the new subsea cable routes will be financed, regulated, and integrated with existing infrastructure remains unclear. The transcript did not address potential regulatory or data‑privacy challenges associated with cross‑border AI services. Mechanisms for monitoring and mitigating supply‑chain risks, especially regarding critical minerals, were mentioned but not concretely outlined.
Suggested compromises
Balancing the need for secure, sovereign supply chains with the desire for open, collaborative innovation – e.g., building trusted partnerships rather than exclusive protectionism. Acknowledging both nations’ “spirit of defiance” (saying “no” to coercion) while committing to joint investment and shared standards to reduce dependency on hostile actors.
Thought Provoking Comments
We must work together to ensure the benefits of AI are available to everyone and everywhere… The U.S.-India partnership has a critical role to play. Google is proud to serve as a connection point between them, both figuratively and literally.
Sets a broad, inclusive vision for AI that frames the entire summit around shared prosperity rather than competition, and introduces the concrete idea of a ‘connection point’ linking the two nations.
Established the thematic foundation of the discussion, prompting subsequent speakers to reference collaboration, infrastructure, and shared responsibility. It steered the conversation toward concrete initiatives (AI Hub, subsea cables) and reinforced a cooperative tone.
Speaker: Sundar Pichai
The size of this facility that has been built is 500,000 square feet… the clean room is the size of 10 cricket fields… the amount of steel used is about three and a half times the Eiffel Tower.
Uses vivid, tangible analogies to convey the massive scale of semiconductor manufacturing in India, highlighting the strategic importance of physical infrastructure in AI advancement.
Shifted the dialogue from abstract policy to concrete industrial capability, underscoring India’s emerging role in the global chip supply chain. It prompted later remarks about supply‑chain security and the need for trusted manufacturing bases.
Speaker: Sanjay Mehrotra
We say no to weaponized dependency, and together we say that economic security is national security… Our fundamental mission is not resistance, it’s renewal. We are forging a supply chain that is the foundation for prosperity.
Frames the Pax Silica agreement as a decisive stance against coercive economic practices, linking economic policy to democratic values and national security, and introduces the concept of ‘renewal’ over ‘resistance’.
Created a turning point by moving the conversation from partnership celebration to a strategic, security‑focused narrative. It prompted the ambassador and others to emphasize the coalition’s purpose, deepening the discussion on supply‑chain resilience and geopolitical implications.
Speaker: Jacob Helberg
Pax Silica is about whether free societies will control the commanding heights of the global economy… Innovation happens in Bengaluru and Silicon Valley, not in surveillance states.
Draws a stark moral and geopolitical contrast between open democracies and authoritarian regimes, positioning the coalition as a values‑driven technology alliance.
Reinforced and expanded Helberg’s security framing, shifting the tone toward a values‑based competition. It prompted acknowledgment of India’s strategic depth and set the stage for the fireside conversation on building a trusted tech stack.
Speaker: Sergio Gore
India has 315 EDA tools… India is seen as a trusted country because of its 5,000‑year civilization and the gravitas that the world believes in.
Highlights India’s unique technical capabilities (EDA tools) and cultural‑historical credibility, linking soft power to hard tech leadership.
Added depth to the narrative of trust and capability, supporting the earlier claims about India’s role in the coalition. It helped transition the discussion toward the formal signing and the symbolic significance of the declaration.
Speaker: Ashwini Vaishnav
Overall Assessment

The discussion was anchored by Sundar Pichai’s inclusive AI vision, which set a collaborative tone. Sanjay Mehrotra’s concrete illustration of India’s semiconductor capacity grounded the conversation in tangible infrastructure. Jacob Helberg’s security‑focused framing of Pax Silica marked a pivotal shift, recasting the agreement as a strategic response to coercive dependencies. Sergio Gore amplified this by tying the coalition to democratic values versus surveillance states, deepening the geopolitical dimension. Finally, Ashwini Vaishnav’s emphasis on India’s technical depth and historic gravitas reinforced the narrative of trust, culminating in the formal signing. Together, these comments steered the dialogue from aspirational partnership to a nuanced, security‑aware, values‑driven alliance, shaping the overall direction and tone of the summit.

Follow-up Questions
How will the 22 Gemini models contributed to AI Coach be integrated into Indian developer workflows and what impact will they have?
Understanding adoption and practical outcomes of the AI models for the Indian developer community is essential for measuring the initiative’s success.
Speaker: Sundar Pichai
What are the plans and timeline for deploying AI-driven monsoon forecasts to Indian farmers?
Deploying accurate weather forecasts can significantly improve agricultural productivity; clarity on rollout is needed.
Speaker: Sundar Pichai
How will AI tools assist healthcare workers in screening for diseases such as diabetic retinopathy in India?
Evaluating the effectiveness of AI‑enabled diagnostics is crucial for improving public health outcomes.
Speaker: Sundar Pichai
What strategies will be used to make information and services accessible in more Indian languages through AI?
Language inclusion determines how broadly AI benefits can reach India’s diverse population.
Speaker: Sundar Pichai
How will the AI Skill House and Google AI certificate program reach the target of equipping 10 million future Indian leaders, and how will success be measured?
A clear implementation and metrics plan is needed to assess the scale and impact of the skilling effort.
Speaker: Sundar Pichai
What is the expected timeline, capacity, and job‑creation impact of the AI Hub in Vizag?
Details on the hub’s operational schedule and economic benefits are important for stakeholders.
Speaker: Sundar Pichai
What are the technical specifications and anticipated benefits of the new subsea cable routes under the India‑America Connect Initiative?
Understanding the infrastructure’s capabilities helps gauge its effect on digital trade and connectivity.
Speaker: Sundar Pichai
How will Pax Silica ensure safe and secure supply chains for key technologies, and what metrics will be used to assess security?
Supply‑chain resilience is a cornerstone of the partnership; measurable safeguards are required.
Speaker: Sundar Pichai
What concrete steps are being taken to diversify the global supply chain for critical minerals to reduce over‑concentration?
Diversification is vital to mitigate economic coercion and ensure national security.
Speaker: Jacob Helberg
How will the Pax Silica coalition address the risk of economic coercion and blackmail in technology supply chains?
Clarifying mechanisms to protect partners from coercive practices strengthens the alliance.
Speaker: Jacob Helberg
What are the current capabilities and gaps in India’s critical‑mineral processing capacity, and how can they be expanded?
Identifying processing shortfalls informs investment and policy decisions to achieve self‑sufficiency.
Speaker: Sergio Gore
How can India’s engineering talent and EDA‑tool ecosystem be scaled to meet global demand?
Scaling design tools and talent is essential for maintaining India’s trusted status in the semiconductor supply chain.
Speaker: Ashwini Vaishnav
What metrics will be used to assess the effectiveness of the Pax Silica declaration in fostering resilient technology ecosystems?
Establishing evaluation criteria will allow both nations to track progress and adjust strategies.
Speaker: Multiple participants (Jacob Helberg, Sergio Gore, Ashwini Vaishnav)
How will the partnership ensure that AI innovations are not constrained by global‑governance models that could limit sovereignty?
Protecting sovereign decision‑making while promoting AI development is a key policy concern.
Speaker: Jacob Helberg
What are the plans for collaboration between Micron’s R&D centers in India and U.S. teams on advanced DRAM design?
Co‑development details are needed to understand how joint innovation will accelerate memory technology.
Speaker: Sanjay Mehrotra
What is the status and expected impact of Micron’s $2.75 billion investment in advanced packaging, assembly, and test technologies in Gujarat?
Clarifying the investment’s timeline and outcomes will indicate its contribution to India’s semiconductor ecosystem.
Speaker: Sanjay Mehrotra
How will the new 500,000 sq ft facility in Sanand contribute to India’s semiconductor manufacturing ecosystem?
Understanding the facility’s role helps gauge its effect on capacity, supply‑chain resilience, and job creation.
Speaker: Sanjay Mehrotra

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Uday Shankar, Vice Chairman, JioStar India

Session at a glance: Summary, keypoints, and speakers overview

Summary

Speaker 1 opens by praising the Prime Minister’s AI-centred growth agenda and the India AI team for delivering the summit, then states he will not debate AI technology itself but focus on its potential for the Indian media and entertainment sector [1-2][4-6]. Drawing on three decades in media, he notes how successive technologies, from personal computers to digital news platforms, have repeatedly increased speed, agility and audience reach [9-11].


Over the past 25 years the industry has expanded from a few-billion-dollar market to the world’s fifth-largest media and entertainment market, now worth over $30 billion with 900 channels and 800 million video viewers [15-19]. This expansion has reshaped Indian aspirations and created a vibrant, multilingual content ecosystem [20-23], yet India remains a largely domestic content producer and has not become a global content powerhouse [31-33]. He attributes this to structural constraints such as limited capital (average Indian film $3-5 million versus Hollywood $65-100 million) and a talent pool that is often deployed for foreign productions [44-51][53-60].


The speaker argues that AI offers a “once-in-a-generation” chance to overcome cost barriers, accelerate production and shift the industry’s three pillars: content, consumer and commerce [70-74]. He cites JioStar’s 100-episode live-action Mahabharata series, which achieved global-scale visual quality three to five times faster than a traditional pipeline, demonstrating the removal of old constraints [78-81]. AI also enables new consumer experiences such as conversational discovery, interactive storytelling and precise regionalization, and it makes granular segmentation, dynamic pricing and new value categories feasible [90-97].


To capture this opportunity, he calls for three commitments: self-disruption, development of AI-native creative talent that blends storytelling with technology, and policy frameworks that act as accelerators rather than barriers [106-112][124-130]. He warns that while the West is hampered by legacy liabilities, India’s lack of baggage gives it the freedom to design inclusive revenue models and set global standards [113-119][146-149]. Concluding, he expresses confidence that India can become the world’s AI-driven media leader if it moves swiftly, aligning market scale, cultural depth and technology [150-155][158].


Keypoints

India’s media sector has achieved remarkable domestic growth but remains limited in global influence.


The speaker highlights the rapid expansion of the industry (e.g., from a few billion-dollar market to the world’s fifth-largest media market) and the massive domestic audience, yet points out that India still “produces and consumes domestically” and lacks a global content footprint [31-34][41-46][47-53][57-62].


Artificial intelligence is presented as a once-in-a-generation catalyst to overcome existing barriers.


AI can “reduce costs” and “unlock an unprecedented capacity to produce more,” as demonstrated by the rapid production of a 100-episode series [70-78][81-84]. It also promises new consumer experiences (conversational discovery, interactive storytelling, regionalization) and smarter commerce through granular segmentation, dynamic pricing, and new value categories [88-96][98-102].


A three-fold call to action for the industry, talent pipeline, and policy framework.


1. Disrupt ourselves or be disrupted – urging incumbents to adopt AI now rather than resist, citing past resistance to digital newsrooms and streaming [105-108][112-119].


2. Develop AI-native creative talent – stressing the need for professionals who blend storytelling with AI tool mastery, and calling for large-scale skilling [124-130].


3. Make policy an accelerator – advocating for India-specific regulatory guardrails that remove obstacles without importing Western models, and learning from China’s approach [131-138].


The overarching vision is for India to become the world’s leading creative capital in the AI era.


By leveraging its “formidable cultural depth” and entrepreneurial spirit, India can shift from a back-office support role to the “front office, the producer and deliverer of content globally,” potentially raising its share of the $3-trillion global media market from under 2% to 4-5% [70-73][85-86][149-152].


Overall purpose/goal:


The speaker seeks to rally government, industry leaders, creators, and policymakers around a coordinated AI strategy that transforms India’s media and entertainment ecosystem-from a domestically-focused market to a globally competitive creative powerhouse-thereby unlocking billions of dollars of new value for the country.


Overall tone:


The address begins with celebratory and appreciative remarks about the Prime Minister’s vision and India’s media growth [1-3]. It then shifts to a diagnostic, analytical tone describing structural constraints [31-53][57-62]. The narrative becomes increasingly visionary and urgent as AI’s potential is outlined [70-84][88-102], culminating in a rallying, confident call-to-action that emphasizes ambition over anxiety [106-119][124-152]. The tone remains optimistic throughout but moves from reflective to motivational as the speech progresses.


Speakers

Speaker 1


– Role/Title: Keynote speaker (per the session title, Uday Shankar, Vice Chairman, JioStar India) [S1][S3]


– Area of expertise:


Additional speakers:


Speaker 4


– Role/Title: Audience member asking a question [S2]


– Area of expertise:


Full session report: Comprehensive analysis and detailed insights

Speaker 1 began by congratulating the Hon Prime Minister for centering India’s growth agenda on artificial intelligence and praising the India AI team for flawlessly delivering the summit, which “could not have arrived sooner” [1-3]. He explicitly said he would not add to the “good-versus-evil” debate on AI readiness, choosing instead to focus on what AI can achieve for the Indian media and entertainment sector [4-6].


Drawing on more than three decades in media, he recounted key inflection points, from the first personal computers in newsrooms to the launch of India’s inaugural end-to-end digital news platform, Aaj Tak; each injected speed, agility and efficiency, reshaping audience relationships and positioning Indian media at the forefront of innovation [9-11].


He quantified the sector’s meteoric rise: in roughly 25 years the industry grew from a few-billion-dollar market to the world’s fifth-largest media-entertainment market, now contributing over $30 billion to the economy [15-16]; from a single broadcaster to about 900 channels in dozens of languages; from ~70 million TV households to >210 million, and from a few hundred million video viewers to >800 million [17-19]. Content has evolved from tentative family dramas to a vast, multilingual tapestry that fuels the aspirations of an entire generation [20-23].


Despite this domestic triumph, India remains largely a domestic content producer. He contrasted India’s inward-focused output with the global impact of far smaller markets (South Korea’s “Squid Game” and “Parasite,” and Puerto Rico’s Spanish-language superstar who headlined the Super Bowl), illustrating the mindset the Prime Minister called for at Waves: “Create in India, create for the world” [31-36].


He identified three structural constraints that keep India confined: (1) the sheer size of the home market breeds complacency; (2) capital constraints (average Hollywood studio budgets of $65-100 m, tentpoles up to $300-350 m, versus $3-5 m for Indian films, and Hollywood TV episodes of $20-30 m versus a fraction of that for Indian equivalents) limit financing because monetisation remains domestic [41-53]; (3) a talent paradox: world-class creative and technical talent (e.g., VFX) is exported to support Western productions because domestic financing cannot afford it [57-62].


AI, he argued, offers a once-in-a-generation chance to become the world’s creative capital: not just a back-office service provider but the front-office producer of global content. Because the sector is built on human creativity, it is the biggest beneficiary of AI, which rewires three core pillars: content, consumer and commerce.


On the content pillar, AI removes long-standing infrastructure barriers. He cited JioStar’s recent 100-episode live-action series “Mahabharata Ek Dharmayu,” produced three-to-five times faster than a traditional pipeline while delivering global-scale visual quality and significant cost efficiencies [70-78][79-84]. He also noted that JioStar has invested over $10 billion in content in the past three years and will continue to do so [65-67]. Moreover, every major global media enterprise is competing fiercely for Indian viewers’ attention; those who are not here are simply unable to crack this complex market [68-70]. The speaker’s agenda for JioStar, to harness these attributes and position the company as the world’s leading foundry for stories and creativity, is outlined in the transcript [84-86].


On the consumer side, AI shatters the historic one-directional “producer-to-audience” model, enabling conversational discovery, interactive storytelling and sophisticated regionalisation that goes beyond simple dubbing, thereby capturing the authentic texture of India’s diverse markets [88-92].


On the commerce side, AI turns the blunt levers of advertising and subscription into granular consumer segmentation, dynamic pricing and packaging that reflect individual lifestyles and purchasing power, unlocking entirely new value categories [93-102].


Collectively, these disruptions form the engine of the “orange economy” the Prime Minister envisions. The global media market is near $3 trillion today, projected to reach $3.5 trillion by 2029, while India’s share is under 2% [99-101]. Even a modest rise to 4-5% would generate tens of billions of dollars in new value [102-103].
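The “tens of billions” claim can be sanity-checked with simple arithmetic. The sketch below is illustrative only: the $3.5 trillion projection and the 2% and 4-5% shares come from the speech, while the function name and structure are ours.

```python
def added_value(market_usd: float, current_share: float, target_share: float) -> float:
    """Incremental value captured by moving from the current market share
    to a target share of a market of the given size (all figures in USD)."""
    return market_usd * (target_share - current_share)

market_2029 = 3.5e12          # projected global media market by 2029 (from the speech)
current = 0.02                # India's share today: under 2% (from the speech)

for target in (0.04, 0.05):   # the 4-5% ambition
    gain = added_value(market_2029, current, target)
    print(f"{target:.0%} share -> roughly ${gain / 1e9:.0f} billion of new value")
```

Against a $3.5 trillion market, the 2-to-4% move alone implies on the order of $70 billion in new value, consistent with the speaker’s order-of-magnitude claim.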


To seize this moment, he called for three commitments:


1. Disrupt ourselves or be disrupted. He warned that incumbents historically defended fortresses until they were buried, emphasizing India’s unique advantage, its freedom to move and lack of legacy baggage, while Hollywood is “approaching AI defensively, paralyzed by legal battles and protectionist reflexes” [105-119][112-114][115-117]. He added that India can design inclusive revenue models that work for writers, actors, technicians and producers, creating a larger pie rather than a zero-sum game [118-121].


2. Make India the global hot-bed for AI-native creative talent. The most valuable future media professional will blend storytelling with AI fluency; India must fuse its deep creative traditions with its sharp engineering talent through large-scale skilling and upskilling programmes [124-130][128-129].


3. Ensure policy acts as an accelerator, not a brake. Regulators should remove obstacles, set India-specific guardrails, and avoid wholesale import of Western models, taking a cue from China’s “clear-eyed” regulatory approach that aligns with national ambitions [131-138][132-135].


He highlighted the symbolic importance of holding the summit in Bharat Mandapam, the first global AI summit hosted in the Global South, signalling a shift away from a world where tools, platforms and rules were created elsewhere. AI levels the playing field; “everybody is starting at the same place” as far as sector application is concerned [124-126][128-130].


In closing, he asserted that the question is no longer “whether” India can become the AI-driven global media leader, but “whether we will move fast enough to claim the position that rightfully belongs to us.” He expressed confidence that India’s energy, ambition and the alignment of market scale with technological capability mark the start of a race in which AI is the ultimate leveler, urging the audience to shape and lead the new era [150-158].


Session transcript: Complete transcript of the session
Speaker 1

Let me begin by first of all congratulating our Honorable Prime Minister on his vision and leadership in centering this country’s growth agenda around artificial intelligence. I must also compliment the India AI team for executing so flawlessly on the Prime Minister’s vision and bringing us all together at this seminal forum. The summit could not have come a day too soon. As for myself, I am not here to talk about the technology of AI. Enough debate has happened on that and I do not want to add to the debate on whether we are ready. Whether we are ready and whether that whole debate of good versus evil. We do a lot of that in our entertainment stories.

But I personally am a big believer in the power of harnessing emerging technologies to transform societies, businesses, and lives of people. Over three decades as a media professional, I have had a ringside view of technology’s transformative impact, starting with the introduction of the first personal computer in newsrooms and the launch of India’s first end-to-end digital news platform, Aaj Tak. At every stage since, technology has allowed the businesses I have been involved with to operate with speed, agility, and efficiency that fundamentally changed our relationship with audiences. At each of these inflection points, these businesses have been at the forefront of adopting and introducing innovations to Indian people. This has helped all stakeholders. It is exactly because of this adoption of cutting-edge technologies that India’s media and entertainment sector, though a late entrant to the world of technology, has rapidly become one of the most exciting media markets globally.

The transformation has truly been extraordinary. Within the span of just about a quarter century or so, we have gone from an industry valued at just a few billion dollars to the fifth largest media and entertainment market in the world. We are valued with our economic contribution going to over 30 billion dollars. We have transitioned from one sleepy broadcaster at the turn of the century to about 900 channels across dozens of languages. Our consumer universe has expanded from about 70 million households to more than 210 million television households and over 800 million video viewers. And the content itself has evolved beyond recognition. From a few tentative experiments in family drama to a vast, diverse, multilingual ecosystem serving the most heterogeneous consumer universe in the world.

In this process, we have built an ecosystem that has fired the aspirations and ambitions of the whole country. The aspirations of a generation of Indians, what they wanted to become and what they thought was possible, have been shaped as much by what they watched as by what they were taught. While the social impact gives me immense satisfaction, the economic and business impact is equally compelling. At JioStar alone, we have invested over $10 billion in content over the past three years, and that will continue to be the case going forward, if anything. Every major global media enterprise is competing fiercely for the Indian viewers’ attention. Those who are not here are not here simply because they could not crack this complex market.

So the key question… The key question is what can AI do for the… Indian media industry that we are already not doing? To answer that, we need to zoom out and look at the broader landscape a little bit. Despite our remarkable domestic progress, India has not yet broken through as a global content powerhouse. We still produce and consume domestically. Compare this to countries with far smaller population, less cultural diversity, and less formidable technological capabilities that despite those, they have managed to capture the global imagination. A small country like South Korea gave the world Squid Game and Parasite. Puerto Rico, an island of 3 million people, just gave the world the most streamed artist on the planet, performing entirely in Spanish, headlining the Super Bowl halftime show, grabbing global attention.

These cultures dared to imagine that their stories and their languages could command a global stage, and they succeeded. This is precisely the mindset that the Honourable Prime Minister called for in his rallying cry at Waves last year. Create in India, create for the world. It’s a dream many of us in the media industry have always nourished, but so far it’s just remained a dream. So why have we not been able to break out of the domestic bounds and achieve a larger mindshare and market share globally? In my view, first and foremost, our big domestic market itself has been a distraction. We can get easily satisfied as long as we are getting attention and business in India.

But our ability to translate our abundant ambition into reality has also been constrained by a few structural factors. Chief among them being the capital constraints. An inability to attract global talent and a target audience largely confined to the domestic audience. The numbers make these constraints stark. The average Hollywood studio production commands a budget of 65 to 100 million dollars. A major tent pole runs up to anything, anything up to 300 or 350 million dollars. The average Indian film, 3 to 5 million dollars. And this is equally true of television production. A single episode of a marquee series in Hollywood can cost up to 20 to 30 million dollars. We can only afford to spend a fraction of that. Because, one, we have the constraint, but two, we are not able to get the capital because our primary market of monetization still remains India.

And as a result, it has become a spiral, and we just cannot compete globally in that race. This financial ceiling has created a paradox of talent as well. India has some of the finest creative and technical talent anywhere in the world. We have created cutting-edge technology and production capabilities in areas such as VFX that power the world’s biggest productions. But these are all deployed to support Western productions. Our own producers and directors, who have the quality and the ambition, cannot afford these services, because our monetization universe is much smaller and more limited. So when both capital and talent are constrained, the horizon of our content narrows with them. Our films, our television, our music have been made primarily for consumers within the country, or at best for the diaspora overseas.

There have been some exceptions, but they have remained just that: exceptions, not a pattern. The result is a peculiar chicken-and-egg problem. Limited capital, much of which owes to our status as a developing economy, and a primarily domestic audience constrain our global competitiveness. That lack of competitiveness in turn hinders our ability to attract the capital that would close the gap. This is not to lament what we have achieved; we have done remarkably well given the limitations and challenges that we had. But the opportunity at hand is much larger. AI provides India a once-in-a-generation opportunity to become the creative capital of the world. Not just the back office for the world’s content, but the front office: the producer and deliverer of content globally, the leader, the standard bearer.

Because our business is built on human creativity, the media and entertainment sector is said to be the biggest beneficiary of AI. This is a catalyst that fundamentally rewires the three core pillars on which our entire industry is built: content, consumer, and commerce. On content: for decades, the limitations of infrastructure have been a constraint on the business of media and entertainment. Today, that barrier is coming down rapidly. AI-powered production is not just reducing costs; it is unlocking an unprecedented capacity to produce more and offer more. At JioStar, we recently produced Mahabharata Ek Dharmayu, a 100-episode live-action series, which is exhibited right here at the Jio Pavilion. We achieved the visual scale and emotional depth of a global production three to five times faster than a traditional pipeline.

The economic efficiencies were significant, too. What this tells me is that the old barriers are vanishing. The only binding constraints left are imagination and creativity. And in a landscape where imagination determines the winner, India’s formidable cultural depth and inherent DNA for storytelling and entrepreneurship become our most powerful competitive assets. Our agenda at JioStar is clear: to harness these attributes and position ourselves as the world’s leading foundry for stories and creativity. For consumers, we have an opportunity to retire a model that has been one-directional for a century: we produce, they receive. AI shatters that monologue. It allows us to create experiences that audiences have never had before.

We are opening a new frontier in the viewer relationship: conversational discovery, interactive storytelling, and regionalization that goes beyond simple dubbing to capture the authentic texture of India’s distinct markets. And finally, commerce. Since the first newspapers, this industry has operated with exactly two monetization models: advertising and subscription. These are two incredibly blunt levers for a market of 800 million viewers with wildly different economic realities. AI makes genuine consumer segmentation a reality. It enables dynamic pricing and packaging that actually reflect how people live, how they consume, what they consume, and what they can afford. It unlocks entirely new categories of value we haven’t even begun to imagine in the media and entertainment sector.
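To make the segmentation-and-pricing idea concrete, here is a minimal, purely illustrative sketch. None of the segments, signals, or prices below come from the talk; the viewer attributes, plan names, and the 299-rupee base price are hypothetical stand-ins, and a real system would use learned models rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class Viewer:
    monthly_watch_hours: float
    preferred_language: str
    price_sensitivity: float  # 0 = insensitive, 1 = highly sensitive

BASE_MONTHLY_PRICE_INR = 299  # hypothetical base subscription price

def personalized_offer(v: Viewer) -> dict:
    """Toy heuristic mapping a viewer profile to a plan and price."""
    if v.monthly_watch_hours < 5:
        # Light viewers: keep them engaged with a free, ad-funded tier.
        return {"plan": "ad-supported", "price_inr": 0,
                "language": v.preferred_language}
    if v.price_sensitivity > 0.7:
        # Heavy but price-sensitive viewers: a cheaper mobile-only package.
        return {"plan": "mobile-only",
                "price_inr": round(BASE_MONTHLY_PRICE_INR * 0.5),
                "language": v.preferred_language}
    return {"plan": "premium", "price_inr": BASE_MONTHLY_PRICE_INR,
            "language": v.preferred_language}

print(personalized_offer(Viewer(40, "Tamil", 0.9)))
```

The point of the sketch is the shape of the logic, not the numbers: each viewer gets a package and a price derived from how they watch and what they can afford, instead of one blunt advertising or subscription lever for 800 million people.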

Taken together, the disruption across the three pillars of content, consumer, and commerce forms the very engine of the orange economy that the Honorable Prime Minister talks about. The global media market is nearly $3 trillion today, heading to $3.5 trillion by 2029. India’s share is currently less than 2%. AI offers us the potential to expand our share of this pie. Even a modest shift in our share of global revenue from 2% to 4% or 5% would represent tens of billions of dollars in new value creation and could be transformational for a large segment of our people. But opportunity and outcome are not the same thing. We need all stakeholders pulling in the same direction. To seize the moment, we need three commitments from everyone
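The arithmetic behind the "tens of billions" claim is easy to verify from the figures quoted in the speech (a roughly $3.5 trillion market by 2029, and a share moving from about 2% to 4-5%). A back-of-the-envelope sketch; the function name is ours, not the speaker’s:

```python
GLOBAL_MEDIA_MARKET_2029_USD = 3.5e12  # projected market size cited in the talk

def incremental_value(current_share: float, target_share: float,
                      market: float = GLOBAL_MEDIA_MARKET_2029_USD) -> float:
    """Extra annual revenue implied by growing from current_share to target_share."""
    return (target_share - current_share) * market

low = incremental_value(0.02, 0.04)   # share doubles: 2% -> 4%
high = incremental_value(0.02, 0.05)  # 2% -> 5%
print(f"Implied new value: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B per year")
```

This yields roughly $70 billion to $105 billion a year of incremental revenue, which is indeed in the "tens of billions" range the speaker cites.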

in this country and in this room. First: disrupt ourselves or be disrupted. I’ve seen this movie before. When we introduced digital newsrooms, senior editors resisted. When streaming arrived, traditional broadcasters looked the other way. The pattern is almost always the same: incumbents defend the fortress until the walls come down and they are buried under them. We cannot afford the same mistake. Right now, we have an advantage the West does not: the freedom to move, the lack of baggage. Hollywood is approaching AI defensively, paralyzed by legal battles and locked in protectionist reflexes. Its incumbents are conflicted, held back by the legacy value they have accumulated. Luckily, we don’t have such liabilities. We can design the revenue models that actually work for everyone.

The writers, the actors, the technicians, and the producers. This does not have to be a zero-sum game; it is a larger pie, and everybody must share it fairly and squarely. We can set the global precedent, but only if we lead with ambition rather than anxiety. Secondly, India must become the global hotbed for AI-native creative talent. The most valuable person in tomorrow’s media industry is not a pure technologist, and not a traditional artist, but a blend of both: someone who can conceive a world-class story and command the AI tools to bring it to life. We have the deepest creative traditions and the sharpest engineering minds. The task now is to fuse them seamlessly, through a relentless focus on skilling and upskilling at scale, so that the world looks to India for this exact kind of talent.

And finally, policy must be an accelerator. In this early stage of our growth and ambition, it should not become a brake. Our creators do not need a roadmap handed to them; they simply need the obstacles removed, because these are early days. The guardrails we set now will have a massive multiplier effect on our competitiveness in the future. As we shape these frameworks, we must resist the temptation to import Western regulatory constructs wholesale. Look at China: it has been very clear-eyed about this. It identified exactly what it needed to outpace the West and built its regulatory approach around that goal. Our frameworks must also reflect our unique ambitions and opportunities. We are sitting in Bharat Mandapam at the first global AI summit hosted in the Global South.

This is significant in a way that goes far beyond symbolism. For too long, the intersection of technology and media has been dominated by a handful of countries and companies. The tools were always made elsewhere. The platforms were built elsewhere. The rules were written elsewhere. AI changes that equation forever. Everybody is starting at the same place, as far as application to this sector is concerned. When the barriers across the entire value chain collapse, the advantage may shift decisively: away from those with the deepest pockets and towards those with the deepest wells of entrepreneurship, creativity, and adoption of technology. And no country on earth is better positioned for that shift than India. The question before us today is not whether India can become the global media powerhouse of the AI age.

It is whether we will move fast enough to claim the position that rightfully belongs to us. I believe we will. The energy and the ambition of this country always give me hope. The stories have always been here. Now the scale of our market and the power of our technology have finally aligned, and the race has just begun. This technology is the ultimate leveler. Let us not just participate in this new era; let us shape it and lead it. Thank you very much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (2)
Confirmed (high)

“Speaker congratulated Prime Minister Modi for the AI summit and praised the India AI team for delivering a spectacular summit that “could not have arrived sooner.””

The knowledge base records multiple speakers congratulating Prime Minister Modi on the summit and describing it as a defining, spectacular moment for India’s AI journey, confirming the congratulatory remarks and praise for the event’s timeliness [S47] and [S48].

Confirmed (high)

“Speaker said he would not add to the “good‑versus‑evil” debate on AI readiness, choosing instead to focus on AI’s opportunities for Indian media and entertainment.”

Uday Shankar’s keynote explicitly states he will not discuss the technology debate or whether India is ready, matching the speaker’s claim to avoid the “good-versus-evil” discussion and focus on transformational opportunities [S7] and [S8].

External Sources (52)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Including the excluded: how can small and micro businesses be supported toward success in e-commerce? (ITC) — To navigate the growth in the e-commerce market, businesses should prepare for market diversification. It is crucial for…
S5
The Geoeconomics of Energy and Materials/ DAVOS 2025 — Fatih Birol: Many thanks, Megan, and also framing it in a perfect way, the discussion, and you mentioned a lot of, ri…
S6
The history of computer viruses: Journey back to where it all began! — Once confined to the realms of theoretical science and speculative fiction, computer viruses have morphed into one of th…
S7
Keynote by Uday Shankar Vice Chairman_JioStar India — This comment deepens the analysis by introducing a psychological dimension to India’s global competitiveness challenges….
S8
Keynote by Uday Shankar Vice Chairman_JioStar India — Shankar identifies key structural barriers preventing India from becoming a global content powerhouse despite its domest…
S9
Keynote-Bejul Somaia — From Scarcity to Abundance: Dissolving the Talent Bottleneck Scarcity of capital. Scarcity of infrastructure. Scarcity …
S10
WS #100 Integrating the Global South in Global AI Governance — AUDIENCE: I think beyond skills programs and helping developers and people working in those industries in the click co…
S11
Europe’s rush to innovate — One argument posits that Europe is not fully capitalising on its abundance of talent. The European Research Council (ERC…
S12
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 1 contends that we are transitioning from passive consumption to active creation, enabled by AI-powered video ge…
S13
Sticking with Start-ups / DAVOS 2025 — Bhatnagar explains how AI is transforming content creation and enabling new business models. He highlights the reduced c…
S14
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — Building on this transformation framework, Tewari focused on commerce as one of the most significant areas for AI-driven…
S15
Conversation: 01 — Artificial intelligence
S16
AI to boost India’s media and entertainment sector — AIcould boostrevenues by 10% and reduce costs by 15% for media and entertainment firms, according to a report by EY, unv…
S17
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — Industry should act proactively without waiting for regulation, taking accountability for harmful impacts
S18
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 1’s presentation represents a masterful progression from current state analysis to future vision, punctuated by …
S19
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S20
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Authorities and independent media will lag behind while malicious actors remain behind. one step ahead. Accountability w…
S21
Artificial intelligence: a catalyst for scientific discovery and advancement — While concerns about AI’s dangers abound, experts believe that it can greatly accelerate scientific progress and lead to…
S22
Keynotes — Oleksandr Bornyakov: Dear ladies and gentlemen, I’m honored to represent Ukraine today here in Strasbourg in the heart o…
S23
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — This comment was particularly insightful because it revealed the existential nature of the talent shortage for industry …
S24
India’s AI roadmap could add $500 billion to economy by 2035 — According to the Business Software Alliance, Indiacould addover $500 billion to its economy by 2035 through the widespre…
S25
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Evidence:All business objectives and growth objectives for next 10 years require talent pipeline development; industry w…
S26
Translation — customs supervision, and inspection and quarantine management and improve the level of trade and investment facilitation…
S27
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S28
Secure Finance Risk-Based AI Policy for the Banking Sector — -India’s Strategic AI Positioning: Discussion centered on how India should position itself globally in AI governance, le…
S29
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s Strategic AI Positioning: Discussion centered on how India should position itself globally in AI governance, lev…
S30
Indias Roadmap to an AGI-Enabled Future — This discussion focused on India’s path to building an AGI-enabling ecosystem, examining the critical pillars of energy,…
S31
Keynote by Uday Shankar Vice Chairman_JioStar India — Summary:There is recognition that technology has been the fundamental catalyst transforming India’s media industry from …
S32
Keynote by Uday Shankar Vice Chairman_JioStar India — AI provides India a once -in -a -generation opportunity to become the creative capital of the world. Not just the back o…
S33
9821st meeting — Ecuador will continue to advocate for artificial intelligence systems which are designed and used, with absolute respect…
S34
Artificial intelligence: a catalyst for scientific discovery and advancement — While concerns about AI’s dangers abound, experts believe that it can greatly accelerate scientific progress and lead to…
S35
Keynotes — Oleksandr Bornyakov: Dear ladies and gentlemen, I’m honored to represent Ukraine today here in Strasbourg in the heart o…
S37
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Paola Galvez: Thank you, Ananda. Hello, everyone. Thank you so much for joining us to this very, very critical conver…
S38
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — This comment was particularly insightful because it revealed the existential nature of the talent shortage for industry …
S39
India’s AI roadmap could add $500 billion to economy by 2035 — According to the Business Software Alliance, Indiacould addover $500 billion to its economy by 2035 through the widespre…
S40
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Triolo emphasizes that the collaboration between government policy support, academic research and training capabilities,…
S41
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-data-sovereignty-india-ai-impact-summit — My closing remarks. One, of course, I did speak about before in terms of how you treat this asset. You’ve got to treat i…
S42
Towards a Reskilling Revolution — In addition to traditional hierarchical structures, another obstacle that companies must overcome to attract and retain …
S43
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S44
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S45
Indias Roadmap to an AGI-Enabled Future — Dua argued that India could become a global compute hub, potentially processing 40-50% of the world’s data by leveraging…
S46
Indias Roadmap to an AGI-Enabled Future — This discussion focused on India’s path to building an AGI-enabling ecosystem, examining the critical pillars of energy,…
S47
Keynote-Mukesh Dhirubhai Ambani — Distinguished guests, my fellow Indians, namaste. The Global AI Impact Summit is a defining moment in India’s tech histo…
S48
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Congratulations, Prime Minister Modi, on such an incredible summit. It was so incredible to see all of the who’s who, as…
S49
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Minister Vaishnav, Excellencies, ladies and gentlemen, let me begin by giving our thanks and expressing our sincere appr…
S50
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — -His Honorable Prime Minister, Mr. Narendra Modi: Prime Minister of India (mentioned but did not speak in this transcrip…
S51
Open Internet Inclusive AI Unlocking Innovation for All — Rajan Anandan provided a compelling counter-narrative to the prevailing assumption that countries must compete directly …
S52
All hands on deck to connect the next billions | IGF 2023 WS #198 — Disney is driving global demand for its content through its streaming service, Disney Plus, by offering a wide range of …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
11 arguments · 151 words per minute · 2358 words · 935 seconds
Argument 1
Rapid market growth and diversification
EXPLANATION
India’s media and entertainment sector has expanded dramatically over the past 25 years, becoming one of the largest markets globally. The industry now serves a massive, multilingual audience with a wide variety of content formats.
EVIDENCE
The speaker cites that the sector grew from a few-billion-dollar industry to the fifth largest media market in the world, contributing over $30 billion to the economy, expanding to about 900 channels in dozens of languages, reaching more than 210 million TV households and over 800 million video viewers, and evolving from simple family dramas to a diverse, multilingual ecosystem [15-22].
MAJOR DISCUSSION POINT
Market growth and diversification
Argument 2
Domestic market size creates complacency and limits global ambition
EXPLANATION
The sheer size of India’s internal audience leads many firms to focus on domestic success, reducing the drive to compete internationally. This complacency hampers the industry’s ability to become a global content powerhouse.
EVIDENCE
The speaker argues that the large domestic market acts as a distraction, making companies easily satisfied with attention and business within India, which in turn limits ambition to reach global audiences [41-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Uday Shankar highlights that India’s huge domestic audience has fostered complacency, reducing the drive to compete internationally [S8].
MAJOR DISCUSSION POINT
Domestic focus as a constraint
Argument 3
Capital constraints: low production budgets compared with Hollywood
EXPLANATION
Indian productions operate with budgets that are a fraction of those in Hollywood, limiting the scale and quality of content that can be created for global markets. This financial gap creates a competitive disadvantage.
EVIDENCE
Comparative figures are provided: Hollywood studio productions cost $65-100 million on average, with tent-pole films up to $300-350 million, whereas the average Indian film costs $3-5 million, and a marquee TV episode in Hollywood can cost $20-30 million, far beyond what Indian producers can afford [47-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shankar points out the stark budget gap-Hollywood studios spend $65-100 million per film versus Indian productions of $3-5 million-creating a capital constraint for global competitiveness [S8].
MAJOR DISCUSSION POINT
Budget disparity
Argument 4
Talent abundance but under‑utilized due to funding and domestic focus
EXPLANATION
India possesses world‑class creative and technical talent, especially in areas like VFX, but limited capital and a domestic‑only market prevent these resources from being applied to Indian productions. Consequently, talent is often exported to support Western projects.
EVIDENCE
The speaker notes that India has top-tier creative and technical talent and cutting-edge VFX capabilities, yet these are deployed mainly for Western productions because Indian creators cannot afford such services given the smaller monetisation universe, leading to a narrowed content horizon [56-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A contrasting view notes that talent scarcity, especially in VFX and creative skills, is a bottleneck for the Indian media sector, suggesting the talent pool may not be as abundant as claimed [S9].
MAJOR DISCUSSION POINT
Under‑utilised talent
Argument 5
AI reduces production costs and accelerates content creation
EXPLANATION
Artificial intelligence streamlines production pipelines, cutting costs and dramatically shortening the time needed to create high‑quality content. This enables Indian media firms to compete more effectively on a global scale.
EVIDENCE
The speaker describes AI-powered production as lowering costs and unlocking capacity, citing a recent JioStar project – the 100-episode live-action series Mahabharata Ek Dharmayu – which achieved visual scale and emotional depth comparable to global productions three to five times faster than traditional methods, with significant economic efficiencies [77-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven video generation can cut production cycles from years to hours and lower costs, with EY estimating a 15% cost reduction for media firms [S12][S16][S13].
MAJOR DISCUSSION POINT
AI‑driven efficiency
Argument 6
AI enables interactive, personalized, and regionally nuanced consumer experiences
EXPLANATION
AI breaks the traditional one‑way media model, allowing for conversational discovery, interactive storytelling, and sophisticated regionalisation beyond simple dubbing. This creates richer, more engaging experiences for diverse Indian audiences.
EVIDENCE
The speaker contrasts the old monologue model (produce-receive) with AI-shattered monologue, highlighting new frontiers such as conversational discovery, interactive storytelling, and regionalisation that captures the authentic texture of India’s distinct markets [88-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shankar describes AI-shattered monologue models that enable conversational discovery, interactive storytelling, and deep regionalisation beyond simple dubbing [S7].
MAJOR DISCUSSION POINT
AI‑enhanced consumer experience
Argument 7
AI unlocks sophisticated monetization: segmentation, dynamic pricing, new value categories
EXPLANATION
AI makes granular consumer segmentation possible, enabling dynamic pricing and packaging that reflect varied economic realities across 800 million viewers. This opens up entirely new revenue streams for the media sector.
EVIDENCE
The speaker explains that the industry has historically relied on blunt advertising and subscription models, but AI enables genuine consumer segmentation, dynamic pricing, and the creation of new value categories previously unimaginable for media and entertainment [92-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
According to Shankar, AI makes genuine consumer segmentation possible and supports dynamic pricing and packaging that reflect diverse Indian consumption patterns [S7].
MAJOR DISCUSSION POINT
AI‑driven monetisation
Argument 8
AI can raise India’s share of the global media market from <2% to 4‑5%
EXPLANATION
With AI lowering barriers and expanding capabilities, India could double or more its current sub‑2% share of the nearly $3 trillion global media market, translating into tens of billions of dollars in new value creation.
EVIDENCE
The speaker cites the global media market size of nearly $3 trillion (projected $3.5 trillion by 2029), India’s current share of less than 2%, and argues that moving to a 4-5% share would generate tens of billions of dollars in additional value [99-103].
MAJOR DISCUSSION POINT
Potential market share growth
Argument 9
Industry must proactively disrupt itself rather than wait for external disruption
EXPLANATION
Media companies need to lead the change by embracing AI now, rather than resisting as they did with digital newsrooms and streaming. Failure to self‑disrupt will leave them vulnerable to being overtaken.
EVIDENCE
The speaker urges disruption, recalling past resistance to digital newsrooms and streaming, and warns that incumbents typically defend their forts until they are buried by the very walls they built, emphasizing the need to act before external forces force change [106-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shankar urges media companies to lead self-disruption now, warning that defensive resistance will leave them vulnerable to external forces [S8][S7].
MAJOR DISCUSSION POINT
Self‑disruption imperative
Argument 10
Build AI‑native creative talent that blends storytelling with technology
EXPLANATION
Future media success will depend on professionals who combine artistic storytelling with AI technical skills. Large‑scale skilling and upskilling programmes are essential to create this hybrid talent pool.
EVIDENCE
The speaker states that the most valuable future media professional will be a blend of technologist and artist, and calls for fusing India’s deep creative traditions with its engineering strengths through massive skilling and upskilling initiatives [124-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Impact Summit stresses large-scale skilling and upskilling programmes to create hybrid technologist-artist talent for the media sector [S10][S12].
MAJOR DISCUSSION POINT
Developing AI‑native talent
Argument 11
Policy should be an accelerator, crafting India‑specific frameworks instead of importing Western models
EXPLANATION
Regulatory frameworks must support AI‑driven media growth by removing obstacles and reflecting India’s unique ambitions, rather than copying Western regulations. Tailored policies will have a multiplier effect on competitiveness.
EVIDENCE
The speaker urges policy to act as an accelerator, avoid wholesale import of Western regulatory constructs, cites China’s tailored approach as an example, and stresses that early-stage guardrails will massively boost future competitiveness [130-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions on Global South AI governance highlight the need for tailored, India-centric regulatory frameworks rather than wholesale adoption of Western models [S10].
MAJOR DISCUSSION POINT
India‑centric policy design
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains remarks from a single participant (Speaker 1) throughout the entire session [1-160]. No other speakers are recorded, and therefore there are no points of contention, partial consensus, or surprising divergences between different speakers.

None – the absence of multiple voices means the discussion is uniformly aligned with Speaker 1’s perspective, implying no direct conflict that could affect policy or strategic decisions on the topics addressed.

Takeaways
Key takeaways
India’s media and entertainment sector has grown rapidly, becoming the fifth largest globally, but remains largely domestically focused. Structural constraints—limited production budgets, capital scarcity, and under‑utilized talent—prevent Indian content from achieving global scale. Artificial intelligence can dramatically lower production costs, accelerate content creation, and unlock new creative possibilities. AI enables personalized, interactive consumer experiences and sophisticated monetization models such as granular segmentation and dynamic pricing. Leveraging AI could increase India’s share of the global media market from under 2% to 4‑5%, generating tens of billions of dollars in value. Three strategic commitments are essential: (1) industry self‑disruption, (2) development of AI‑native creative talent, and (3) policy frameworks that accelerate rather than hinder innovation.
Resolutions and action items
Industry leaders are urged to proactively adopt AI tools and redesign revenue models to stay ahead of disruption. Stakeholders should launch large‑scale skilling and upskilling programs to create a workforce that blends storytelling with AI expertise. Policymakers are asked to craft India‑specific AI regulations and guardrails that support the media sector without importing Western models wholesale.
Unresolved issues
Concrete mechanisms for attracting the large capital required to fund globally competitive productions remain undefined. Specific strategies for drawing and retaining global creative talent in India are not detailed. The exact design of AI‑driven monetization models (e.g., pricing algorithms, segmentation criteria) is left open. How to coordinate and fund the proposed skilling initiatives across industry, academia, and government is not resolved. Potential intellectual‑property and data‑privacy frameworks for AI‑generated content were mentioned but not clarified.
Suggested compromises
Adopt a hybrid regulatory approach: avoid wholesale import of Western AI rules while learning from other jurisdictions (e.g., China) to shape India‑specific policies. Encourage incumbent media firms to cooperate in the transition rather than resist, positioning disruption as a shared opportunity rather than a zero‑sum game.
Thought Provoking Comments
I am not here to talk about the technology of AI. Enough debate has happened on that and I do not want to add to the debate on whether we are ready. Whether we are ready and whether that whole debate of good versus evil.
Shifts the focus from technical feasibility and ethical debates to practical, value‑creating applications of AI for the media industry, setting a pragmatic tone for the rest of the talk.
This reframing steered the conversation away from abstract arguments and opened space for concrete discussion on how AI can be leveraged, prompting the audience to think about implementation rather than speculation.
Speaker: Speaker 1
Our big domestic market itself has been a distraction. We can get easily satisfied as long as we are getting attention and business in India, but that limits our ability to translate ambition into global competitiveness.
Challenges the common belief that India’s massive internal audience is an unalloyed advantage, exposing a paradox where domestic size creates complacency and caps growth.
This observation introduced a critical tension that led to the subsequent analysis of capital constraints and talent migration, deepening the conversation about why India has not become a global content powerhouse.
Speaker: Speaker 1
Capital constraints and a talent paradox: we have world‑class creative and technical talent, yet we cannot afford the high‑cost services that we ourselves produce for Western studios.
Links financial limitations directly to a structural talent dilemma, highlighting how the same talent that fuels global productions is underutilized domestically.
By connecting money and talent, the speaker prompted listeners to consider systemic reforms—such as new financing models or AI‑driven cost reductions—as essential for breaking the chicken‑and‑egg cycle.
Speaker: Speaker 1
AI‑powered production is not just reducing costs, it is unlocking an unprecedented capacity to produce more and offer more. Our Mahabharata Ek Dharmayu series was delivered three to five times faster than a traditional pipeline.
Provides a concrete, measurable example of AI delivering speed and cost efficiencies, turning abstract potential into tangible proof.
The case study served as a turning point, moving the dialogue from theoretical benefits to demonstrable outcomes, which reinforced the argument for AI adoption across the industry.
Speaker: Speaker 1
AI shatters the one‑directional producer‑to‑audience monologue. It allows us to create conversational discovery, interactive storytelling, and regionalization that goes beyond simple dubbing.
Introduces a new paradigm for audience engagement, suggesting that AI can transform the consumer relationship from passive consumption to active participation.
This sparked a shift toward discussing future consumer experiences, prompting the audience to envision new business models and content formats enabled by AI.
Speaker: Speaker 1
AI makes genuine consumer segmentation a reality. It enables dynamic pricing and packaging that actually reflect how people live, how they consume, and what they can afford.
Highlights AI’s potential to solve the long‑standing bluntness of advertising and subscription models in a highly heterogeneous market.
The comment broadened the conversation to include monetization strategies, leading participants to think about AI‑driven commerce as a lever for increasing market share globally.
Speaker: Speaker 1
First, disrupt ourselves or be disrupted. Incumbents defend the fortress until the walls come down and they are buried under it. We have the advantage of freedom to move, without the baggage that Hollywood carries.
Calls for proactive self‑disruption and frames India’s lack of legacy constraints as a strategic advantage, urging immediate action.
This rallying cry shifted the tone from analytical to motivational, galvanizing the audience toward collective urgency and setting up the three commitments that followed.
Speaker: Speaker 1
India must become the global hotbed for AI‑native creative talent – a blend of world‑class storytelling and AI fluency.
Identifies a future‑oriented skill set that redefines the archetype of a media professional, linking talent development directly to global competitiveness.
The statement redirected the discussion toward education, skilling, and workforce policy, influencing later remarks about policy as an accelerator rather than a roadblock.
Speaker: Speaker 1
Policy must be an accelerator, not a brake. We must resist importing Western regulatory constructs wholesale and craft frameworks that reflect India’s unique ambitions.
Challenges the default approach of mimicking Western regulations, urging a bespoke policy environment that can harness AI’s disruptive potential.
This prompted a strategic pivot toward governance, encouraging participants to think about how regulatory design can either enable or hinder the AI‑driven media renaissance.
Speaker: Speaker 1
The technology is the ultimate leveler. Everyone is starting at the same place; when barriers collapse, advantage shifts from deepest pockets to deepest wells of entrepreneurship, creativity, and adoption.
Summarizes the overarching thesis that AI democratizes the media value chain, positioning India to leapfrog traditional power structures.
Served as a concluding turning point, reinforcing the earlier arguments and leaving the audience with a clear, optimistic vision that framed the entire discussion.
Speaker: Speaker 1
Overall Assessment

Speaker 1’s remarks systematically reframed the AI‑media conversation from abstract debate to actionable strategy. By exposing the paradox of domestic size, linking capital and talent constraints, and showcasing concrete AI successes, the speaker introduced new analytical lenses that deepened the dialogue. Calls for self‑disruption, AI‑native talent, and tailored policy shifted the tone from descriptive to prescriptive, galvanizing the audience toward collective commitment. These pivotal comments steered the discussion toward concrete opportunities—speedier production, interactive consumer experiences, and nuanced monetization—while positioning India as a potential global media leader in the AI era.

Follow-up Questions
What can AI do for the Indian media industry that we are not already doing?
Identifies the core gap that AI could fill beyond current practices, guiding strategic focus for the sector.
Speaker: Speaker 1
Why has India not broken through as a global content powerhouse despite its large domestic market?
Seeks to understand structural and market factors limiting global reach, essential for formulating corrective policies.
Speaker: Speaker 1
How can India attract the capital needed to compete with global studios?
Addresses the financing bottleneck that restricts production budgets and limits ability to create globally competitive content.
Speaker: Speaker 1
What strategies can be employed to attract and retain global creative and technical talent?
Targets the talent shortage that hampers the creation of high‑quality content and the deployment of AI tools at scale.
Speaker: Speaker 1
What policy frameworks are required to accelerate AI adoption without stifling innovation?
Calls for a regulatory environment that acts as an accelerator rather than a barrier, shaping the future competitiveness of the industry.
Speaker: Speaker 1
Assess the potential impact of AI‑driven production on cost reduction and speed, using case studies such as the Mahabharata Ek Dharmayu series.
Needs empirical data to quantify AI’s efficiency gains and validate its business case for wider industry adoption.
Speaker: Speaker 1
Develop AI‑enabled consumer segmentation, dynamic pricing, and packaging models for India’s heterogeneous audience.
Explores new monetisation levers that could unlock revenue from diverse consumer groups beyond traditional advertising and subscription.
Speaker: Speaker 1
Design scalable skilling and upskilling programmes to create AI‑native creative talent that blends storytelling and technology.
Ensures a pipeline of professionals capable of leveraging AI tools, addressing the talent gap identified earlier.
Speaker: Speaker 1
Measure India’s current and projected share of the global $3‑3.5 trillion media market and model scenarios for achieving a 4‑5 % share.
Provides a quantitative benchmark to assess the economic upside of AI‑driven growth and to set realistic targets.
Speaker: Speaker 1
Conduct a comparative analysis of regulatory approaches (e.g., China vs. Western models) to inform India’s AI media policy.
Aims to craft a uniquely Indian regulatory framework that leverages best practices while avoiding unsuitable foreign models.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Mathias Cormann OECD Secretary-General India AI Impact

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened at the India AI Impact Summit, where OECD Secretary-General Mathias Cormann highlighted the organization’s role in guiding global AI policy and thanked India for its leadership [1][2][3]. He emphasized that the OECD provides evidence-based analysis to promote responsible AI innovation while managing risks [4]. Cormann said AI could raise labour productivity by up to one percentage point annually across OECD and G20 economies over the next decade [9], and linked this potential to massive private investment, citing roughly three-quarters of a trillion dollars planned by major tech firms this year [10]. Emphasising the need for public policy, he argued that foundational technologies such as internet connectivity and semiconductors were shaped by earlier policy interventions [11-12].


The OECD’s current work includes mapping the AI ecosystem, tracking public compute capacity and global AI venture-capital flows, which now account for 61 % of worldwide VC investment [14-16]. Its recent report on AI agents found that half of developers intend to use them, but highlighted gaps in security, privacy and accuracy that must be addressed [19]. To monitor risks, the OECD recorded a rise in reported AI incidents from 92 to 324 per month between 2022 and 2025 and promotes a common incident-reporting framework [21-22]. Policymakers are given benchmarking tools such as the newly released OECD AI Index and an upcoming interactive toolkit that will showcase international best practices [24-25].


The organization also coordinates international cooperation through its Global Partnership on AI, recently expanding membership to 46 countries [26-28]. For the private sector, the OECD issued a due-diligence guide and is updating the Hiroshima AI Process Code of Conduct to aid small- and medium-sized enterprises [30-32]. Recognising labour impacts, Cormann estimated that 27 % of jobs are at high risk of automation and pointed out that only 23 % of adults with low literacy receive AI training, underscoring the need for flexible, targeted upskilling [35-38]. In partnership with the ILO, the OECD released an Equitable AI Transitions Playbook to help governments, industry and workers manage the shift responsibly, and the session concluded with Speaker 2 thanking the Secretary-General and introducing the next panel on data sovereignty [39][41-44].


Keypoints

Major discussion points


AI’s transformative economic potential – The OECD estimates AI could raise labor productivity by up to one percentage point annually across OECD and G20 economies, driving greater efficiency, lower costs and higher living standards, while private-sector investment in AI infrastructure approaches three-quarters of a trillion dollars this year[8-10].


OECD’s data-driven support for policymakers – The organisation tracks the global AI ecosystem, including public compute capacity and venture-capital flows (now 61 % of worldwide VC, $259 bn, with the U.S. capturing 75 % of AI deal value) and publishes analyses such as the “Agentic AI landscape” report to keep governments abreast of emerging technologies[14-19].


Systematic monitoring of AI-related risks – By collecting incident data (rising from 92 to 324 reports per month between 2022-2025) and offering a common reporting framework, the OECD helps countries classify and manage safety, privacy and accuracy hazards[20-23].


Workforce impacts and equitable transition – About 27 % of jobs are in occupations at high automation risk; participation in AI training is markedly lower for adults with low literacy (23 % vs 61 %). The OECD, together with the ILO, has produced an “Equitable AI Transitions Playbook” to guide up-skilling, reskilling and policy measures for a fair AI-driven shift[34-40].


Practical tools and international coordination – Recent releases include the OECD AI Index, an interactive toolkit of global best practices, the integrated global partnership on AI, and guidance for companies (e.g., the AI Due Diligence Guidance and the updated Hiroshima AI Process Code of Conduct) to foster responsible adoption across governments, industry and civil society[24-28][30-33].


Overall purpose / goal


The speech aims to showcase how the OECD leverages evidence-based analysis and international cooperation to help governments, businesses and workers harness AI’s benefits while mitigating its risks. By presenting data, risk-monitoring mechanisms, policy tools and collaborative frameworks, the OECD seeks to steer a responsible, inclusive AI transition worldwide.


Tone of the discussion


The tone is largely optimistic and forward-looking, emphasizing the “great opportunities” and “transformative” impact of AI. It remains evidence-driven and collaborative, highlighting the OECD’s role as a facilitator. Mid-speech the tone shifts to a more cautionary note when addressing risks such as incident spikes and job displacement, but it stays constructive, stressing the need for coordinated policy, training and equitable measures rather than expressing alarm. Overall, the discourse moves from confidence in AI’s promise to a balanced call for responsible stewardship.


Speakers

Mathias Cormann – Secretary General, OECD; expertise in international policy and AI governance. [S4]


Speaker 2 – Moderator/Host of the session; specific expertise not mentioned. [S1]


Additional speakers:


Sunil Gupta – Managing Director and Chief Executive Officer, Yota Data Services; expertise in data services and AI.


Nisubo Ongama – Chief Operating Officer, Kala; expertise in operations within the AI/technology sector.


Sonia Vaigando – Founders Associate, Kala Limited; expertise in startup development and AI initiatives.


Seema Ambasta – Chief Executive Officer, L&T, Vioma; expertise in AI technology leadership.


Orgo Sengupta – Founder and Research Director, WIDI Center for Legal Policy; expertise in legal policy and AI governance.


Full session report: Comprehensive analysis and detailed insights

The India AI Impact Summit opened with OECD Secretary-General Mathias Cormann thanking India for its leadership in convening the global AI community after successful meetings in the United Kingdom, Korea and France, and reaffirming the OECD’s commitment to support policymakers, businesses and citizens worldwide with evidence-based analysis and guidance on responsible AI innovation [1-4].


Cormann then quantified AI’s macro-economic promise, noting that the OECD estimates a strong level of adoption could lift labour productivity by up to one percentage point per year across OECD and G20 economies over the next decade. He linked this potential to the unprecedented scale of private-sector investment, pointing out that almost three-quarters of a trillion dollars in AI infrastructure is slated to be spent by major technology firms this year alone [8-10]. He argued that such rapid technological change cannot be left to market forces alone; effective public policy is essential to capture the benefits and manage the risks, just as earlier policy interventions underpinned the development of the internet, semiconductors and global supply chains [11-13].


To help governments navigate the fast-evolving AI landscape, the OECD provides a data-driven service that maps the AI ecosystem. It tracks the global distribution of public AI compute capacity to inform industrial-strategy decisions and to assess supply-chain security, and it monitors worldwide AI venture-capital flows – now accounting for 61 % of total VC investment ($259 bn) and dominated by the United States, which captures 75 % of AI deal value [14-18]. These analyses are refreshed regularly; for example, the recent “Agentic AI landscape” report found that half of surveyed developers intend to use AI agents, while highlighting the need for progress on security, privacy and accuracy before broader adoption can occur [19].


Risk monitoring is another pillar of the OECD’s work. Between 2022 and 2025 the number of AI-related incidents reported in the media rose sharply from an average of 92 to 324 per month, prompting the organisation to develop a common framework for AI-incident reporting that promotes global consistency and interoperability [20-23]. This systematic data collection enables policymakers to classify emerging hazards and to design targeted mitigation measures.


Building on the incident data, the OECD has released practical tools for policy benchmarking. The OECD AI Index provides an evidence-based instrument for assessing national progress against the OECD Recommendation on AI; later this year an interactive toolkit will launch, offering a curated repository of international best practices for peer learning [24-25].


International coordination is pursued through the integrated Global Partnership on AI (GPAI). The partnership, which convenes governments, industry and civil society, recently welcomed Malta and Saudi Arabia, expanding its membership to 46 countries across six continents and reinforcing a multilateral framework for responsible AI development, grounded in the OECD’s landmark AI Principles [26-28].


For the private sector, the OECD supports responsible adoption via the reporting framework for the Hiroshima AI Process Code of Conduct, launched at the AI Action Summit in Paris last year and now being updated to accommodate small and medium-sized enterprises. In addition, the organisation issued a due-diligence guidance for responsible AI, helping companies navigate an increasingly complex landscape of regulations, voluntary standards and stakeholder expectations [30-33].


The OECD also provides recommendations for governments, business, labour and other stakeholders to ensure inclusive participation in AI [29].


Cormann addressed the labour-market implications of rapid AI diffusion. He estimated that roughly 27 % of current employment is in occupations at the highest risk of automation, and highlighted a stark disparity in AI-training participation: only 23 % of adults with low literacy engage in relevant training compared with 61 % of those with higher literacy. He argued that upskilling must become more flexible, modular and tailored to individual circumstances to ensure inclusive access [34-38].


In partnership with the International Labour Organisation, the OECD produced the “Equitable AI Transitions Playbook”, which offers concrete policy examples for updating skills frameworks and for up-skilling and reskilling workers to harness AI’s benefits while mitigating displacement and other social risks [39-40].


The session concluded with Speaker 2 thanking the Secretary-General, expressing appreciation for the remarks, and introducing the next panel on data sovereignty, moderated by Mr Orgo Sengupta and featuring Mr Sunil Gupta (Yota Data Services), Nisubo Ongama (Kala Limited), Ms Seema Ambasta (Vioma), among others [41-47].


Overall, the address underscored a balanced narrative: AI promises substantial productivity gains and massive investment flows, yet it also brings rising safety incidents and uneven workforce impacts. The OECD positions itself as a facilitator of evidence-based policy, offering tools such as the AI Index, incident-monitoring framework and the Equitable AI Transitions Playbook, and championing international cooperation through the GPAI Council. Unresolved challenges include improving AI-training uptake among low-literacy adults, curbing the surge in AI-related incidents, strengthening compute-capacity strategies for supply-chain security, and ensuring that new OECD tools are effectively adopted by member states. These issues set the agenda for forthcoming discussions, including the data-sovereignty panel that follows [41-47].


Session transcript: Complete transcript of the session
Mathias Cormann

India AI Impact Summit. And thank you to India for your leadership in bringing together the global AI community following the successful summits in the United Kingdom, Korea, and France. The OECD is proud to work with you and support policymakers, people, and businesses all around the world in harnessing the benefits of AI. And we do so with our unique data, evidence-based analysis, and policy guidance, aiming to promote responsible innovation and adoption while managing the potential risks along the way. In yesterday’s discussions, we heard about the wide-reaching potential impacts of AI development on our economies and societies. And of course, they continue to evolve as adoption accelerates and new applications are introduced. But one thing is clear.

These impacts are already transformative and will become more so going forward. At the OECD, we estimate that with a strong level of adoption, AI could boost labor productivity by up to one percentage point every year across OECD and G20 countries over the next decade. Greater efficiency, lower costs, higher living standards. The opportunities are also reflected in the scale of investment in AI infrastructure, with almost three quarters of a trillion dollars in investment planned by big tech companies this year alone. Amid the rapid technological change and the massive investment flows, effective public policy is essential to allow AI to reach its full potential. Indeed, the foundational technologies that made this technological revolution possible were very much shaped and supported by public policy, from internet connectivity to semiconductors, supply chains, and everything in between.

Today, the OECD helps policymakers develop pro-innovation, pro-adoption, and pro-safety AI policies, drawing on the lessons of these previous interventions, sharing experiences at the cutting edge of AI policy, and identifying policy best practice. First, the OECD helps policymakers understand how AI technologies and business models are evolving and who the key players are in the AI ecosystem. We are tracking the global distribution of public AI compute capacity to help countries design their industrial strategies and assess opportunities to enhance AI supply chain security. We are also tracking global AI investment, with our analysis released earlier this week showing that 61 % of all venture capital investment worldwide, or $259 billion US, now goes to AI firms, which is up from just 30 % three years ago.

Firms in the United States attract the largest share of venture capital by a wide margin, comprising 75 % of global AI venture capital deal value. Our analysis is also helping policymakers keep up with the latest technological developments. Our new report on the agentic AI landscape, published last week, highlights that half of developers in recent surveys plan to use AI agents in their work, while identifying the need for progress on security, privacy and accuracy of AI agents to support further adoption. Second, we help policymakers track and classify AI-related risks. Our data on AI incidents shows that between 2022 and 2025, in just three years, the number of AI

incidents and hazards reported by the media increased dramatically, from 92 to 324 per month on average. The OECD common framework for reporting AI incidents helps promote global consistency and interoperability in AI incident reporting. And thirdly, we help policymakers benchmark their AI policies relative to their peers and international standards. Just yesterday, we released the OECD AI Index, which provides policymakers with an evidence-based tool to assess their progress in implementing the OECD Recommendation on AI. We will also launch an interactive toolkit this year, which will feature a repository of good practices from around the world to support evidence-based peer learning. Fourth, we help governments coordinate their efforts internationally. Our integrated Global Partnership on AI was designed to promote the responsible development and use of artificial intelligence grounded in the OECD’s landmarks.

Thank you. The GPAI Council, which we meet later this morning, will officially welcome our two newest members, Malta and Saudi Arabia, bringing GPAI’s membership to 46 countries across six continents. Beyond governments, we also provide analysis and recommendations to support AI adoption by companies. The reporting framework for the Hiroshima AI Process Code of Conduct, launched at the AI Action Summit in Paris last year, promotes transparency and accountability for responsible AI innovation. We’re now updating that framework to support adoption by small and medium-sized enterprises. And yesterday, we published the OECD due diligence guidance for responsible AI, which supports companies around the world in navigating a growing landscape of rules, regulations, and voluntary frameworks.

And we support people by providing recommendations for governments, business, labor, and other stakeholders to work together and to ensure everyone has the best possible opportunity to participate in and benefit from AI technologies. While AI adoption offers many exciting opportunities, it also carries the risk of job displacement for some. We estimate that taking the effects of AI into account, about 27 % of employment is in occupations that are at the highest risk of automation. It will be particularly important to ensure access to training opportunities for those who need them most. And on that front, our analysis shows that among adults with low literacy skills, only 23 % participate in relevant AI training, compared with 61 % of adults with higher literacy skills.

To improve participation in AI training among adults, learning needs to be more flexible, modular, and targeted to individual circumstances and job experiences. For this summit, together with the International Labour Organization, we have developed the Equitable AI Transitions Playbook, which provides examples of policies to update skills frameworks as well as initiatives to upskill and reskill workers for an equitable AI transition. In closing: to fully harness the enormous benefits and opportunities flowing from AI, while mitigating and managing some of the associated risks and disruptions, we need to ensure governments, industry, labor, and experts work together to support responsible adoption. The OECD will continue to support this cooperation, guided by our AI principles, so that AI

Speaker 2

Thank you so much, Secretary General of the OECD; we are very grateful for your remarks. For the next panel on data sovereignty, we have Mr. Sunil Gupta, Managing Director and Chief Executive Officer, Yota Data Services; Nisubo Ongama, COO, Kala Limited; Sonia Vaigando, Founders Associate, Kala Limited; and Ms. Seema Ambasta, Chief Executive Officer, L&T, Vioma. This session is being moderated by Mr. Orgo Sengupta, Founder and Research Director, WIDI Center for Legal Policy. May I request all the dignitaries to come up on stage, please.

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“Mathias Cormann thanked India for its leadership in bringing together the global AI community after successful summits in the United Kingdom, Korea and France, and reaffirmed OECD’s commitment to support policymakers, businesses and citizens worldwide with evidence‑based analysis and guidance on responsible AI innovation.”

The keynote transcript records Cormann’s opening remarks thanking India and referencing the UK, Korea and France summits, and stating the OECD’s pride in supporting policy, confirming the report’s description.

Correction (medium confidence)

“Almost three‑quarters of a trillion dollars in AI infrastructure is slated to be spent by major technology firms this year.”

Panel discussion notes indicate that “we’re spending a trillion dollars this year” on AI infrastructure, suggesting the figure is $1 trillion rather than $0.75 trillion as reported.

Additional Context (medium confidence)

“Rapid technological change cannot be left to market forces alone; effective public policy is essential to capture benefits and manage risks, similar to earlier interventions for the internet, semiconductors and supply chains.”

Other OECD remarks emphasize that private‑sector investment is necessary because infrastructure needs exceed government capacity, underscoring the need for public policy alongside market activity.

Additional Context (medium confidence)

“OECD monitors worldwide AI venture‑capital flows, now accounting for 61 % of total VC investment ($259 bn) and dominated by the United States, which captures 75 % of AI deal value.”

Analyses of AI compute capacity note the United States’ significant advantage and leading investment in data‑centre construction, providing context for US dominance in AI VC flows, though exact percentages are not specified in the source.

Additional Context (low confidence)

“OECD developed a common framework for AI‑incident reporting to promote global consistency and interoperability, in response to a rise in AI‑related media incidents from 92 to 324 per month between 2022‑2025.”

OECD discussions confirm the creation of a common AI‑incident reporting framework and a focus on interoperability, but the source does not contain the specific incident‑frequency statistics cited in the report.

External Sources (63)
S1
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S2
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S3
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S4
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — -Mathias Cormann- Secretary General, OECD (Organisation for Economic Co-operation and Development) -Moderator- Role: Ev…
S5
Policymaker’s Guide to International AI Safety Coordination — -Mathias Cormann- Secretary General of the OECD
S6
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — 1030 words | 136 words per minute | Duration: 452 secondss India AI Impact Summit. And thank you to India for your lead…
S7
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S8
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S9
Democratizing AI Building Trustworthy Systems for Everyone — Sure. Thank you, Justin, and it’s a pleasure to be here with the panel and the audience today. So I think our announceme…
S10
Democratizing AI Building Trustworthy Systems for Everyone — Private sector investment is necessary due to the scale of infrastructure needs that cannot be met by governments alone
S11
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S12
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — During a session focused on the impact of digitalisation on employment, experts from the International Labour Organisati…
S13
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — ## Next Steps and Commitments ### Bias and Inclusivity ### International Cooperation ### OECD Research Findings – Re…
S14
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Huang describes AI as a five-layer system starting with energy at the bottom, followed by chips and computing infrastruc…
S15
AI drives productivity surge in certain industries, report shows — A recent PwC (PricewaterhouseCoopers International Limited) report highlights that sectors of the global economy with high…
S16
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — AI is driving exceptional economic growth in the United States, with economists predicting 3-4% growth and techno-econom…
S17
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — The IMF calculated that AI has potential to provide up to 0.8% boost to global growth over the coming years, which would…
S18
Open Forum #30 High Level Review of AI Governance Including the Discussion — Lucia Russo: Thank you, Yoichi. Good morning and thank you my fellow panelists for this interesting discussion. As Yoich…
S19
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatory released a beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S20
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — Gallia Daor: Sure. Thank you. So indeed, in 2019, the OECD was the first intergovernmental organization to adopt principles…
S21
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — Thank you, Ambassador. And on behalf of the OECD, I just want to thank once again the Netherlands for the leadership in …
S23
Ad Hoc Consultation: Thursday 8th February, Morning session — Although no explicit evidence or arguments were cited in the speaker’s remarks, the strong endorsement for the language …
S24
Geneva Manual exercise group 2 — Due to the succinctness of Orhan’s remarks, it is evident that he did not present any substantial arguments or detailed …
S25
Any other business /Adoption of the report/ Closure of the session — An acknowledgment followed concerning the multitude of discussions in informal settings, illustrating the commitment of …
S26
Powering AI Global Leaders Session AI Impact Summit India — This analogy is particularly insightful because it demonstrates how the same transformative technology can lead to compl…
S27
AI and Data Driving India’s Energy Transformation for Climate Solutions — “that analysis-based decision-making has to be adopted.”[13]. “And so for that we need for the right public policy, we…
S28
AI for Social Empowerment_ Driving Change and Inclusion — Arguments: Urgent need for comprehensive policy responses including competition policy, tax policy, labor law reforms, an…
S29
Open Forum #30 High Level Review of AI Governance Including the Discussion — ### OECD’s Evolution and Approach Lucia Russo: Thank you, Yoichi. Good morning and thank you my fellow panelists for th…
S30
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — ### Country Implementation Examples ### Initiative Background ### International Cooperation ### OECD Research Finding…
S31
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Furthermore, the G7 delegates are focused on creating a report that specifically examines the risks, challenges, and opp…
S32
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Huang describes AI as a five-layer system starting with energy at the bottom, followed by chips and computing infrastruc…
S33
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — AI is driving exceptional economic growth in the United States, with economists predicting 3-4% growth and techno-econom…
S34
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — The IMF calculated that AI has potential to provide up to 0.8% boost to global growth over the coming years, which would…
S35
From Innovation to Impact_ Bringing AI to the Public — Sharma’s central thesis positions AI not as a threat to employment but as a productivity multiplier that will enable Ind…
S36
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — India AI Impact Summit. And thank you to India for your leadership in bringing together the global AI community followin…
S37
Open Forum #30 High Level Review of AI Governance Including the Discussion — Lucia Russo: Thank you, Yoichi. Good morning and thank you my fellow panelists for this interesting discussion. As Yoich…
S38
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Lucia Russo: Okay. Thank you. I can start with that, and as I mentioned, we have at the OECD the public government d…
S39
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — All speakers emphasized the critical importance of international cooperation for successful AI development. The OECD too…
S40
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — Cormann outlined the OECD’s comprehensive approach to supporting policymakers through four key areas. First, the organis…
S41
AI Meets Cybersecurity Trust Governance & Global Security — Building confidence and security in the use of ICTs | Artificial intelligence | Data governance Building trust through …
S42
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — Gallia Daor: Sure. Thank you. So indeed, in 2019, the OECD was the first intergovernmental organization to adopt principles…
S43
eTrade for all leadership roundtable: Unlocking digital trade for inclusive development — While there is fear of job substitution by AI, the ILO suggests this concern may be overemphasized. A study conducted by…
S44
AI in practice across the UN system: UN 2.0 AI Expo — The UN 2.0 Data & Digital Community AI Expo examined how AI is currently embedded within the operational, analytical and i…
S45
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Thank you for inviting me to this important summit. It is an honor to be here in India at this pivotal moment for global…
S46
AI will have a significant impact on jobs, the OECD said — According to the Organisation for Economic Co-operation and Development (OECD), more than a quarter of jobs in their mem…
S47
Panel Discussion Inclusion Innovation & the Future of AI — I believe that what we are doing is building systems that are going to be smarter than humans at all cognitive labor… …
S48
Artificial super intelligence predicted by 2035 — SoftBank CEO Masayoshi Son has reaffirmed his belief that artificial super intelligence (ASI) will become a reality by 203…
S49
Software.gov — Keeping pace with rapidly evolving technology is a challenge that governments face. Bogdan-Martin emphasizes that techno…
S50
WS #53 Promoting Children’s Rights and Inclusion in the Digital Age — – The rapid pace of technological change outpacing policy development
S51
WS #64 Designing Digital Future for Cyber Peace & Global Prosperity — Rapid pace of technological change outpacing policy frameworks
S52
Global AI Governance: Reimagining IGF’s Role & Impact — Nobile provided a key reframing of AI governance challenges, arguing that “the debate is not humans versus machines but …
S53
Creating Eco-friendly Policy System for Emerging Technology — Ingrid Volkmer:Could you bring up my slides, please? Okay. Hi, everyone. My name is Ingrid Volkmar. I’m a professor at t…
S54
Mind the AI Divide: Shaping a Global Perspective on the Future of Work — A limited number of countries are leading the way in developing compute capacity, while many others are beginning from a…
S55
Why science matters in global AI governance — Artificial intelligence | Monitoring and measurement | Capacity development
S56
State of play of major global AI Governance processes — Juha Heikkila:Thank you very much, and thank you very much indeed for the invitation to be on this panel. So indeed the …
S57
AI and the future of digital global supply chains (UNCTAD) — In conclusion, AI has emerged as a powerful tool that can significantly impact trade logistics. It can optimize routes a…
S58
Survey finds developers value AI for ideas, not final answers — As AI becomes more integrated into developer workflows, a new report shows that trust in AI-generated results erodes. Acc…
S59
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S60
AI agents offer major value but trust and data gaps remain — AI agents coulddrive up to $450 billion in economic value by 2028, according to new research by Capgemini. The gains wou…
S61
WS #98 Towards a global, risk-adaptive AI governance framework — Lucia Russo: First of all, let me thank you for organizing this very important session, and welcome all the other spe…
S62
WS #102 Harmonising approaches for data free flow with trust — Clarisse Girot: Thank you, Tymia. And hi, everybody. Good morning, good afternoon, wherever you are. Thanks very muc…
S63
OECD DIGITAL ECONOMY PAPERS — The OECD approaches digital security through the framework of risk management. Risk can be defined as ‘the effect of un…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Mathias Cormann
6 arguments | 136 words per minute | 1030 words | 452 seconds
Argument 1
Economic and productivity benefits of AI – AI could boost labor productivity by up to 1 % per year across OECD and G20 countries (Mathias Cormann)
EXPLANATION
Mathias Cormann argues that widespread adoption of artificial intelligence can raise labour productivity by roughly one percentage point each year in OECD and G20 economies. This boost would translate into greater efficiency, lower production costs and higher living standards over the next decade.
EVIDENCE
He cites the OECD’s own estimate that, with strong adoption, AI could increase labour productivity by up to one percentage point annually across OECD and G20 countries over the next ten years [9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s productivity boost is highlighted in discussions of AI enabling economic progress and productivity gains across sectors, especially in emerging markets [S8].
MAJOR DISCUSSION POINT
AI-driven productivity gains
Argument 2
Scale of AI investment and the need for effective public policy – Nearly three‑quarters of a trillion dollars in AI infrastructure investment planned this year; public policy is essential to realise AI’s full potential (Mathias Cormann)
EXPLANATION
Cormann highlights the massive scale of private sector spending on AI infrastructure, estimating close to $750 billion will be invested this year. He stresses that without clear, forward‑looking public policies, the economic and societal benefits of this investment cannot be fully captured.
EVIDENCE
He notes that “almost three quarters of a trillion dollars in investment planned by big tech companies this year alone” reflects the scale of AI infrastructure spending [10], and immediately follows with the claim that “effective public policy is essential to allow AI to reach its full potential” amid rapid technological change and massive investment flows [11].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The scale of private-sector AI infrastructure spending and the necessity of public-policy frameworks are noted in remarks about massive private investment and the need for policy to address infrastructure gaps [S10].
MAJOR DISCUSSION POINT
Investment magnitude and policy necessity
AGREED WITH
Speaker 2
Argument 3
OECD’s data‑driven support for policymakers – Tracking global AI compute capacity, venture‑capital flows, and AI‑related incidents; providing the AI Index and evidence‑based policy guidance (Mathias Cormann)
EXPLANATION
Cormann explains that the OECD supplies policymakers with a suite of data‑driven tools, including monitoring of compute capacity, venture‑capital investment, and AI incident reporting. These data underpin the OECD AI Index and other evidence‑based guidance that help governments design pro‑innovation, safe AI policies.
EVIDENCE
He describes several tracking activities: monitoring global public AI compute capacity to inform industrial strategies [15]; analysing venture-capital flows, noting that 61 % of global VC ($259 bn) now goes to AI firms, up from 30 % three years earlier [16-17]; reporting a sharp rise in AI incidents from 92 to 324 per month between 2022-2025 [21]; and announcing the release of the OECD AI Index as an evidence-based benchmarking tool [24] together with an upcoming interactive toolkit of good practices [25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
OECD’s role in tracking compute capacity, VC flows and incidents, and providing the AI Index and benchmarking tools is described in multiple OECD keynote summaries [S4] and [S6].
MAJOR DISCUSSION POINT
Data‑based policy support
Argument 4
Managing AI‑related risks and safety – Rapid rise in reported AI incidents (from 92 to 324 per month); establishment of a common incident‑reporting framework to promote consistency (Mathias Cormann)
EXPLANATION
Cormann points out that AI‑related hazards are increasing sharply, as reflected in media‑reported incidents. To address this, the OECD has introduced a common framework for AI incident reporting to ensure consistent, interoperable data across jurisdictions.
EVIDENCE
He provides the incident statistics, showing a jump from an average of 92 incidents per month in 2022 to 324 per month in 2025 [21], and then mentions that “the OECD common framework for reporting AI incidents helps promote global consistency and interoperability in AI incident reporting” [22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rise in AI incidents and the OECD’s common incident-reporting framework are mentioned in the OECD keynote overviews of safety coordination and incident reporting [S4] and [S6].
MAJOR DISCUSSION POINT
AI risk monitoring and reporting
Argument 5
Impact of AI on employment and the need for upskilling – About 27 % of jobs are at high risk of automation; low‑literacy adults have low participation in AI training, highlighting the need for flexible, modular upskilling programs (Mathias Cormann)
EXPLANATION
Cormann warns that AI adoption will displace workers, estimating that roughly a quarter of current jobs face high automation risk. He stresses that upskilling efforts must be tailored, especially for low‑literacy adults who currently have low participation rates in AI‑related training.
EVIDENCE
He states that “about 27 % of employment is in occupations that are at the highest risk of automation” [35] and adds that among adults with low literacy skills only 23 % take part in relevant AI training versus 61 % of higher-literacy adults [37]. He recommends more flexible, modular learning approaches to improve participation [38] and mentions the Equitable AI Transitions Playbook developed with the ILO as a policy resource [39-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Workforce displacement figures (≈27 % at high automation risk) and the training gap for low-literacy adults are cited in the OECD keynote on employment impacts [S4] and further discussed in a session on digitalisation and employment quality [S12].
MAJOR DISCUSSION POINT
Workforce displacement and reskilling
Argument 6
International cooperation and standards for responsible AI – Expansion of the G‑20 AI Council to 46 members; launch of the OECD AI Index, interactive toolkit, due‑diligence guidance, and the Equitable AI Transitions Playbook to foster global coordination (Mathias Cormann)
EXPLANATION
Cormann emphasizes the importance of multilateral collaboration, noting the growth of the G‑20 AI Council to 46 countries. He also lists a suite of OECD initiatives—AI Index, toolkit, due‑diligence guidance, and the Equitable AI Transitions Playbook—that aim to harmonise standards and share best practices worldwide.
EVIDENCE
He welcomes Malta and Saudi Arabia as new members, bringing G-20 AI Council membership to 46 countries across six continents [28]; references the OECD AI Index released the previous day [24]; mentions an upcoming interactive toolkit of global good practices [25]; cites the OECD due-diligence guidance for responsible AI published recently [32]; and describes the Equitable AI Transitions Playbook co-developed with the ILO that offers policy examples for upskilling and equitable transition [39-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of multilateral AI governance, the expansion of the G-20 AI Council and the rollout of the AI Index, toolkit and playbook are referenced in the OECD cooperation overview and a forum on trustworthy AI emphasizing international cooperation [S13] and [S4].
MAJOR DISCUSSION POINT
Global AI governance and coordination
AGREED WITH
Speaker 2
Agreements
Agreement Points
AI’s transformative impact and the need for effective public policy
Speakers: Mathias Cormann, Speaker 2
Scale of AI investment and the need for effective public policy – Nearly three‑quarters of a trillion dollars in AI infrastructure investment planned this year; public policy is essential to realise AI’s full potential (Mathias Cormann)
Both speakers acknowledge that AI is rapidly transforming economies and that strong public-policy frameworks are required to capture its benefits, with Mathias highlighting the massive investment and policy necessity [8-11] and Speaker 2 expressing gratitude for the Secretary-General’s remarks on the topic [41-42].
POLICY CONTEXT (KNOWLEDGE BASE)
The consensus that AI’s transformative impact requires effective public policy is echoed in the AI Impact Summit India, which highlighted how governance shapes societal outcomes of transformative technologies [S26], reinforced by Indian policy analyses stressing the need for appropriate public policy and data strategies for AI-driven transformations [S27], and further supported by calls for comprehensive policy responses, including competition, labor, and social protection, in AI for social empowerment literature [S28].
Support for OECD’s role and international cooperation on AI governance
Speakers: Mathias Cormann, Speaker 2
International cooperation and standards for responsible AI – Expansion of the G‑20 AI Council to 46 members; launch of the OECD AI Index, interactive toolkit, due‑diligence guidance, and the Equitable AI Transitions Playbook to foster global coordination (Mathias Cormann)
Both speakers endorse the OECD’s multilateral work on AI, with Mathias pointing to the growing G-20 AI Council membership and a suite of OECD tools for benchmarking and best-practice sharing [28][24-25][32][39-40], and Speaker 2 thanking the OECD Secretary-General and indicating readiness to continue collaborative discussions [41-42].
POLICY CONTEXT (KNOWLEDGE BASE)
Support for the OECD’s role and international cooperation aligns with the OECD’s own evolution in AI governance discussed at Open Forum #30, where collaborative work with Japan and multi-stakeholder groups was highlighted [S29]; subsequent forums emphasized the necessity of international cooperation for trustworthy AI and the OECD toolkit [S30]; and the G7’s request for OECD assistance underscores its central position in global AI governance efforts [S31].
Similar Viewpoints
Both speakers recognize that AI’s rapid growth calls for coordinated policy action and international cooperation, emphasizing the OECD’s central role in providing data‑driven guidance and global standards [8-11][24-25][28][41-42].
Speakers: Mathias Cormann, Speaker 2
Scale of AI investment and the need for effective public policy – Nearly three‑quarters of a trillion dollars in AI infrastructure investment planned this year; public policy is essential to realise AI’s full potential (Mathias Cormann)
International cooperation and standards for responsible AI – Expansion of the G‑20 AI Council to 46 members; launch of the OECD AI Index, interactive toolkit, due‑diligence guidance, and the Equitable AI Transitions Playbook to foster global coordination (Mathias Cormann)
Unexpected Consensus
Explicit appreciation of the OECD’s leadership despite the brief nature of Speaker 2’s remarks
Speakers: Mathias Cormann, Speaker 2
International cooperation and standards for responsible AI – Expansion of the G‑20 AI Council to 46 members; launch of the OECD AI Index, interactive toolkit, due‑diligence guidance, and the Equitable AI Transitions Playbook to foster global coordination (Mathias Cormann)
While Speaker 2’s contribution consists only of a thank-you, it nonetheless aligns with Mathias’s extensive emphasis on OECD-driven multilateral governance, showing an unexpected level of consensus on the organization’s leadership role [41-42][28].
Overall Assessment

The two speakers show clear alignment on the importance of AI’s economic impact, the necessity of robust public‑policy frameworks, and the value of OECD‑led international cooperation. The consensus is limited to these high‑level acknowledgements, as Speaker 2 does not introduce new substantive arguments.

Moderate consensus – agreement on overarching themes (AI transformation, policy need, and multilateral coordination) but limited depth due to the brevity of Speaker 2’s remarks. This suggests a shared baseline for future detailed discussions on AI governance and investment.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript shows virtually no substantive disagreement. Mathias Cormann presents a series of data‑driven arguments about AI’s economic impact, investment scale, risk monitoring, and workforce implications. Speaker 2 merely thanks the speaker and introduces the next panel, offering no contrasting viewpoint.

Minimal – the interaction is largely complementary, indicating strong alignment on the importance of AI policy and cooperation. This suggests that, for the topics covered, consensus exists among the participants, facilitating coordinated action rather than contentious debate.

Partial Agreements
Both speakers acknowledge the significance of the OECD’s AI agenda. Mathias stresses the massive private‑sector investment and the necessity of public policy to capture its benefits [10-11], while Speaker 2 expresses gratitude for the remarks, implicitly endorsing the importance of the issues raised [41]. No conflict is evident; the two statements converge on the shared goal of advancing AI responsibly.
Speakers: Mathias Cormann, Speaker 2
Scale of AI investment and the need for effective public policy – Nearly three‑quarters of a trillion dollars in AI infrastructure investment planned this year; public policy is essential to realise AI’s full potential (Mathias Cormann)
Thank you so much, Secretary General of OECD. These remarks, we’re very grateful for your remarks. (Speaker 2)
Takeaways
Key takeaways
– AI could boost labor productivity by up to 1 % per year across OECD and G20 economies.
– Nearly three‑quarters of a trillion dollars in AI infrastructure investment is planned this year, underscoring the need for effective public policy.
– The OECD provides data‑driven support to policymakers, tracking AI compute capacity, venture‑capital flows, and AI‑related incidents, and offers tools such as the AI Index and evidence‑based guidance.
– Reported AI incidents have risen sharply (from 92 to 324 per month), prompting the creation of a common incident‑reporting framework.
– Approximately 27 % of jobs are at high risk of automation; participation in AI training is low among adults with low literacy, highlighting the need for flexible, modular upskilling programs.
– International cooperation is expanding, with the G‑20 AI Council now at 46 members and new OECD resources (AI Index, interactive toolkit, due‑diligence guidance, Equitable AI Transitions Playbook) to promote responsible AI.
Resolutions and action items
– Release of the OECD AI Index to benchmark national AI policies.
– Launch of an interactive toolkit featuring a repository of good practices (planned for this year).
– Update of the OECD reporting framework for the Hiroshima AI Process Code of Conduct to accommodate SMEs.
– Publication of OECD due‑diligence guidance for responsible AI for companies worldwide.
– Development and dissemination of the Equitable AI Transitions Playbook (in partnership with the ILO) to support upskilling and reskilling policies.
Unresolved issues
– How to substantially increase AI training participation among low‑literacy adults.
– Specific measures needed to curb the rapid rise in AI‑related incidents and hazards.
– Details on ensuring AI supply‑chain security while expanding compute capacity.
– Balancing accelerated AI investment with the development of robust safety and privacy standards.
– Further clarification on how new OECD tools will be operationalised and adopted by member countries.
Suggested compromises
None identified
Thought Provoking Comments
AI could boost labor productivity by up to one percentage point every year across OECD and G20 countries over the next decade.
Quantifies the macro‑economic upside of AI in a clear, comparable metric, moving the conversation from abstract hype to measurable impact.
Set the baseline for the discussion of AI’s benefits, prompting subsequent references to higher living standards, lower costs and massive private investment. It framed the rest of the speech as a balancing act between this potential gain and emerging risks.
Speaker: Mathias Cormann
Our analysis shows that 61 % of all venture‑capital investment worldwide – about $259 billion – now goes to AI firms, up from just 30 % three years ago.
Highlights the rapid reallocation of capital toward AI, signalling a structural shift in the innovation ecosystem and underscoring why policy attention is urgent.
Shifted the tone from optimism to urgency, leading Cormann to stress the need for effective public policy to channel this influx responsibly. It also primed the audience for later data‑driven policy tools the OECD is developing.
Speaker: Mathias Cormann
Between 2022 and 2025 the number of AI incidents reported by the media rose dramatically, from 92 to 324 per month on average.
Introduces concrete evidence of rising safety and security concerns, challenging the earlier narrative of unmitigated benefit.
Created a turning point in the speech: after outlining benefits, the discussion pivoted to risk management. This led directly to the introduction of the OECD common framework for reporting AI incidents and the AI Index, deepening the conversation around governance.
Speaker: Mathias Cormann
About 27 % of employment is in occupations that are at the highest risk of automation; among adults with low literacy only 23 % participate in AI‑related training versus 61 % of higher‑literacy adults.
Links macro‑level productivity gains to micro‑level distributional challenges, foregrounding equity and the social dimension of AI adoption.
Shifted the focus toward inclusive policy measures, prompting the mention of flexible, modular training and the OECD‑ILO “Equitable AI Transitions Playbook.” It broadened the conversation from technical governance to workforce development.
Speaker: Mathias Cormann
The OECD AI Index provides policymakers with an evidence‑based tool to assess their progress in implementing the OECD Recommendation on AI, and we will launch an interactive toolkit featuring a repository of good practices from around the world.
Offers a concrete, actionable mechanism for peer learning and benchmarking, moving the dialogue from problem‑identification to solution‑implementation.
Encouraged participants to think about collaborative, data‑driven policy design, setting the stage for future international coordination and signaling that the OECD is not just a monitor but a facilitator of best‑practice sharing.
Speaker: Mathias Cormann
We are tracking the global distribution of public AI compute capacity to help countries design their industrial strategies and assess opportunities to enhance AI supply‑chain security.
Introduces a novel analytical angle—public compute capacity—as a strategic lever for national AI policy, expanding the policy toolkit beyond funding and regulation.
Prompted a shift toward discussion of industrial strategy and supply‑chain resilience, indicating that AI policy must also address infrastructure and geopolitical considerations.
Speaker: Mathias Cormann
Overall Assessment

The discussion was driven by a series of data‑rich statements from Mathias Cormann that moved the conversation through distinct phases: first, a quantification of AI’s economic promise; second, an illustration of the rapid surge in private investment; third, a stark presentation of rising AI‑related incidents; fourth, an emphasis on the unequal impact on workers and the need for inclusive upskilling; and finally, the unveiling of concrete OECD tools (AI Index, compute‑capacity tracking, playbooks) to translate insight into policy action. Each pivot introduced new dimensions—risk, equity, infrastructure, and governance—that deepened the dialogue and reframed the audience’s perspective from celebrating AI’s potential to confronting its systemic challenges and collaborative solutions.

Follow-up Questions
What steps are needed to advance security, privacy, and accuracy of AI agents to support broader adoption?
Cormann highlighted the need for progress on these aspects, indicating a gap in current knowledge and practice that requires further investigation.
Speaker: Mathias Cormann
How can global AI incident reporting be standardized and improved for consistency and interoperability?
He referenced the OECD common framework for reporting AI incidents, suggesting further work to refine and implement it worldwide.
Speaker: Mathias Cormann
What strategies can increase AI training participation among adults with low literacy skills?
Cormann noted low participation rates and called for more flexible, modular training tailored to individual circumstances.
Speaker: Mathias Cormann
What policies and initiatives are most effective for upskilling and reskilling workers to ensure an equitable AI transition?
He mentioned the Equitable AI Transitions Playbook, implying a need to evaluate and expand such policy measures.
Speaker: Mathias Cormann
How can the OECD AI Code of Conduct be adapted to support adoption by small and medium‑sized enterprises (SMEs)?
Cormann indicated an update to the framework for SMEs, pointing to a research need on appropriate adaptations.
Speaker: Mathias Cormann
What mechanisms can enhance international coordination among governments for responsible AI development and use?
He described the integrated global partnership on AI, suggesting further study on effective coordination models.
Speaker: Mathias Cormann
How should countries assess and develop their public AI compute capacity to strengthen industrial strategies and supply‑chain security?
Cormann mentioned tracking global distribution of compute capacity, indicating a need for deeper analysis and guidance.
Speaker: Mathias Cormann
How can the OECD AI Index and interactive toolkit be utilized to benchmark and improve national AI policies?
He introduced these tools, implying further research on their impact and best practices for implementation.
Speaker: Mathias Cormann

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote Address_Revanth Reddy_Chief Minister Telangana


Session at a glanceSummary, keypoints, and speakers overview

Summary

The India AI Impact Summit 2026 opened with a welcome and an invitation to deliver a keynote on technology-led governance and AI’s role in state growth [1-4]. Chief Minister A. Revanth Reddy thanked the audience and highlighted the gathering of global AI experts [5-6].


He traced humanity’s progress from fire and the wheel to modern inventions such as electricity and the internet, arguing that AI now represents the most transformative breakthrough [7-9][13-15]. Reddy claimed AI surpasses human intelligence, can act autonomously, and combined with robotics gives machines both physical and mental capabilities [16-22]. Noting that India missed earlier industrial and manufacturing revolutions, he emphasized that the country has only contributed services, not global AI products, and must both produce and use AI to stay competitive [25-33]. He urged India to achieve leadership across all AI layers (chips, green energy, data storage, platforms, applications, and services) and to create a national roadmap for the top three layers [34-36].


Reddy proposed establishing a centralized AI war room with state participation to monitor rapid AI developments, suggesting Hyderabad could host it [37-40]. He called for a world-class AI university focused on original research, and for India to enter the full AI supply chain by manufacturing GPU chips and securing rare minerals [40-44]. To mitigate socioeconomic impacts, he recommended a system to estimate AI-induced job losses and massive investment in reskilling, alongside an AI fund and a start-up village in Telangana to nurture unicorns [45-51]. He advocated for semi-annual AI summits in rotating cities, the creation of a national AI council and a dedicated AI ministry to regulate misuse and protect security, and the use of AI for social justice and poverty reduction [52-58].


The organizers thanked the chief minister for his insightful address and praised Telangana’s initiatives [62-66]. The audience responded with a standing ovation, underscoring broad support for the proposed AI agenda [67]. The summit’s outcome was a consensus that coordinated policy, infrastructure, and research initiatives are essential for India to achieve global AI leadership [34-36][55-57].


Keypoints

AI’s transformative power demands new governance. The speaker describes AI as surpassing human intelligence, possessing agency, and being able to act autonomously, emphasizing the need for rules to prevent misuse [13-22][56].


India must build a complete AI ecosystem to avoid past missed revolutions. He argues that the country should both use and produce AI technologies, developing capabilities from chips and green energy to platforms and services [33-36][42-44].


Creation of dedicated institutions and mechanisms is essential. Proposals include an AI war-room, a world-class AI university, a national AI council and ministry, an AI fund and startup village, and semi-annual AI summits to coordinate research, policy, and industry [37-41][50-53][55-57].


Addressing socio-economic impacts is a priority. The speaker calls for systems to estimate AI-driven job losses, massive investment in reskilling, and using AI to advance social justice, inclusion, and poverty reduction [45-49][57].


Invitation for collaboration with Telangana and global partners. He urges national and international institutions to work with Telangana, positioning the state as a hub for AI development and partnerships [58-60][61-62].


Overall purpose: The discussion aims to articulate a strategic vision for “technology-led governance” by positioning AI at the core of India’s future growth, outlining concrete policy, infrastructure, and educational initiatives, and rallying support from government, industry, and academia to make India a global AI leader.


Overall tone: The conversation begins with formal, welcoming remarks, shifts to an enthusiastic and visionary tone as the speaker extols AI’s potential, becomes urgent and prescriptive when highlighting gaps and the need for institutional action, and concludes on a celebratory, appreciative note as the host thanks the speaker and calls for applause. The tone remains positive throughout but moves from admiration to a call-to-action.


Speakers

Speaker 1


– Role/Title: Event moderator / host [S1]


– Area of expertise: Event facilitation (not specified)


A. Revanth Reddy


– Role/Title: Chief Minister of Telangana [S4]


– Area of expertise: Technology-led governance, AI policy, public administration


Additional speakers:


Honorable Prime Minister Narendra Modi


– Role/Title: Prime Minister of India


– Area of expertise: National leadership, policy


Ashwini Vaishnaw


– Role/Title: Minister for Electronics and Information Technology, Government of India


– Area of expertise: Electronics, IT, digital policy


Full session report: Comprehensive analysis and detailed insights

Opening Welcome (Speaker 1) – The host offered a formal greeting and invited the chief minister to deliver the keynote on “technology-led governance and harnessing AI for the state’s growth” [1-4].


Keynote – Chief Minister A. Revanth Reddy


– He thanked the audience and acknowledged the presence of leading AI experts from around the world [5-6].


– He traced humanity’s progress from the discovery of fire, the wheel and agriculture, through democracy, the rule of law and universal voting rights, to modern inventions such as electricity, the aeroplane, vaccines and the internet [7-12].


– He noted that after the industrial revolution human physical capability can no longer match machines, citing the inability to fly like an aeroplane, swim like a ship or run at motorcycle speed [13-15].


– Describing AI as “the greatest invention” of the present era, he said a GPU-based AI system now exceeds human intelligence, can compose poetry, generate reports, produce films and presentations, and “knows almost everything” [16-20].


– He emphasized AI’s agency, contrasting it with an aeroplane’s flight and a car’s motion that depend on human commands, and stating that AI can “order to itself” [21-22].


– He reflected on India’s past, observing that the country missed both the industrial and manufacturing revolutions, contributed mainly to the services revolution (software and telecom), and has not created global products such as Google, Facebook or YouTube [23-32].


– He argued that a nation must both use and produce a technology, and called for India to lead across all layers of the AI value chain (chips, green energy, data storage, platforms, applications and services) and to draft a roadmap that secures leadership in the top three layers [33-36].


Institutional and infrastructural proposals


– An AI war-room that brings together central and state authorities to monitor rapid AI developments, with Hyderabad proposed as the host [37-40].


– A world-class AI university with state-of-the-art facilities focused on original research [40-41].


– Domestic GPU-chip manufacturing, full participation in the AI supply chain, and procurement of rare minerals [42-44].


– A system to estimate AI-induced job losses and massive investment in reskilling programmes for displaced workers [45-49].


– A dedicated AI fund to support start-ups and the creation of an AI start-up village in Telangana that could serve the whole country [50-51].


– Semi-annual AI summits rotating across Indian cities, with Hyderabad as a possible venue [52-54].


– A national AI council modelled on the GST Council or NITI Aayog, and AI ministries at both centre and state levels to draft laws preventing misuse of AI, especially where national security is concerned [55-57].


– Leveraging AI to advance social justice, inclusion and poverty eradication [57].


– He concluded by inviting collaboration in Telangana and ending with the patriotic salutes “Jai Bharat. Jai Telangana.” [65-66].


Closing Remarks (Speaker 1) – The host thanked the chief minister, praised Telangana’s ongoing AI initiatives, invited participants to the upcoming sessions, and the audience gave a big round of applause for Shri A. Revanth Reddy [66-67]; the host’s final remarks were delivered thereafter [68].


Session transcript: Complete transcript of the session
Speaker 1

So I’d like to welcome you on behalf of the India mission and the India AI Impact Summit 2026. Your leadership is exemplary and we’ve been honored to have you here. So I would like to invite you to the dais to deliver a keynote session on technology-led governance and harnessing the power of AI in the state’s growth. Thank you.

A. Revanth Reddy

Good afternoon, friends. My pleasure to address this event, because some of the best minds from all over the world have come together at the Artificial Intelligence Summit in India. I congratulate the Government of India, Honorable Prime Minister Narendra Modi and Ashwini Vaishnaw, Minister for Electronics and IT, for making this happen. Across human history, great ideas, discoveries and inventions have changed our lives. Discoveries of fire, wheels and agriculture changed our lives. Ideas like democracy, rule of law, universal voting rights and reservations changed our lives. Technology like electricity, aeroplane, vaccines and internet changed our lives. In the past, inventions added to human physical strength and innovation. After the industrial revolution, our bodies never matched machines. We cannot fly like a plane, swim like a ship, or run at the speed of a motorcycle or car.

Today, we are witnessing the rise of our greatest invention, that is AI. Artificial intelligence has made a GPU chip more intelligent than humans. It can write poetry and reports, make films and presentations, and it knows almost everything. These days, people say on social media that humans are not the most intelligent anymore. AI is more intelligent. AI also has agency, the power to decide. An aeroplane can fly only if we tell it. A car will move or stop only if we tell it. AI can order to itself. Combined, AI and robotics give machines both physical and mental capabilities. This context is important when we set an agenda for the future, and the AI race has already begun.

We see leadership of a few countries, companies and people. India missed the industrial revolution and the manufacturing revolution. We played a role in the services revolution, especially software and telecom. But even in software, we created services but not global products: Google Search, Google Maps, Twitter, Facebook, YouTube and WhatsApp. We Indians use them. We Indians worked in these companies, but we don’t own them. We did not create them. There are two ways any country can influence a global trend: use or produce. With AI, we have to both produce and use. India must become a leader in all layers of AI: chips, green energy, data storage, platforms, applications and services. We must create a roadmap to ensure real leadership in the top three layers.

Secondly, India must create a war room with the centre and states to monitor and respond to AI developments. Thank you. An AI war room for India is crucial, because development in AI can be very quick. Hyderabad can build an AI war room for India with the support of the Government of India. We need to establish an AI university of global standards with top facilities focusing on original research. We have seen many controversies in this event. Fourthly, to lead the AI revolution, we have to manufacture GPU chips. We have to become part of the entire supply chain. We must get rare minerals. Fifth, we have to put in place a system to estimate job losses because of AI. India cannot delay this anymore.

We have to invest massively in reskilling of people who lose their jobs. India needs an AI fund for start-ups so our youth can work on all areas of AI and aim to become unicorns. Telangana can establish an AI start-up village for the entire country with the support of the Government of India. We need more AI summits. Not once a year, but every six months. Different cities can host them, like Hyderabad. I request Honourable Prime Minister Shri Narendra Modi ji to establish a national AI council, like the GST Council or NITI Aayog. We need an AI ministry both at centre and state level.

to help make laws to prevent misuse of AI, especially against national security and interests. We need to use AI strongly for the achievement of social justice, inclusion and removal of poverty. Finally, I invite you to Telangana for discussions, for partnerships. I welcome global and national institutions to work in my state on AI. Thank you. Jai Bharat. Jai Telangana.

Speaker 1

Thank you, sir. Thank you very much. On behalf of the organizers, I would like to invite you to some of our more interesting sessions. Thank you for the insightful speech. And we are all inspired by the work which is being done in Telangana under your leadership. Thank you very much. Please, audience please, a big round of applause for Shri A. Revanth Reddy, the Honourable Chief Minister of Telangana.

Related Resources: Knowledge base sources related to the discussion topics (17)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The host offered a formal greeting and invited the chief minister to deliver the keynote on “technology‑led governance and harnessing AI for the state’s growth”.”

The knowledge base records a formal opening of the summit and acknowledges the participant’s (the chief minister’s) role, confirming that a formal greeting and invitation were made [S8] and [S4].

Additional Context (medium)

“He emphasized AI’s agency, contrasting it with an aeroplane’s flight and a car’s motion that depend on human commands, and stating that AI can “order to itself”.”

Several sources discuss AI as an agent capable of autonomous decision-making, highlighting the distinction between tool-like machines and agentic AI systems [S64] and [S65] and [S66].

Additional Context (medium)

“He noted that after the industrial revolution human physical capability can no longer match machines, citing the inability to fly like an aeroplane, swim like a ship or run at motorcycle speed.”

The knowledge base includes commentary on the industrial revolution’s impact on human labour and the superior physical capabilities of machines, providing background for this observation [S62].

Additional Context (medium)

“A system to estimate AI‑induced job losses and massive investment in reskilling programmes for displaced workers.”

Discussions in the knowledge base highlight concerns about job displacement due to AI and the need for reskilling, aligning with the proposed system [S62].

Additional Context (low)

“He traced humanity’s progress from the discovery of fire, the wheel and agriculture, through democracy, the rule of law and universal voting rights, to modern inventions such as electricity, the aeroplane, vaccines and the internet.”

The knowledge base references broad human progress in democracy, rule of law, and improvements in health, education, and technology, offering contextual support for the historical overview [S58] and [S59].

External Sources (66)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Keynote Address_Revanth Reddy_Chief Minister Telangana — -Participant: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or organizer…
S6
Welcome address — ## Overview and Context
S7
WS #53 Promoting Children’s Rights and Inclusion in the Digital Age — Speaker 1: Yes. So hi, everyone. When we talk about child rights online, so I’m gonna bring in a little bit from th…
S8
Keynote Address_Revanth Reddy_Chief Minister Telangana — Evidence:AI can write poetry and reports, make films and presentations, and it knows almost everything. People say on so…
S9
Open Forum: A Primer on AI — The investment for artificial intelligence surpasses the collective investment for human intelligence.
S10
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — But I think if we get this right and these next steps, and I think India has a really critical part to play in this on t…
S11
Fixing Healthcare, Digitally — In conclusion, Revanth Reddy Anumula’s vision and actions underscore his dedication to leveraging technology to improve …
S12
https://dig.watch/event/india-ai-impact-summit-2026/keynote-address_revanth-reddy_chief-minister-telangana — Good afternoon, friends. My pleasure to address this event because of some of the best of minds from all over the world …
S13
Protecting Democracy against Bots and Plots — Artificial Intelligence can deliver various results that need to be regulated to prevent misuse.
S14
Closing Ceremony — Multiple speakers addressed the transformative challenges posed by artificial intelligence and the need for new approach…
S15
Agentic AI in Focus Opportunities Risks and Governance — This observation highlights a fundamental shift in governance – from regulating human behavior to regulating autonomous …
S16
The Global Power Shift India’s Rise in AI & Semiconductors — And again, AI leadership will not really happen by accident. It will require a deliberate alignment across policy, indus…
S17
Global AI Policy Framework: International Cooperation and Historical Perspectives — The concept of open sovereignty, emphasis on building upon existing institutions while ensuring inclusive representation…
S18
Building Population-Scale Digital Public Infrastructure for AI — Summary:All speakers agree that moving from fragmented pilot projects to systematic, coordinated approaches is essential…
S19
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S20
Leveraging the UN system to advance global AI Governance efforts — Tshilidzi Marwala:Thanks very much, Doreen. Turning to the United Nations University, Mr. Chiditsi Marwada, so how can t…
S21
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S22
Why science metters in global AI governance — “But if your potential or probable outcome is the end of jobs, then you need to think about universal basicism.”[113]. “…
S23
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Tatjana Titareva: Thank you, Alex. I would like to say indeed that part of the roadmap is the need for capacity building…
S24
AI for Social Empowerment_ Driving Change and Inclusion — Education and Skills System Overhaul:Investment requires fundamental reimagining rather than incremental improvement. Cu…
S25
Telangana government and UNESCO partner to drive ethical AI development and adoption — The Government of the Indian state Telangana and UNESCOhave collaborated to implementthe UNESCO Recommendation on the Et…
S26
Comprehensive Discussion Report: President Emmanuel Macron at the World Economic Forum — Both speakers agree that economic growth is fundamental for reducing social divisions and that it should be a central pr…
S27
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S28
Laying the foundations for AI governance — ### Science-Based Policy as Common Ground – The need for collaboration between industry and regulators Artemis Seaford…
S29
How to make AI governance fit for purpose? — ## Key Initiatives and Announcements ## Key National Positions ### Chinese Perspective ### French Perspective ### Un…
S30
Advancing Scientific AI with Safety Ethics and Responsibility — Summary:Both speakers emphasize that governance frameworks developed in Western contexts may not be appropriate for deve…
S31
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S32
Comprehensive Report: 18th Meeting of the Disarmament and International Security Committee — Despite geopolitical divisions on other issues, there was notable agreement on the need for responsible governance of ar…
S33
AI governance needs urgent international coordination — AGIS Reports analysisemphasises that as AI systems become pervasive, they create significant global challenges, includin…
S34
Fixing Healthcare, Digitally — Anumula argues that affordable and high-quality healthcare is essential for the development and progress of any society….
S35
Building Population-Scale Digital Public Infrastructure for AI — Thank you so much, Shankar. And absolutely a pleasure and honor to be here with all of you. Thank you so much. The way I…
S37
Agents of Change AI for Government Services & Climate Resilience — Telangana’s satellite-driven heat analysis extends beyond temperature mapping to shape urban zoning, green belt planning…
S38
Protecting Democracy against Bots and Plots — Artificial Intelligence can deliver various results that need to be regulated to prevent misuse.
S39
Keynote Address_Revanth Reddy_Chief Minister Telangana — The speaker asserts that artificial intelligence has reached a level of intelligence beyond humans and can act autonomou…
S40
Closing Ceremony — Multiple speakers addressed the transformative challenges posed by artificial intelligence and the need for new approach…
S41
Driving Enterprise Impact Through Scalable AI Adoption — Summary:Both speakers agree that AI is a powerful tool but emphasize the need for human oversight, critical thinking, an…
S42
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — This statement encapsulates a fundamental tension in the AI age – the temptation to automate decision-making versus the …
S43
Keynote Address_Revanth Reddy_Chief Minister Telangana — Good afternoon, friends. My pleasure to address this event because of some of the best of minds from all over the world …
S44
Keynote-Jeet Adani — The speaker contends that inclusion without capability represents weakness, while capability without sovereignty leads t…
S45
Building Population-Scale Digital Public Infrastructure for AI — Summary:All speakers agree that moving from fragmented pilot projects to systematic, coordinated approaches is essential…
S46
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S47
State of play of major global AI Governance processes — Thomas Schneider:Thank you, and thanks for convening this session. Before I go to the treaty itself, I would like to giv…
S48
GOVERNING AI FOR HUMANITY — – 190 Discussions about AI often resolve into extremes. In our consultations around the world, we engaged with those who…
S49
Leveraging the UN system to advance global AI Governance efforts — Tshilidzi Marwala:Thanks very much, Doreen. Turning to the United Nations University, Mr. Chiditsi Marwada, so how can t…
S50
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Economic | Development | Sociocultural Mario Nobile emphasized that sustainability, energy consumption, and job displac…
S51
AI for Social Empowerment_ Driving Change and Inclusion — so just a little tongue -in -cheek we go back to the 1600s we’d asked chat GPT then if Galileo was correct it would have…
S52
Fixing Healthcare, Digitally — Anumula argues that affordable and high-quality healthcare is essential for the development and progress of any society….
S53
Telangana government and UNESCO partner to drive ethical AI development and adoption — The Government of the Indian state Telangana and UNESCOhave collaborated to implementthe UNESCO Recommendation on the Et…
S54
Telangana launches Aikam to scale AI deployment — The Telangana government haslaunchedAikam, a new autonomous body aimed at positioning the state as a global proving grou…
S55
Keynote-HE Emmanuel Macron — Artificial intelligence Reference to previous address by Antonio Guterres; formal titles and protocol; mention of the A…
S56
Welcome address — Doreen Bogdan Martin:Good afternoon everyone. Welcome to day zero of the AI for Good Global Summit. Our eagerly anticipa…
S57
Opening address of the co-chairs of the AI Governance Dialogue — Majed Sultan Al Mesmar: Bismillah ar-Rahman ar-Rahim. Excellencies, distinguished guests, colleagues, friends, As-salamu…
S58
S59
The Arc of Progress in the 21st Century / DAVOS 2025 — Steven Pinker argues that there has been significant progress in various aspects of human flourishing over time, includi…
S60
UN Secretary General Statement | UNGA 78 — Seventy-five years after the Universal Declaration of Human Rights, there has been enormous progress in some areas, from…
S61
Internet Governance Forum 2024 — In a world driven by economic growth and efficiency, can humans compete with machines? Should they? Is there space to ad…
S62
Are we creating alien beings? — Geoffrey Hinton: Well, at this point, I should emphasize I’m a scientist, not an economist. Yeah, but you’re a smart guy…
S63
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — – Naveen GV- Piyush Nangru- Speaker 4- Ashish Gupta- Shweta Chaudhary Human Intelligence vs Artificial Intelligence in …
S64
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — The discussion’s foundation rested on Harari’s crucial distinction between AI as an agent rather than a mere tool. He ar…
S65
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Owen Larter from Google DeepMind provided an industry perspective on the technical requirements for robust AI assurance,…
S66
The fading of human agency in automated systems — Finally, governance frameworks need more precise and honest language. When decisions are effectively automated, they sho…
Speakers Analysis: Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
1 argument, 92 words per minute, 136 words, 87 seconds
Argument 1
Welcome and praise of leadership (Speaker 1)
EXPLANATION
Speaker 1 opened the ceremony by extending a warm welcome on behalf of the India mission and the AI Impact Summit. He praised the guest’s leadership and invited them to deliver the keynote on technology‑led governance and AI’s role in state growth.
EVIDENCE
Speaker 1 welcomed the audience on behalf of the India mission and the India AI Impact Summit 2026, praised the leadership of the guest, and invited them to the dais for a keynote on technology-led governance and harnessing AI for state growth [1-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gratitude and admiration expressed for the chair’s leadership in the opening remarks aligns with the welcome address and the explicit praise for the chair’s stewardship documented in [S5] and [S6].
MAJOR DISCUSSION POINT
Opening remarks and leadership acknowledgment
A
A. Revanth Reddy
11 arguments, 95 words per minute, 697 words, 437 seconds
Argument 1
AI as the greatest invention surpassing human intelligence (A. Revanth Reddy)
EXPLANATION
He framed AI as humanity’s most significant invention, claiming it now exceeds human intelligence in many domains. He highlighted AI’s ability to create content, make decisions, and its growing perception on social media as more intelligent than humans.
EVIDENCE
He stated that AI is the greatest invention of our time, that it can write poetry, reports, make films and presentations, and that people on social media claim humans are no longer the most intelligent, asserting that AI is more intelligent and possesses agency to decide actions [13-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reddy’s claim that AI now exceeds human intelligence and can generate poetry, reports, films, and presentations is corroborated by the keynote transcript, which notes AI’s creative capabilities and the perception on social media that AI is more intelligent than humans [S4] and [S8].
MAJOR DISCUSSION POINT
AI surpasses human intelligence
Argument 2
India must become a leader in all AI layers, learning from missed past revolutions (A. Revanth Reddy)
EXPLANATION
He argued that India missed earlier industrial and manufacturing revolutions and has only participated in the services revolution without creating global products. Consequently, India must lead across all AI layers—hardware, energy, data, platforms, applications and services—by drafting a roadmap and establishing monitoring mechanisms.
EVIDENCE
He noted that India missed the industrial and manufacturing revolutions, contributed mainly to the services revolution without owning global products, and called for leadership in every AI layer (chips, green energy, data storage, platforms, applications, services) with a clear roadmap and a war-room to monitor AI developments [25-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for comprehensive AI leadership across chips, green energy, data storage, platforms, applications and services, and the reference to India missing earlier industrial and manufacturing revolutions, are both documented in the keynote address [S4] and reinforced in [S8].
MAJOR DISCUSSION POINT
Need for comprehensive AI leadership
Argument 3
Manufacture GPU chips and secure rare minerals for AI hardware (A. Revanth Reddy)
EXPLANATION
He emphasized that India should develop domestic capability to produce GPU chips and become part of the full AI hardware supply chain. Securing access to rare minerals is essential for this manufacturing effort.
EVIDENCE
He called for India to manufacture GPU chips, integrate into the entire AI supply chain, and obtain the rare minerals required for chip production [42-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The proposal to develop domestic GPU chip manufacturing and to secure rare mineral supply chains is supported by the keynote, which emphasizes building a full AI hardware supply chain and acquiring the necessary rare minerals for semiconductor production [S4] and [S8].
MAJOR DISCUSSION POINT
Domestic AI hardware production
Argument 4
Build supporting infrastructure such as green energy, data storage and platforms (A. Revanth Reddy)
EXPLANATION
He identified critical infrastructure components—green energy, data storage, and digital platforms—that must be built alongside AI chips and applications to sustain AI growth. These layers are essential for a robust AI ecosystem.
EVIDENCE
He listed the necessary AI infrastructure components, specifically mentioning green energy, data storage, and platforms as part of the AI layers that India must develop [35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The identification of green energy, data storage and digital platforms as essential AI infrastructure components appears in the keynote discussion of the AI layers that India must develop [S4] and [S8].
MAJOR DISCUSSION POINT
AI supporting infrastructure
Argument 5
Establish an AI war room for rapid monitoring and response (A. Revanth Reddy)
EXPLANATION
He proposed creating a national AI war room that brings together central and state authorities to monitor AI developments and respond swiftly. Hyderabad was suggested as a potential location with government backing.
EVIDENCE
He advocated for an AI war room involving the centre and states to monitor and react to AI developments, proposing Hyderabad as the host with support from the Government of India [37-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The suggestion to create a national AI war room involving centre and state authorities, with Hyderabad as a potential host, is detailed in the keynote address [S4] and reiterated in [S8].
MAJOR DISCUSSION POINT
AI monitoring mechanism
Argument 6
Create a world‑class AI university focused on original research (A. Revanth Reddy)
EXPLANATION
He urged the establishment of a globally competitive AI university equipped with top‑tier facilities, dedicated to original research. Such an institution would nurture homegrown AI talent and innovation.
EVIDENCE
He recommended setting up an AI university of global standards with top facilities that emphasizes original research [40].
MAJOR DISCUSSION POINT
AI education and research hub
Argument 7
Set up an AI fund and a startup village to nurture AI startups and unicorns (A. Revanth Reddy)
EXPLANATION
He suggested creating a dedicated AI fund to finance startups and establishing an AI start‑up village in Telangana to foster entrepreneurship, aiming to produce AI unicorns across the country.
EVIDENCE
He called for an AI fund to support start-ups and proposed an AI start-up village in Telangana to nurture AI entrepreneurship and generate unicorns [50-51].
MAJOR DISCUSSION POINT
Funding and ecosystem for AI startups
Argument 8
Form a national AI council and an AI ministry at centre and state levels for regulation and security (A. Revanth Reddy)
EXPLANATION
He appealed to the Prime Minister to create a national AI council, akin to the GST Council, and to set up an AI ministry at both central and state levels. This body would draft laws to prevent AI misuse, especially concerning national security and public interest.
EVIDENCE
He requested the establishment of a national AI council similar to the GST Council and an AI ministry at centre and state levels to legislate against AI misuse, particularly for national security and interests [55-57].
MAJOR DISCUSSION POINT
Governance and regulatory framework for AI
Argument 9
Organise AI summits every six months in different cities (A. Revanth Reddy)
EXPLANATION
He advocated increasing the frequency of AI summits to twice a year, rotating the venue among various Indian cities such as Hyderabad, to maintain momentum and broaden participation.
EVIDENCE
He suggested holding AI summits every six months rather than annually, with different cities like Hyderabad hosting them to foster continuous dialogue [52-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reddy’s recommendation to hold AI summits biannually in rotating cities such as Hyderabad is reflected in the keynote, which calls for more frequent AI summits and venue rotation [S4] and [S8].
MAJOR DISCUSSION POINT
Regular AI convenings
Argument 10
Estimate AI‑induced job losses and invest heavily in reskilling displaced workers (A. Revanth Reddy)
EXPLANATION
He called for a systematic assessment of potential job losses due to AI and urged massive investment in future‑oriented initiatives, particularly in reskilling programs for workers whose jobs are displaced.
EVIDENCE
He proposed establishing a system to estimate AI-related job losses, emphasized the urgency of massive investment, and highlighted the need for extensive reskilling of affected workers [45-49].
MAJOR DISCUSSION POINT
Job impact and reskilling strategy
Argument 11
Leverage AI to advance social justice, inclusion and poverty eradication (A. Revanth Reddy)
EXPLANATION
He stressed that AI should be deployed to promote social justice, inclusion, and the eradication of poverty. An AI ministry would guide these socially beneficial applications.
EVIDENCE
He emphasized using AI strongly to achieve social justice, inclusion, and poverty removal, calling for an AI ministry to steer such efforts [57].
MAJOR DISCUSSION POINT
AI for inclusive development
Agreements
Agreement Points
Both speakers stress that AI should be a central driver for state‑level growth and governance.
Speakers: Speaker 1, A. Revanth Reddy
India must become a leader in all AI layers, learning from missed past revolutions
Leverage AI to advance social justice, inclusion and poverty eradication
Speaker 1 introduced the keynote on “technology-led governance and harnessing the power of AI in the state’s growth” [3], and Reddy called for India to lead across all AI layers and to use AI for social justice and poverty removal, linking AI to broader state development [34-36][57].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the emphasis in the World Economic Forum report where AI is identified as a core pillar for economic competitiveness and growth at sub-national levels [S26], and it aligns with the Global AI Policy Framework’s call for AI to underpin state-level development strategies [S27].
Both speakers highlight Telangana (and Hyderabad) as a focal point for building AI infrastructure and ecosystems.
Speakers: Speaker 1, A. Revanth Reddy
Establish an AI war room for rapid monitoring and response
Create a world‑class AI university focused on original research
Set up an AI fund and a startup village to nurture AI startups and unicorns
Organise AI summits every six months in different cities
Speaker 1 praised the work being done in Telangana under the chief minister’s leadership [65] while Reddy proposed a Hyderabad-based AI war room, a global-standard AI university, an AI start-up village in Telangana, and bi-annual AI summits rotating among Indian cities [37-40][51][52-54].
POLICY CONTEXT (KNOWLEDGE BASE)
The focus on Hyderabad as a hub for AI-driven healthcare and software is echoed in the regional analysis that positions Telangana as a global leader in these sectors [S34], while the description of satellite-driven heat analysis and edge-computing nodes across the state further illustrates concrete AI ecosystem initiatives in Telangana [S37].
Similar Viewpoints
Both see AI as essential for advancing the nation’s socioeconomic agenda and for delivering inclusive growth, with the chief minister’s leadership positioned as a catalyst [3][34-36][57].
Speakers: Speaker 1, A. Revanth Reddy
India must become a leader in all AI layers, learning from missed past revolutions
Leverage AI to advance social justice, inclusion and poverty eradication
Both endorse making Telangana a hub for AI research, entrepreneurship, and policy coordination, signalling a joint commitment to concrete institutional building [65][37-40][51][52-54].
Speakers: Speaker 1, A. Revanth Reddy
Establish an AI war room for rapid monitoring and response
Create a world‑class AI university focused on original research
Set up an AI fund and a startup village to nurture AI startups and unicorns
Organise AI summits every six months in different cities
Unexpected Consensus
Agreement on the need for dedicated AI governance structures (council/ministry/war‑room).
Speakers: Speaker 1, A. Revanth Reddy
Establish an AI war room for rapid monitoring and response
Form a national AI council and an AI ministry at centre and state levels for regulation and security
Speaker 1’s invitation to discuss “technology-led governance” [3] aligns with Reddy’s call for an AI war room and a national AI council/ministry, revealing a shared expectation for formal AI governance mechanisms that was not explicitly stated in the opening remarks.
POLICY CONTEXT (KNOWLEDGE BASE)
The call for a dedicated AI governance body reflects the inclusive governance principles outlined in the Global AI Policy Framework, which advocates building on existing institutions and creating specialised AI councils or ministries [S27]; it is also supported by discussions on multi-stakeholder involvement and the need for tailored governance frameworks for developing contexts [S29][S30].
Overall Assessment

The two speakers converge on three core ideas: (1) AI as a strategic engine for state‑level economic and social development; (2) Telangana/Hyderabad as the focal point for building AI infrastructure, research, and entrepreneurship; (3) the necessity of dedicated governance bodies (war room, council, ministry) to steer AI deployment. These points reflect a high degree of consensus on both vision and concrete actions.

Strong consensus – the overlap between the opening remarks and the detailed policy roadmap indicates a unified political commitment, which could accelerate policy formulation, investment, and capacity‑building initiatives across the identified domains.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The exchange consists of a brief opening welcome by Speaker 1 and an extensive keynote by A. Revanth Reddy. Speaker 1’s remarks are limited to welcoming the guest and praising leadership, while Reddy outlines a comprehensive AI strategy for India. There is no explicit conflict or contradictory stance between the two speakers; the dialogue is largely complementary, with Speaker 1 facilitating the platform for Reddy’s proposals. Consequently, the level of disagreement is minimal, indicating a consensus‑oriented setting that supports the broader agenda of advancing AI in India.

Very low – the speakers do not present opposing views; the interaction is supportive and non‑contentious, suggesting smooth alignment on the overarching goal of promoting AI development.

Takeaways
Key takeaways
AI is described as the greatest invention, surpassing human intelligence and possessing agency. India has missed previous industrial and manufacturing revolutions and must now aim to lead in all layers of AI (chips, green energy, data storage, platforms, applications, services). Building robust AI infrastructure—including GPU manufacturing, rare mineral procurement, green energy, and data storage—is essential for leadership. Strong institutional and policy frameworks are required: an AI war room, a world‑class AI university, a national AI council, and dedicated AI ministries at central and state levels. Socio‑economic strategies are needed to assess AI‑driven job losses, invest in massive reskilling, and use AI for social justice, inclusion, and poverty eradication. Continuous engagement through regular AI summits and dedicated startup ecosystems (AI fund and startup village) will sustain momentum.
Resolutions and action items
Proposal to establish an AI war room coordinated between the centre and states, with Hyderabad as a potential hub. Recommendation to create a world‑class AI university focused on original research. Suggestion to set up a national AI fund and an AI startup village in Telangana to nurture startups and unicorns. Call for the formation of a national AI council (modeled on GST Council or NITI Aayog) and an AI ministry at both central and state levels for regulation and security. Plan to organize AI summits every six months in different Indian cities, including Hyderabad. Invitation to global and national institutions to partner with Telangana on AI initiatives.
Unresolved issues
Specific funding sources, budget allocations, and timelines for the AI war room, university, fund, and startup village were not detailed. Mechanisms for acquiring rare minerals and establishing a domestic GPU manufacturing supply chain remain undefined. Criteria and methodology for estimating AI‑induced job losses and the scale of reskilling programs were not specified. Exact legislative and regulatory frameworks for the proposed AI ministry and council were not outlined. Roles and responsibilities between central and state governments in the AI governance structure need clarification.
Suggested compromises
None identified
Thought Provoking Comments
AI also has agency, power to decide. An aeroplane can fly only if it has the power to decide. We tell it. A car will move or stop only if we tell it. AI can order to itself.
This reframes AI from a passive tool to an entity with decision‑making capability, challenging the common perception that AI merely follows human commands.
It shifted the tone of the speech from a historical overview to a forward‑looking debate about control and responsibility, paving the way for subsequent calls for governance mechanisms such as an AI war room and a national AI council.
Speaker: A. Revanth Reddy
India missed the industrial revolution and the manufacturing revolution. We played a role in the services revolution, especially software and telecom, but we created services, not global products like Google, Facebook, YouTube.
By candidly acknowledging India’s past shortcomings, the speaker challenges the audience to rethink India’s current position in the global technology landscape.
This admission created a turning point that justified the later, more ambitious proposals (building chips, AI supply chain, AI university) and set a critical baseline for the need to ‘produce’ AI, not just ‘use’ it.
Speaker: A. Revanth Reddy
India must become a leader in all layers of AI – chips, green energy, data storage, platforms, applications and services – and create a roadmap to ensure real leadership in the top three layers.
It expands the conversation from a single‑dimensional focus on software to a holistic, end‑to‑end AI ecosystem, introducing strategic depth.
The comment opened multiple new discussion strands: hardware manufacturing, energy policy, data infrastructure, and platform development, each later referenced in specific policy suggestions (e.g., GPU chip production, AI war room).
Speaker: A. Revanth Reddy
We need to establish an AI war room with centre and states to monitor and respond to AI developments, and an AI university of global standards focusing on original research.
Proposes concrete institutional mechanisms for both rapid response and long‑term talent development, moving the dialogue from abstract vision to actionable governance.
These proposals introduced the themes of national security, inter‑governmental coordination, and education reform, signalling a shift toward policy implementation and prompting the audience’s applause and endorsement.
Speaker: A. Revanth Reddy
We have to put a system to estimate job losses because of AI and invest massively in reskilling of people who lose their jobs.
Highlights the socioeconomic consequences of AI, challenging any purely techno‑optimistic narrative and foregrounding human impact.
This comment added a social‑justice dimension to the conversation, influencing the later statement about using AI for inclusion and poverty eradication, and underscoring the need for an AI fund and start‑up village.
Speaker: A. Revanth Reddy
I request the Honourable Prime Minister to establish a national AI council, like the GST Council or NITI Aayog, and an AI ministry at both centre and state level to make laws preventing misuse of AI, especially against national security and interests.
Calls for a dedicated, high‑level policy body, directly challenging the existing institutional framework and proposing a new governance architecture.
This suggestion crystallised the earlier calls for a war room and university into a broader legislative agenda, setting a clear next step for policymakers and signaling a shift from discussion to institutional action.
Speaker: A. Revanth Reddy
We need to use AI strongly for achievements of social justice, inclusion and removal of poverty.
Frames AI as a tool for equitable development, expanding the conversation beyond economic competitiveness to ethical and humanitarian goals.
It broadened the scope of the dialogue, linking technological ambition with societal outcomes and reinforcing the earlier emphasis on reskilling and job‑loss mitigation.
Speaker: A. Revanth Reddy
Overall Assessment

The speech by A. Revanth Reddy introduced a cascade of transformative ideas that moved the discussion from a celebratory opening to a strategic roadmap for India’s AI future. Key comments challenged prevailing assumptions about AI’s role, highlighted India’s historical gaps, and proposed concrete institutional, economic, and social measures. Each insight acted as a turning point, expanding the conversation’s breadth and depth, and culminating in a clear call for coordinated governance, education, and ethical deployment. Although the only explicit response was brief applause from the moderator, the identified comments collectively set the agenda for future policy debates and positioned AI as both a national priority and a catalyst for inclusive growth.

Follow-up Questions
How to establish an AI war room at the national level involving centre and states to monitor and respond to rapid AI developments
A coordinated war room is seen as crucial for timely policy response and security oversight as AI advances quickly.
Speaker: A. Revanth Reddy
What roadmap is needed for India to achieve leadership in the top three layers of AI (chips, green energy, data storage, platforms, applications, services)
A strategic plan is required to guide investment and capability building across critical AI infrastructure domains.
Speaker: A. Revanth Reddy
How to set up an AI university of global standards with top facilities focused on original research in India
World‑class education and research are essential to produce indigenous AI talent and innovations.
Speaker: A. Revanth Reddy
What steps are required for India to manufacture GPU chips and become part of the full AI hardware supply chain, including securing rare minerals
Domestic chip production reduces dependence on imports and strengthens the AI ecosystem.
Speaker: A. Revanth Reddy
How to develop a system to estimate AI‑induced job losses and design effective reskilling programs
Understanding employment impacts is vital for social stability and for planning large‑scale workforce transformation.
Speaker: A. Revanth Reddy
What model should be used for an AI fund to support start‑ups and how to create an AI start‑up village in Telangana for nationwide benefit
Funding and ecosystem hubs can accelerate entrepreneurship and help Indian start‑ups become global unicorns.
Speaker: A. Revanth Reddy
Should AI summits be held bi‑annually across different Indian cities, and what format would maximize knowledge sharing?
More frequent summits can sustain momentum, foster collaboration, and disseminate best practices.
Speaker: A. Revanth Reddy
How to establish a National AI Council, similar to the GST Council or NITI Aayog, and define its mandate
A dedicated council would provide coordinated policy guidance and oversight for AI development.
Speaker: A. Revanth Reddy
What legal framework is needed for an AI ministry at centre and state level to prevent misuse of AI, especially concerning national security and interests
Regulations are essential to safeguard against harmful applications while encouraging responsible innovation.
Speaker: A. Revanth Reddy
How can AI be leveraged effectively for social justice, inclusion, and poverty alleviation in India
Targeted AI solutions can address inequities and improve public service delivery for marginalized populations.
Speaker: A. Revanth Reddy
What partnerships with global and national institutions are required to advance AI initiatives in Telangana and across India
Collaboration can bring expertise, technology transfer, and investment to accelerate AI adoption.
Speaker: A. Revanth Reddy

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards


Session at a glance: Summary, keypoints, and speakers overview

Summary

This discussion focused on identifying talent gaps and requirements for developing India’s next-generation AI ecosystem, featuring experts from academia, government, and industry. The panel was moderated by Sh. Subodh Sachan, Director of STPI headquarters, who emphasized that AI is transforming business operations and the workforce, requiring people to coexist with AI systems rather than simply use them.


The panelists defined next-generation AI talent as requiring critical thinking skills, foundational understanding of AI capabilities and limitations, and the ability to solve real-world problems rather than just learning libraries and algorithms. Dr. Sarabjot Singh Anand stressed the importance of questioning AI outputs and avoiding the trap of treating AI as an infallible oracle. Professor Dr. Jawar Singh emphasized the need for understanding hardware implementation alongside algorithms, noting the significant power consumption gap between AI processors and human brains.


From a telecommunications perspective, Dr. Devinder Singh explained that while current 5G networks use AI as an add-on, future 6G networks will have AI built into every component, requiring engineers to understand machine learning and distributed decision-making. The discussion highlighted the challenge of curriculum velocity in educational institutions, where traditional bureaucratic processes cannot keep pace with rapidly evolving AI technologies.


Industry representatives emphasized the need for practical problem-solving experience, production exposure, and domain-specific knowledge beyond theoretical foundations. The panelists agreed that bridging the academia-industry gap through mentorship programs and real-world projects is crucial for developing AI talent capable of creating solutions for India’s diverse sectors including healthcare, law, agriculture, and governance.


Keypoints

Major Discussion Points:

Defining Next-Generation AI Talent: The panelists discussed what constitutes next-gen AI professionals, emphasizing the need for critical thinking, domain expertise, ethical judgment, and real-world problem-solving capabilities rather than just technical skills. They stressed the importance of understanding AI’s limitations and avoiding over-reliance on AI as an infallible oracle.


Skills Gap and Training Challenges: A significant focus was placed on identifying current talent gaps in the AI ecosystem, particularly the disconnect between academic curriculum and industry needs. The discussion highlighted issues with outdated educational systems, the need for faster curriculum updates, and the importance of practical, hands-on learning over purely theoretical approaches.


Industry-Academia Collaboration: The conversation emphasized the critical need for stronger partnerships between educational institutions and industry to bridge the skills gap. This included discussions about mentorship programs, real-world project exposure, and the necessity of bringing industry practitioners into the educational process.


Sector-Specific AI Applications: The panelists explored how AI is transforming various sectors, from telecommunications (with 6G networks having AI built into every component) to agriculture, law, and healthcare. They emphasized that effective AI talent must understand domain-specific challenges and applications rather than just generic AI algorithms.


Infrastructure and Standards Development: Technical discussions covered the hardware requirements for AI systems, the importance of AI standards (particularly in telecommunications), and the need for robust, secure, and fair AI implementations. This included conversations about power efficiency, hardware security, and bias mitigation in AI systems.


Overall Purpose:

The discussion aimed to address the talent gap in India’s AI ecosystem by bringing together experts from academia, government, and industry to identify current challenges and propose solutions for developing next-generation AI professionals. The conversation was part of a broader national effort aligned with India’s AI initiatives and skill development programs.


Overall Tone:

The discussion maintained a professional and collaborative tone throughout, with participants showing mutual respect and building upon each other’s points. The atmosphere was constructive and solution-oriented, with speakers sharing practical insights and real-world experiences. While there was acknowledgment of significant challenges in the current system, the tone remained optimistic about India’s potential to develop world-class AI talent through coordinated efforts between various stakeholders.


Speakers

Speakers from the provided list:


Sh. Subodh Sachan – Director of STPI headquarters, 27 years in industry and government, works in technology space and startup ecosystem, moderator of the discussion


Dr. Sarabjot Singh Anand – Co-founder and Chief Data Scientist of TATRAS, Co-founder of Sabudh Foundation, has academic roots at Warwick and Ulster, works in AI talent development especially in Punjab region


Dr. Devinder Singh – Deputy Director General of TEC (Department of Telecommunications), expert in telecom standards formalization and standardization ecosystem


Professor Dr. Jawar Singh – Professor at Indian Institute of Technology Patna, Founder of Kuturna Labs, has experience with successful business exit, focuses on hardware aspects of AI


Professor Dr. Alok Pandey – Professor and Dean at OP Jindal University, three decades of experience in finance, governance, higher education, and financial technology, expert in AI applications


Kunal Gupta – Managing Director of Mount Talent Consulting, runs talent advisory and job search portal, works closely with industry on talent requirements


Vikash Srivastava – Chief Growth Strategist of Vincis IT Services Private Limited, 16+ years in enterprise consulting and cloud workforce upskilling, technology training collaborator with STPI


Audience – Vikram Tripathi from a village in Prayagraj, participating in upcoming panchayat elections


Additional speakers:


None identified beyond the provided speaker names list.


Full session report: Comprehensive analysis and detailed insights

This comprehensive panel discussion, moderated by Sh. Subodh Sachan, Director of STPI headquarters, brought together leading experts from academia, government, and industry to address the critical talent gaps in India’s rapidly evolving AI ecosystem. The discussion took place against the backdrop of significant national AI initiatives, including the 10 lakh AI skill drive and the Skill India digital programme, with STPI’s skill-up programme expanding to include 18 training partners across India and plans for multiple regional training hubs.


Defining Next-Generation AI Talent: Beyond Technical Skills

The panel began with a fundamental question about what constitutes next-generation AI talent, revealing perspectives that extend far beyond traditional technical competencies. Dr. Sarabjot Singh Anand, co-founder and chief data scientist of TATRAS, established a crucial framework by identifying two distinct groups in the AI ecosystem: those who will generate the next wave of AI innovations and those who must use AI to enhance their job performance. His most significant insight was highlighting the dangerous trend of “outsourcing thinking to AI,” emphasising that critical thinking and risk-taking abilities are more important than any specific technology.


Professor Dr. Jawar Singh from IIT Patna brought a hardware-centric perspective, arguing that next-generation AI professionals must understand not only algorithms but also their implementation on hardware systems. His comparison between AI processors consuming 500-700 watts versus the human brain’s 20-watt efficiency highlighted fundamental challenges in AI development. He emphasised that solid grounding in hardware, computer science, and engineering domains is essential, particularly given concerns about AI security and the need for trusted, reliable implementations.


Dr. Devinder Singh from the Department of Telecommunications provided insights into sector-specific requirements, explaining that while current 5G networks use AI as an add-on feature, future 6G networks will have AI built into every component. This evolution requires engineers to understand machine learning, distributed decision-making, and self-learning systems, representing a fundamental shift from human-controlled to AI-supervised operations.


Professor Dr. Alok Pandey from OP Jindal University introduced the concept of “T-shaped” professionals, combining deep domain specialisation with AI fluency. He provided specific examples of AI applications including M&A valuation and money laundering prevention, highlighting the need for sector-specific AI implementations across healthcare, law, education, and finance rather than generic solutions.


Kunal Gupta from Mount Talent Consulting described next-generation AI as “infrastructure of intelligence” rather than merely a tool. His vision of AI as a platform that multiplies human reasoning, research, creativity, and judgements while enabling vernacular language access represents a fundamental shift in how we conceptualise AI’s role in society.


Skills Gap Analysis and Educational Challenges

The discussion revealed significant gaps between current educational approaches and industry requirements. Dr. Sarabjot Singh Anand emphasised assessing problem-solving skills, self-learning agency, and curiosity rather than just technical library knowledge. His observation that students often focus on learning specific libraries without understanding foundational principles highlighted a critical educational challenge.


Vikash Srivastava from Vincis IT Services outlined three essential layers beyond traditional theoretical foundations: applied problem-solving with real datasets and domain-specific knowledge, production exposure showing how models move from development to scalable systems, and real-world deployment scenarios. His company uses AI to assess skill gaps and recommend adaptive learning paths, demonstrating practical applications of AI in talent development itself.


Kunal Gupta identified three critical dimensions of skill gaps: the challenge of problem definition (which he argued represents 50% of any solution), the sector-specific nature of skill requirements, and the urgent need for educational policy reform. He noted that India’s syllabuses are typically 20 years behind industry requirements, with curriculum update processes taking five to seven years.


Educational System Reform: Addressing Bureaucracy and Speed

Professor Dr. Alok Pandey argued for “de-bureaucratising” education, introducing the concept of “curriculum velocity”—the speed at which educational content must change to remain relevant. He noted that faculty members cannot simply be commanded to teach new courses when the technology landscape changes rapidly, and suggested that MOUs with Western countries could help accelerate curriculum development.


Professor Dr. Jawar Singh clarified that centrally funded technical institutions (CFTIs) already possess significant autonomy to update curricula rapidly, distinguishing them from state technical institutions that face greater bureaucratic constraints. This distinction highlighted that solutions must be differentiated based on institutional type and funding structure.


The panel emphasised that educational transformation requires industry-academia collaboration. Dr. Sarabjot Singh Anand stressed the need for industry professionals to provide mentorship, citing his Sabudh Foundation’s approach of centring training around “passion projects” with social impact, supported by mentors from industry.


Government Initiatives and Ecosystem Development

Sh. Subodh Sachan outlined significant government initiatives, including the STPI skill-up programme with 18 training partners across India and plans for multiple regional hubs. These initiatives align with broader programmes including the 10 lakh AI skill drive and the Skill India digital programme, demonstrating coordinated national effort in AI talent development.


The discussion revealed these initiatives are designed as collaborative ecosystems rather than traditional training programmes, emphasising “partners and collaborators” with aligned objectives to integrate government, academia, and industry resources.


Industry-Specific Implementation and Future Directions

Dr. Devinder Singh’s insights into telecommunications illustrated how AI is transforming entire industries. His explanation of the evolution from 5G to 6G networks, where AI will predict faults and take corrective action autonomously with intelligence distributed at the edge, demonstrated how professional roles will fundamentally change.


Professor Dr. Jawar Singh discussed the emergence of efficient models, briefly mentioning DeepSeek’s impact on market perceptions, and emphasised the importance of neuromorphic computing and brain-inspired approaches to bridge efficiency gaps between current processors and human cognition.


Rural Implementation and Inclusive Development

Audience participation highlighted the need for AI talent development beyond urban centres. Questions about AI implementation at the panchayat level and leveraging corporate CSR funds for rural AI initiatives demonstrated the scope of inclusive AI adoption challenges.


Kunal Gupta’s emphasis on vernacular language capabilities in AI platforms addresses inclusivity challenges, suggesting that AI could democratise technology access similar to how platforms enabled non-English speakers to become content creators.


Strategic Implications and Conclusions

The discussion revealed that next-generation AI talent development requires fundamental reconceptualisation of education, moving from knowledge transfer to capability building. The emphasis on critical thinking, problem-solving, and domain integration suggests that AI talent development focuses as much on developing human cognitive capabilities as on technical training.


The panel’s consensus on industry-academia collaboration, combined with government infrastructure support, points towards a coordinated ecosystem approach. The success of initiatives like the Sabud Foundation’s mentorship model and STPI’s collaborative training partnerships suggests effective solutions require breaking down traditional boundaries between sectors.


Most significantly, the discussion highlighted that as AI becomes more capable, uniquely human skills become more valuable. This paradox—that advancing AI makes human cognitive skills more important—represents a fundamental insight for talent development strategy, requiring focus on professionals who can work symbiotically with AI systems while maintaining critical oversight and creative problem-solving capabilities.


The panel ultimately demonstrated that addressing India’s AI talent gap requires coordinated action across educational reform, industry engagement, and government policy, representing not merely a technical challenge but a systemic transformation of how we approach education and professional development in an AI-driven future.


Session transcript: Complete transcript of the session
Sh. Subodh Sachan

Where do we see a talent gap? What is the requirement in terms of growing this whole ecosystem? This is the most exciting time in the industry, because AI is transforming everything. AI is transforming the way businesses are conducted. AI is transforming the whole workforce too, because it's not just about what you are able to do, but about co-existing with the whole AI ecosystem. My name is Subodh, and I'm a director at STPI headquarters. I've been part of the industry and part of the government for almost 27 years. Working in the space of technology, closely with the startup ecosystem and with academia, there has always been a gap in opportunity which we have witnessed.

And that's why this particular topic today is very close to my heart: how do we ensure the industry moves forward, and how do we ensure that AI as a technology can bring transformative changes overall? Today's discussion aligns very closely with national efforts. Under the overall IndiaAI theme, some of you will have seen there is a lot of activity around skilling: a 10 lakh AI skilling drive has already been initiated, and the Skill India Digital program, a new version of Skill India altogether, is under way. Within STPI, we have focused on a program called STPI skill-up, and I am happy to announce here that we will soon start multiple regional hubs for training, ensuring that training across technologies can happen. We have been joined by a lot of our training partners; the current ecosystem is around 18 training partners across India, and three of them are here with us today. As we move forward, we'll add more such training partners and collaborators.

We are calling them partners and collaborators because the aims and objectives are all aligned within the ecosystem of skilling up, right? STPI skill-up becomes that particular program. Let me introduce our speakers; I'm not taking much time. It's my privilege to introduce my first speaker, Professor Dr. Alok Pandey, a professor and dean at O.P. Jindal University, a very senior academic leader with almost three decades of experience across finance, governance, and higher education, and multiple implementations within the financial technology space. He also brings a great perspective on AI. So let me request Professor Dr. Alok Pandey to come on stage. Please welcome Professor Pandey with a big round of applause.

A limited audience, but make sure your applause fills the whole hall. I'd also like to introduce and welcome Professor Dr. Jawar Singh. Professor Dr. Jawar Singh is a professor at the Indian Institute of Technology Patna, and he is also the founder of Kuturna Labs. As we were chatting just now, he briefly told me about his successful exit, so he is not just a professor who teaches but one who practices the same through the implementation of his own ideas. We are proud to have you; Dr. Jawar Singh, please welcome him on the dais. Let me also introduce Dr. Devinder Singh, Deputy Director General of TEC, the Telecom Engineering Centre under the Department of Telecommunications in India. Dr.

Devinder Singh has spent many years in the standards formalization ecosystem, because, as you understand, the telecom space especially is governed by standards, and these standards are very critical: an interoperable ecosystem can only work if each and every device, each and every node, is standardized, and has to be standardized, right? So Dr. Devinder Singh represents the government from the Department of Telecommunications. Let me welcome, with a warm applause from the audience, Dr. Devinder Singh on the dais, please. I'm also honored to be joined by Dr. Sarabjot. Dr. Sarabjot Singh Anand is a co-founder and chief data scientist of TATRAS, and also the co-founder of the Sabud Foundation.

I have known Sarabjot Singh for almost, if I'm not wrong, seven or eight years now, and I've seen his passion in the space of AI. It's not just about what he wants to achieve through TATRAS Data; his work in growing AI talent is also well recognized in some regions, especially Punjab. So, Dr. Sarabjot, thank you for being here; I request and welcome you on the dais as a pioneer of data science. A big round of applause for him. He also has roots in academia at Warwick and Ulster, so he brings a very global perspective in this particular space.

Let me introduce our next two panelists on today's agenda. Vikash Srivastava is chief growth strategist of Vincis IT Services Private Limited; Vincis is one of our technology training collaborators and a partner of the STPI skill-up program. Vikash has 16-plus years in enterprise consulting, cloud, and workforce upskilling, and I think he has a great perspective to share on what the real reskilling requirement is today within the whole ecosystem of the AI workforce. So, with a big round of applause, please welcome Vikash Srivastava. Last but not least, let us also give a warm welcome to Kunal Gupta, managing director of Mount Talent Consulting. He has been doing talent advisory, he runs his own job-search portal, he works very closely with industry, and he has a clear perspective on what the industry requirement is and where the gap lies. So, with a round of applause, Kunal, welcome on the dais as well. Thank you, everyone. Let me switch my place as well, so it will be easier for us to start the whole discussion. Let me quickly start, and I will probably start from my immediate left, with Dr. Sarabjot.

Dr. Sarabjot, when we talk about next-gen AI as a space, next-gen AI as a whole, from the talent perspective, from the opportunity perspective, what is your view? Briefly, we'll touch upon each one of you on defining next-gen AI, so that the audience understands very clearly what next-gen AI really means. So, over to you.

Dr. Sarabjot Singh Anand

So to me, there are two camps here, right? One is the people who want to generate the next wave of AI, and then there are the ones who have to use AI to be more efficient in their jobs. Now, for both of them, I think what is very, very important is that they have to be critical thinkers, more than any technology as such, because there is a great move towards outsourcing your thinking to AI, and that's a problem. We need to recognize that AI is not perfect. We need to recognize that there are certain deficiencies in it, and therefore we have to question what we get from that AI. And we need to get people who can critically think about the problem they are trying to solve and then take risks (I think risk-taking is going to be another very, very important aspect), and who have a foundational understanding of what is possible today with AI and what is not possible today with AI.

Because if we don't recognize the deficiencies and start to regard AI as an oracle that always tells us the truth, we are going to get into trouble. So these are very, very important aspects, apart from, of course, the technology. Thank you.

Sh. Subodh Sachan

Dr. Devinder Singh, your perspective on next-gen AI technology, very briefly.

Dr. Devinder Singh

Hello. For next-gen AI, I feel a professional should have strong expertise in AI and the skills to solve real-world problems. He should adapt to new technologies also. He should be able to work in research, and he should be able to work in different sectors also. And above all, I feel he should be aware of the regulations, in his sector and in AI also. Thank you.

Sh. Subodh Sachan

Thank you. Yes.

Professor Dr. Jawar Singh

Yeah, hello. So to me, next-gen AI talent should not only be aware of the AI algorithms, but should be able to make customer-facing products or solutions. And they should understand not only the algorithms, but the way those algorithms are mapped onto the hardware. To me, a solid grounding in hardware, in computer science, or even in the engineering domain is a must, actually. Thank you.

Sh. Subodh Sachan

Yes. Professor Alok, sorry for pronouncing your name wrong earlier. Yes.

Professor Dr. Alok Pandey

Thank you. I think next-gen AI is largely a T-shaped thing. You need to be domain specialists, deep domain specialists. You need to be fluent in AI skills, whatever software, hardware, etc. you are looking at. And then you should be able to understand red teaming and containment. If you have these three, then probably we will be able to solve most of the problems we face in India today.

Sh. Subodh Sachan

Please, Kunal.

Kunal Gupta

I think your question is very important: what do we understand by next-gen AI? Next-gen AI is an infrastructure, an infrastructure for intelligence. Just as we currently have this infrastructure through which we are able to express our views and send them out to the world, the next generation of AI is an infrastructure meant to multiply our intelligence, our reasoning, our research, our values, our creativity, our judgments. And look at what the future holds for us. We are going to see a new wave of new materials; for a very long time we haven't seen any major new materials apart from the basic alloys we have been using, and process changes are going to come about in the next generation with the use of next-generation AI models. We talk about many kinds of differentiation in society, from the digital divide to this new AI divide, but at the same time it could help us reach an inclusive society through vernacular languages, multiplying and extrapolating the reach of what a common man can do. Earlier, people were dependent on languages like English, but with the expansion of next-gen AI platforms and tools, you can speak and give instructions to the computer in Hindi or your local language, and get access to data and knowledge.

Like I said, you could just build anything. We have seen this with a tool called TikTok, which started about 10-12 years back. It created a wave of influencers: a medium otherwise meant only for the English-speaking and the literate went on to the masses. So I think next-gen AI, like I said, is in one word an infrastructure of intelligence, multiplying our ability to think and make judgments in the future as well. Thank you.

Sh. Subodh Sachan

Very well said. It is infrastructure-level intelligence which can be, which has to be, created, and which defines next-gen AI. Carrying forward the same thought, I'll ask Vikash to share his opening remarks on next-gen AI.

Vikash Srivastava

Thank you. I think most of the important aspects have been covered by the panel. What I wanted to add is that, for me, next-gen talent combines three important things. First is technical mastery. Second is ethical judgment. And third is real-world problem-solving capability. So we need people who know where AI fits and where it doesn't, right? I think that is the most important thing I wanted to add. Thank you.

Sh. Subodh Sachan

I think it is important for the audience to understand that when we talk about next-gen AI, next-gen AI talent, and the next-gen AI talent gap, we got a clear perspective here: from critical thinking, to not just opening up the layers of AI but starting to think about the new ways and new layers in which AI technology is having an impact. Whether it is new materials, whether it is the infrastructure intelligence we talked about, or whether it is the foundational knowledge and foundational algorithms we talked about.

Next-gen AI talent gaps exist everywhere. Accordingly, I will ask Sarabjot ji now to speak specifically from his perspective, both at TATRAS and the Sabud Foundation. You have seen the whole AI evolution, you have seen the gaps that have been there, and you have already tried to fill those gaps. So my question to you is: when you talk about the evaluation of fresh AI talent, what is your approach? Because that approach will guide us in terms of how this whole space will grow. So, your opening remarks on that, here.

Dr. Sarabjot Singh Anand

Sure, thank you. So, when we look at talent today, what we assess is their problem-solving skills. We look at how keen they are on learning by themselves: have they taken control of their own agency in learning for the future? Because what's happening today, and I've seen this over the years, is that a lot of students are very focused on getting a job, so they are focused on learning libraries. Even in 2018, when we started the Sabud Foundation because we found there was a huge gap in AI skilling here in India, we found that until we got them to program a neural network, they felt they weren't doing anything. And now, of course, it's LLMs: everybody wants to learn LangChain, and that's about it. But they have to understand the foundations. If the foundations are weak, we are going to do interesting things but not amazing things. So the focus has to be on building a strong foundation, increasing their curiosity about what they are doing, and getting them to think about how they can be creative in the solutions they are engineering for their customers.

Now, at TATRAS, we work with startups in the US and develop their AI for them. To do that (somebody mentioned the domain being very important), what we are constantly training our folks to do is understand the problem from the customer's perspective. Right? It's not just about algorithms. When you create a solution, a successful solution is one that solves the problem; it doesn't matter what technology you use. And that is a key differentiation between the training that we provide and what is otherwise available in terms of just skilling on libraries.

Sh. Subodh Sachan

Thank you. And I think, for all the people sitting here, the most important part, as Sarabjot ji said, and any one of you can add to this, is curiosity. Curiosity adds that element of learning to the human mind, and when curiosity is there, creativity follows; once you have curiosity combined with creativity, only then can you understand the customer's problem and the customer ecosystem. And it's not just about the customer ecosystem from the perspective of making money. When you talk about social impact, the people who benefit from the technology might not be paying you directly, but you are creating a great amount of social value out there, so it becomes important from their perspective too. When we combine these three and map them onto AI, which is such a powerful technology right now, the solutions you see outside are just a few examples of the wonders that can be created when you bring these three elements together. So, in similar lines, I will ask Dr.

Devinder Singh ji, because he comes from the whole telecom space, and today we talk about AI-native telecom infrastructure. When we talk about AI-native, networks need not just be AI-ready, but also need to bring AI into their own operations. When I say AI readiness, it's all about the scale, the kind of compute, the kind of technology and infrastructure they need to create. But how do they approach this from a standards perspective? Because you see the future: you are looking into 6G as a standard. What is the role of AI in standards creation, and what is its role in the technology when standards are being defined?

So, from that angle, your thoughts on the same.

Dr. Devinder Singh

The present telecom engineers are very strong in networking, but the future network that is coming, 6G, will be more dependent on AI. In the present technology, 5G, AI is an add-on; but in 6G, each and every component has AI built into it. At present, planning is done in a static way: components are selected and then the effect is seen. But 6G will be self-learning, so engineers will be required to know machine learning. In the present case, whenever there is some fault, an alarm is generated and an engineer is supposed to take corrective action. But in 6G, AI will predict what kind of fault can come and will take corrective action on its own.

And at present, most of the decisions are taken at the central level only. But in 6G, the intelligence will be distributed at the edge also, so decisions will be taken at a distributed level. So the engineer must be able to plan everything considering that decisions will be taken in a distributed manner. As far as the standards are concerned, standards for 6G are being finalized. They are not final, but it is already decided that each and every component will have AI. In addition, you were talking about standards: in TEC, the Telecom Engineering Centre, we have already published some standards on AI. So telecom engineers should also be aware of the standards they are supposed to follow for implementing AI.

At present, telecom engineers are using AI, but in future, telecom engineers will design, operate, and use it. Most of the decisions will be taken by AI; the human will only supervise. Thank you.
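The autonomous fault handling described here can be sketched as a small decision loop: an edge node scores its own telemetry, predicts a likely fault, and acts locally instead of waiting for a central controller. Everything in this sketch (the node names, risk weights, and thresholds) is an illustrative assumption, not part of any published 6G standard.

```python
# Hypothetical sketch of predictive fault handling at the network edge:
# score telemetry, predict a likely fault, and act locally.
from dataclasses import dataclass

@dataclass
class Telemetry:
    node_id: str
    packet_loss: float   # fraction of packets lost in the last window
    temperature: float   # degrees Celsius

def fault_risk(t: Telemetry) -> float:
    """Toy risk score: weighted combination of packet loss and overheating."""
    loss_term = min(t.packet_loss / 0.05, 1.0)            # saturate at 5% loss
    heat_term = min(max(t.temperature - 70.0, 0.0) / 20.0, 1.0)
    return 0.7 * loss_term + 0.3 * heat_term

def edge_controller(t: Telemetry, threshold: float = 0.6) -> str:
    """Decide at the edge instead of escalating to a central controller."""
    risk = fault_risk(t)
    if risk >= threshold:
        return f"{t.node_id}: rerouting traffic (predicted fault, risk={risk:.2f})"
    return f"{t.node_id}: nominal (risk={risk:.2f})"

print(edge_controller(Telemetry("bts-17", packet_loss=0.06, temperature=85)))
print(edge_controller(Telemetry("bts-22", packet_loss=0.001, temperature=40)))
```

In a real network the risk score would come from a trained model rather than fixed weights, but the structure (local scoring, local corrective action, human supervision above) matches the shift Dr. Singh describes.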

Sh. Subodh Sachan

Thank you. When we talk about the telecom space, two or three critical things are of interest to the audience, as Devinder ji explained: agentic AI, and agentic AI taking care of the operations part. From a skills perspective, it is important that when you look into the agentic AI ecosystem, you go deeper into the particular technology or sector, because each industry poses new challenges and problems even for the agentic AI ecosystem. And when you talk about infrastructure readiness, the telecom sector is one such sector; I'll come back to you on the perspective of how the telecom sector is creating robustness.

Right. Next, I think I'll touch upon Professor Jawar: when you look into the layer of the hardware and below, right?

Because when we talk about AI, and we talk about the six layers as they have been spoken about across the spectrum, the most promising and most important layer is not just applications but also the hardware, which is powering up the whole of AI, in terms of both need and speed.

Professor Dr. Jawar Singh

All right. This is quite interesting, because very rarely do people talk about how those algorithms, those AI models, actually run. Honestly, these models are very expensive: expensive not so much in terms of cost, but in terms of power, although cost is obviously associated with it. If I take a simple example: a fairly basic NVIDIA processor consumes around 500 to 700 watts. Meanwhile, our human brain is also a very beautiful processor: it can compute a lot, and it consumes just 20 watts of power.

So you can see there is a huge gap between the processing capabilities of the processors that we have and the cognitive processing that we all have. That gap needs to be bridged, and there is a lot of research going on in this domain, which we usually call neuromorphic computing or brain-inspired computing, where these algorithms can be mapped onto hardware in an efficient manner. Another example I can give you is DeepSeek. When it first surfaced in the market, NVIDIA's stock slumped quite severely, and the reason was that their model was quite efficient; people thought, okay, we may do the same thing in a more efficient way. So we need people who think not only from the algorithm perspective but also from the hardware perspective. I will add here one more term, hardware security, because AI can be weaponized and can also be used for neutralization purposes.

So the hardware plays a very crucial role. Algorithms are okay, but we need people who understand not just the algorithm but all the way down to the hardware implementation: how your implementation is secure, trusted, and reliable. I hope that answers it. Thank you.

Sh. Subodh Sachan

You touched upon the element of security, and when AI comes into play, it is not just the security of the algorithms: the other important things that have popped up are the bias, robustness, and fairness of the algorithms. I am sure some of you will talk about that as a gap from a talent perspective: how do we skill and reskill people who can fill these gaps across the layers, so that more of them come into the ecosystem? With that, I will ask Dr. Alok. When you look at this from a university and academia perspective, today we talk about population-scale AI implementation, and when we talk about population-scale AI implementation, the whole critical-thinking part also needs to change; hence academia needs to gear up to create that kind of curiosity and learning from the students' perspective. So what is your take on the gap you see between industry and academia?

How are you gearing up your students from the perspective of AI at scale, and from that particular element?

Professor Dr. Alok Pandey

We have large contracts. Say I have to do an M&A valuation, an M&A due diligence, and the Competition Commission has asked me whether I should go for this merger or acquisition or not. I am a lawyer; how do I do it? I can use generative AI software for that. I can do money-laundering prevention, not just spam prevention (though there is very effective spam prevention by Airtel and others): I can do money-laundering calculations and identify which transactions work in which manner through generative AI. So we need to develop products along these lines. The second thing I would say is the safety and security of these products: how are we going to look at safe usage?

Now, there's a term which has come up: the coming wave. Mustafa Suleyman has written a book, The Coming Wave, and everybody uses the phrase; this is the coming wave, and the wave is going to drown all of us if safety and security are not there. Every young person who uses AI needs to understand what red teaming is and what containment is: I should be able to kill my technology if it doesn't work in my favor. Right. And finally, domain integration: AI in healthcare, AI in law, AI in education, AI in finance. All these levels basically need to be understood by educational institutions. If you ask me another question, how do we scale it up, then of course I'll speak on that later.

But I'll tell you that we need to really work out an infrastructure. We need to work on academic strength. We need to have a large number of trained faculty members. We need to have MOUs with Western countries: the major AI companies are based in either China, Europe, or America, and the universities there are generating a lot of trained resources, and Indian universities need to move forward in that direction. So I basically feel that, yes, there is a huge gap today, and we need to really answer these gaps through viable funding not just from government but also from industry.

Sh. Subodh Sachan

I tend to partly agree. The length and breadth of the AI ecosystem has changed dramatically everywhere. But when we look at Indian talent, I strongly believe, because I have been in this industry for very long now, and the kind of energy we are seeing in this hall and in the conferences happening on the other side shows it, that a huge amount of talent has popped up, and they are generating very good solutions. Today, from a solution-producer perspective, India is not just working at the application layer or the agentic layer; it is also looking into the foundational layer, and that's why we have seen the launch of the recent LLMs as well. When we look at, say, the launch of the Sarvam LLM or other LLMs, it's very clear people are seeing that there is a lot of data available in our country, and this data needs to be understood. As you talked about, take just one particular sector, law and justice. There is one company here, Lex Leges, and I was interacting with them yesterday. They have understood this problem and created their model for that particular domain: not a large language model in the generic sense, but they have approached the problem with the same mentality as an LLM, and hence they have been able to address exactly what you described as the problem, how you create solutions for it. It works on Indian data, it works on Indian contract law, it works on Indian past judgments. So that is the need of the hour. Whether you have an entrepreneurial mindset, from the perspective of the audience sitting here, or want to get into this space as part of the workforce, you need to clearly have an idea about each and every domain. Whether it is health, as you talked about, or law and justice, each has its own set of challenges and problems, and when there are challenges and problems, with the right skill and right talent you can actually approach them and be very successful. We see this as a leapfrog moment for each one of you, from the industry perspective also.

Taking that thought forward, I'll ask Kunal. Kunal, you have been talking about skill gaps, especially working with students and working professionals. From your platform's perspective, from whatever you are seeing on your own job portal and in job placements, what is your take on the most commonly seen abilities that are required, which each of them should look at in the short term? What are the skills they need to fill in? Whether it is learning to coexist with the LLMs out there, learning to do the coding, whether in AGI, as Professor Alok talked about, or creating new machine-learning algorithms: what do you see as the typical problem in the short term, where talent has to be ready?

Kunal Gupta

I see the problem as threefold when it comes to the skill gap, specifically in a dynamic country like India, wherein many generations coexist across the country: a generation that is far ahead into the future, and a generation that is far behind in terms of development, capability, and education. The biggest skill gap I see right now is application, and more importantly, how we define a problem. Out of whatever ecosystem we have built, we have this mentality of simply copying others: this is the trend, so we must go for this trend, without really understanding how to define the problem first.

Defining the problem is about 50% of the solution in itself, in whatever sphere the person is in. As Dr. Saab said about different usage in different fields, whether it is healthcare, law, or agriculture, which caters to such a huge population in our country. Who would have thought that hydroponics could produce such huge results without soil, with no dependency on weather, where you can create your own environment for absolutely green vegetation, in the best of atmospheres, without germs and without the application of pesticides? So, coming back to your question, the skill gap is going to be defined sector by sector.

Different sectors are going to have different gaps at different application levels. When it comes to industry, what we are providing as a company is an employability intelligence layer. We define the skill gap on the basis of what kinds of jobs are coming from the market and, against those jobs, what the current skill set of the candidates is; we do a scientific gap analysis of what is missing. It is not just a nice applicant tracking system; we run a recommendation algorithm using a lot of AI. In my view, the aim of AI-based skill gap analysis is not to exclude or reject people. The aim is to show them: this is what is missing, this is what has to be developed. It is not rocket science that cannot be developed; you take a course of one month, three months, or six months, or you develop it while working in another role on the way to your ambitious job role. It takes time; nothing is built in a day. But a bigger gap I see right now, which is going to put huge pressure on educational setups, whether at the university level or the school level, is the curriculum. We keep saying that India's syllabus is not aligned to industry; it is about 20 years old, and we do not update our syllabuses. It takes six committees and five to seven years to come up with new curriculums.
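The gap analysis Kunal describes, matching job requirements against a candidate's current skills and recommending what is missing, can be sketched as a simple set difference. The function names, skills, and course catalogue below are hypothetical, a minimal illustration rather than the company's actual recommendation algorithm.

```python
# Minimal sketch of a skill-gap analysis: compare the skills a job
# posting asks for against a candidate's current skills, then map each
# missing skill to an available course. All data here is illustrative.

def skill_gap(job_skills, candidate_skills):
    """Return, sorted, the skills the job requires that the candidate lacks."""
    return sorted(set(job_skills) - set(candidate_skills))

def recommend_courses(gap, catalogue):
    """Map each missing skill to a course, where the catalogue has one."""
    return {skill: catalogue[skill] for skill in gap if skill in catalogue}

job = ["python", "sql", "mlops", "statistics"]
candidate = ["python", "statistics"]
catalogue = {
    "sql": "SQL basics (1 month)",
    "mlops": "MLOps foundations (3 months)",
}

gap = skill_gap(job, candidate)       # the candidate's missing skills
plan = recommend_courses(gap, catalogue)
```

In practice such a layer would also rank the gaps by market demand and feed an adaptive learning path, but the core comparison is this set difference between required and current skills.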

By the time a new curriculum is implemented, it has already become obsolete from an industry perspective. The speed of AI growth we have seen in the last six months is going to put maximum pressure on the country's policymakers, specifically those responsible for core foundational education, higher skilling education, and, more importantly, industry skilling, which is needed to ensure people understand why productivity matters and how it is achieved. Students need to understand that industry needs output. We need production, we need results; industry cannot always bridge the gap. And in India, I will have to say it, whether it is an MSME or a large enterprise, everybody has done their bit in scaling up those whom they select.

And the success we see at conferences like these is largely people who have grown through industry and been scaled up by it. Colleges will need to ensure that AI education, the application of AI, is for all. Output will increase. Output will lead to more analysis of how to improve production. Production leads to more research, research leads to more efficiency in production, and it is a loop. Currently, application is going to lead to higher output: in what an engineer can do in eight hours of work.

In what a company can do in terms of per-year revenues, what models can do, what processes can do. It is a continuously running cycle. We cannot sit back and relax right now, specifically in this changing world.

Professor Dr. Alok Pandey

I'll just add to what Kunal has said. We need to de-bureaucratize education today to a great extent. In fact, we brought in this concept of the institution of eminence, and I am happy to be part of one, where we can create our own curriculum. Curriculum velocity is so high that you cannot simply command faculty members to teach a particular course, especially in technology, and especially when you are integrating with a particular domain, where faculty have to work with other specialists, identify needs, and those needs change frequently. And it is not just that AI technologies have changed; the consumer and the user have started demanding change.

For example, look at crop insurance. Crop insurance basically means I should have satellite pictures and an understanding of whether a crop failed or not, and this is done best using AI. If I need to train agriculture college students, who study in large agronomy institutions, I need quick delivery of the curriculum. Sadly, we do not have that; we do not have expertise in those areas. So if we de-bureaucratize the curriculum and allow more autonomy to institutions that are into technology, or at least technology applications, we will have a much bigger national good at hand.

Sh. Subodh Sachan

And I think the start has already happened with the NEP, if I am not wrong. The whole focus of the National Education Policy and the initiatives around it has been giving more autonomy and speed to defining the curriculum. So I see this as a problem that existed, and a lot of work has already happened on it. At a global level, from your Warwick experience, you would have seen these changes there. Do you see them coming to India now at a similar speed?

Dr. Sarabjot Singh Anand

Unfortunately, I don't. At Warwick, we actually have the Jaguar Land Rover research labs on campus, and we were interacting with them. Even 14 years ago, we were looking at tracking the cognitive load on a driver as they drove a vehicle, to understand whether we needed to take some preventive action before they caused an accident. Now, of course, we are saying we don't need a driver at all. So times are changing very quickly. When we started Sabud, we realized that the curriculum is falling short and academics are not equipped to deal with the change that is happening. Even HR folks, when we look at it from an industry perspective, are not evolving quickly enough to evaluate candidates the right way.

So what we did at Sabud was make the centerpiece of our training what we call a passion project. We are training students in AI, machine learning, and technology, but we are getting them to think about how to solve a problem of social impact. And then we are giving them mentors from Tatras and from other organizations that are actually creating AI solutions for the global north, as they say. So now the students are getting mentorship. And the key thing we are missing today, which is shocking for a country our size: we have companies full of technologists who have no choice but to keep up with technology innovation.

At the same time, these people have to be encouraged to give back more. If every person were evaluated, or valued, on how much they give back to others, then we could pair students with mentors in industry and give them skills that no curriculum can: problem-solving skills that exist outside of academia. Of course, academia has great depth, and therefore it has to be part of this. As Subodhji was saying, we have to bring academia and industry closer together and solve this problem. It is not going to happen from one side alone.

Sh. Subodh Sachan

Thank you for sharing your thoughts on this. I'll take the professor first and then Vikash. Please go ahead.

Professor Dr. Jawar Singh

Specifically on this, from the curriculum point of view at least, I just want to add a small caveat about curriculum updates: the centrally funded technical institutions are not a problem at all. They are free of all those constraints. If I want to start a new course from the next semester, I am free to run it. So, as far as I know, CFTIs are not bound by such restrictions on curriculum updates; they are quite okay.

Professor Dr. Alok Pandey

I was not talking only about CFTIs, because India is 1.4 billion people, and the majority are in tier 2. My basic concern is the state technical institutions. The talent that comes from state technological universities is among the best, but these students need scholarships, they need multilingual support, and their teachers also need training. There is a very large layer of state institutions, because education is funded by both the centre and the states, and we are in a quagmire where new regulators are coming in, old regulators are falling away, and we need to figure out how to do this. But my basic point was not about CFTIs; the centrally funded institutions are much better off.

But still, consider the amount of manpower you need to develop AGI-like systems. It is yet to be seen; give it five years and we will see whether this hypothesis works, whether we are able to generate something in artificial general intelligence. All of us will have to contribute to this transformational change, from academia to industry to policymakers like us. It becomes important: speed for its own sake is not what is required, but to develop the solutions, speed is required, in the sense of how solutions get developed by doing the right things.

Sh. Subodh Sachan

And Vikash, you come from the AI learning space, so my question to you is this: you have seen the conventional way of doing AI education in the past and how it has changed today. Are we still looking at the conventional classroom mechanism for AI learning, or, as Sarab said, is it not about learning alone but about practicing while learning? What is your input on that?

Vikash Srivastava

In my view, conventional or traditional training focuses heavily on theory, mathematics, and model architecture. Those foundations are important, but industry readiness requires three additional layers. First is applied problem solving: learners must work on real datasets, focus on domain-specific knowledge, and work with deployment scenarios. Second is production exposure: knowing how your model moves from a notebook environment to real, scalable, secure systems, and how production happens. And the last is...

Sh. Subodh Sachan

So when we talk about classroom learning and learning the mathematics, how do you see new tools and technologies being used for training? For example, are there any examples you can quote? We have seen that students can now go beyond the typical classroom learning; what other tools and technologies are they being exposed to, so that the speed of learning becomes faster?

Vikash Srivastava

Basically, in our sector we are using AI to assess skill gaps. There are now tools that, based on a participant's profile, can assess their learning gaps and recommend adaptive learning, which eventually helps improve employability outcomes. That is how AI is helping today.

Sh. Subodh Sachan

Great. We are almost towards the end, and we have one more set of questions, but to keep this interactive: does anybody in the audience have a quick question? Can somebody please bring them the mic? I wanted to go one more round of questions, but since the audience is limited and I don't want you to get bored with what we are speaking, can anybody ask one or two questions?

Audience

Thank you. Hello everyone, namaskar. [He is asked to speak in Hindi and give his name.] My name is Vikram Tripathi. I am from a village in Prayagraj, and the upcoming elections are the panchayat elections, in which I am going to participate. There is a district panchayat election; a district panchayat member represents 25 villages. So if I win the election, then in the first year, which are the three sectors where I should use the AI tools or software that are available? And secondly, is it possible that private companies, CSR funds... Thank you.

Dr. Devinder Singh

One bias index is produced depending on metrics. For one bias, I can use a number of metrics; the results of all the metrics are combined to find the bias index for one particular parameter. A system can have bias due to many things, and the different bias indexes are combined to find one fairness index. The fairness index ranges from 0 to 1. If it is 1, the system is considered fair; if it is 0, it cannot be used. In practice, the fairness index will be somewhere between 0 and 1. Then it also depends on the user, on how much fairness he wants in the system. If the system is used to suggest which song you would like to hear, then some bias may be acceptable.

If the system is supposed to identify whether a soldier is the enemy or our own, then no bias can be accepted. So the framework and the metrics we have suggested can be used by the deployer, and also by the developers: the engineers involved in building those systems can test whether their models are fair or not. And it can be used by the regulators as well. The regulator, the government, may say that for such a sector the system should be tested and should have at least this much fairness. Similar to fairness, we have one standard for robustness too, which can be used to check whether the system gives consistent results in different situations.
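The scheme Dr. Singh outlines, several metric results combined into a bias index per parameter and the per-parameter bias indexes combined into a single fairness index in [0, 1], can be sketched as below. Simple averaging and the example thresholds are assumptions for illustration only; the TEC standard defines the actual aggregation rules and acceptable levels.

```python
# Hedged sketch of a bias/fairness index scheme. Each metric score is
# assumed to lie in [0, 1], with 1 meaning no bias detected. How scores
# are actually combined is defined by the standard; plain averaging here
# is an illustrative assumption.

def bias_index(metric_scores):
    """Combine several metric results into one bias index for a parameter."""
    return sum(metric_scores) / len(metric_scores)

def fairness_index(bias_indexes):
    """Combine per-parameter bias indexes into one overall fairness index."""
    return sum(bias_indexes) / len(bias_indexes)

def acceptable(fairness, threshold):
    """Deployer- or regulator-chosen threshold: strict for critical uses."""
    return fairness >= threshold

# Hypothetical parameters: three metrics for one bias, two for another.
gender_bias = bias_index([0.9, 0.8, 1.0])
region_bias = bias_index([0.7, 0.9])
f = fairness_index([gender_bias, region_bias])

# The same score can pass a lenient use case and fail a critical one:
# a song recommender might tolerate f >= 0.6, while a friend-or-foe
# identification system would demand something near 1.0.
ok_for_songs = acceptable(f, 0.6)
ok_for_defence = acceptable(f, 0.99)
```

The point of keeping the threshold separate from the index is exactly what Dr. Singh describes: the same fairness score is interpreted differently by the deployer, the developer, and the regulator depending on the use case.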

Sh. Subodh Sachan

Great, and I am sure these standards are available in the public domain; they are not at draft stages.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Sarabjot Singh Anand
3 arguments, 158 words per minute, 892 words, 336 seconds
Argument 1
Critical thinking and risk-taking are essential, with foundational understanding of AI possibilities and limitations
EXPLANATION
Dr. Anand argues that next-gen AI talent must be critical thinkers who can question AI outputs rather than blindly accepting them. He emphasizes the importance of understanding AI’s deficiencies and limitations to avoid treating AI as an infallible oracle.
EVIDENCE
He mentions the risk of outsourcing thinking to AI and the need to recognize that AI is not perfect, highlighting the importance of questioning AI outputs.
MAJOR DISCUSSION POINT
Defining Next-Generation AI and Required Capabilities
AGREED WITH
Professor Dr. Jawar Singh, Professor Dr. Alok Pandey
DISAGREED WITH
Professor Dr. Jawar Singh
Argument 2
Focus on problem-solving skills, self-learning agency, strong foundations, curiosity, and customer-centric solution thinking
EXPLANATION
When evaluating AI talent, Dr. Anand emphasizes assessing problem-solving abilities and self-directed learning capabilities. He stresses the importance of strong foundational knowledge rather than just learning libraries, and understanding customer problems to create successful solutions.
EVIDENCE
He provides examples from his experience at Sabud Foundation where students focused on programming neural networks, and at Tatras where they work with US startups, emphasizing the need to understand problems from the customer’s perspective.
MAJOR DISCUSSION POINT
AI Talent Evaluation and Skills Assessment
AGREED WITH
Professor Dr. Alok Pandey, Vikash Srivastava
Argument 3
Industry-academia collaboration through mentorship programs is essential for bridging the skills gap
EXPLANATION
Dr. Anand advocates for closer collaboration between industry and academia, suggesting that companies with technologists should provide mentorship to students. He believes this pairing can give students problem-solving skills that no curriculum alone can provide.
EVIDENCE
He describes Sabud Foundation’s approach of using passion projects with mentors from Tataras and other organizations, and mentions his experience at Warwick with Jaguar Land Rover research labs on campus.
MAJOR DISCUSSION POINT
Educational System Transformation and Curriculum Challenges
AGREED WITH
Professor Dr. Alok Pandey
Dr. Devinder Singh
4 arguments, 155 words per minute, 661 words, 255 seconds
Argument 1
Strong AI expertise combined with real-world problem-solving skills and regulatory awareness
EXPLANATION
Dr. Singh defines next-gen AI professionals as those who possess deep AI expertise while being able to solve practical problems and adapt to new technologies. He emphasizes the importance of understanding regulations in both the sector and AI domain.
EVIDENCE
He mentions the need for professionals to work in research, different sectors, and be aware of regulations.
MAJOR DISCUSSION POINT
Defining Next-Generation AI and Required Capabilities
Argument 2
6G networks will have AI built into every component with self-learning capabilities and distributed intelligence at the edge
EXPLANATION
Dr. Singh explains that unlike 5G where AI is an add-on, 6G will have AI integrated into every component. The networks will feature self-learning capabilities, predictive fault management, and distributed decision-making at the edge rather than centralized control.
EVIDENCE
He contrasts current 5G technology where AI is added on versus 6G where each component will have AI built-in, mentions self-learning planning versus static planning, and describes predictive fault correction versus reactive alarm-based responses.
MAJOR DISCUSSION POINT
Industry-Specific AI Implementation and Standards
Argument 3
Bias and fairness indices ranging from 0 to 1 help determine system acceptability across different use cases
EXPLANATION
Dr. Singh describes a framework for measuring AI system fairness using indices that range from 0 to 1, where 1 represents complete fairness. The acceptable level of bias depends on the application, with critical applications like military identification requiring no bias while entertainment recommendations may tolerate some bias.
EVIDENCE
He provides specific examples: song recommendation systems may accept some bias, but military friend-or-foe identification systems cannot accept any bias. He mentions that these frameworks can be used by developers, deployers, and regulators.
MAJOR DISCUSSION POINT
AI Standards and Fairness Implementation
Argument 4
Standards for AI fairness and robustness are available in public domain for developers, deployers, and regulators
EXPLANATION
Dr. Singh confirms that TEC has published standards for AI implementation that are publicly available. These standards can be used by various stakeholders including system developers, deployers, and government regulators to ensure proper AI implementation.
EVIDENCE
He mentions that TEC (Telecom Engineering Center) has published AI standards and that they have standards for both fairness and robustness that provide consistent results across different situations.
MAJOR DISCUSSION POINT
AI Standards and Fairness Implementation
Professor Dr. Alok Pandey
3 arguments, 172 words per minute, 891 words, 310 seconds
Argument 1
Deep domain specialization with AI fluency and understanding of red teaming and containment
EXPLANATION
Professor Pandey describes next-gen AI as requiring a T-shaped skill set combining deep domain expertise with AI technical skills and safety knowledge. He emphasizes the importance of understanding red teaming and containment to ensure safe AI usage.
EVIDENCE
He mentions the need to be able to ‘kill technology if it doesn’t work in your favor’ and references Mustafa Suleyman’s book ‘The Coming Wave’ about AI safety concerns.
MAJOR DISCUSSION POINT
Defining Next-Generation AI and Required Capabilities
AGREED WITH
Dr. Sarabjot Singh Anand, Professor Dr. Jawar Singh
Argument 2
Domain integration across healthcare, law, education, and finance requires safety, security, and population-scale implementation
EXPLANATION
Professor Pandey argues for developing AI products across various domains while maintaining focus on safety and security. He emphasizes the need for population-scale AI implementation with proper safety measures and domain-specific integration.
EVIDENCE
He provides specific examples like M&A valuation, due diligence for competition commission, money laundering prevention, and mentions the need for infrastructure, academic strength, trained faculty, and MOUs with Western countries.
MAJOR DISCUSSION POINT
Industry-Specific AI Implementation and Standards
AGREED WITH
Dr. Sarabjot Singh Anand
Argument 3
De-bureaucratization of education and increased institutional autonomy are needed for rapid curriculum updates
EXPLANATION
Professor Pandey advocates for reducing bureaucratic constraints in education to enable faster curriculum development. He argues that technology education requires flexibility to respond quickly to changing demands and integrate with domain specialists.
EVIDENCE
He mentions the concept of ‘institution of eminence’ which allows creating custom curriculum, discusses ‘curriculum velocity’ being too high for traditional command structures, and provides the example of crop insurance requiring satellite imagery and AI integration for agriculture students.
MAJOR DISCUSSION POINT
Educational System Transformation and Curriculum Challenges
AGREED WITH
Kunal Gupta, Vikash Srivastava
DISAGREED WITH
Professor Dr. Jawar Singh
Professor Dr. Jawar Singh
3 arguments, 141 words per minute, 539 words, 227 seconds
Argument 1
Hardware understanding is crucial alongside algorithms for secure, trusted, and reliable implementations
EXPLANATION
Professor Singh emphasizes that next-gen AI professionals must understand not just algorithms but how they map onto hardware. He stresses the importance of hardware security, efficiency, and reliability in AI implementations.
EVIDENCE
He mentions the need for solid grounding in hardware, computer science, and engineering domains, and highlights that AI can be weaponized, requiring secure and trusted hardware implementations.
MAJOR DISCUSSION POINT
Defining Next-Generation AI and Required Capabilities
AGREED WITH
Dr. Sarabjot Singh Anand, Professor Dr. Alok Pandey
DISAGREED WITH
Dr. Sarabjot Singh Anand
Argument 2
Hardware efficiency and security are critical, with significant power consumption gaps between current processors and human brain capabilities
EXPLANATION
Professor Singh highlights the massive energy inefficiency of current AI processors compared to human brains and the need for more efficient hardware solutions. He also emphasizes hardware security concerns as AI can be weaponized.
EVIDENCE
He provides specific examples: NVIDIA processors consume 500-700 watts while human brain consumes only 20 watts, mentions DeepSeek’s efficiency causing NVIDIA stock to drop, and discusses neuromorphic computing research to bridge this gap.
MAJOR DISCUSSION POINT
Industry-Specific AI Implementation and Standards
Argument 3
Centrally funded technical institutions have flexibility, but state institutions face greater challenges with curriculum updates
EXPLANATION
Professor Singh clarifies that centrally funded technical institutions (CFTIs) actually have significant freedom to update curricula and start new courses quickly. He suggests that curriculum update restrictions are not a major problem for these institutions.
EVIDENCE
He states that CFTIs are free to start new courses from the next semester and are not bound by the curriculum restrictions that affect other institutions.
MAJOR DISCUSSION POINT
Educational System Transformation and Curriculum Challenges
DISAGREED WITH
Professor Dr. Alok Pandey
Kunal Gupta
3 arguments, 181 words per minute, 1228 words, 406 seconds
Argument 1
Next-gen AI serves as infrastructure for intelligence, multiplying human reasoning and creativity while enabling vernacular language access
EXPLANATION
Kunal describes next-gen AI as an infrastructure that amplifies human cognitive abilities including reasoning, research, creativity, and judgment. He emphasizes how AI can bridge language barriers by enabling interaction in local vernacular languages, making technology accessible to non-English speakers.
EVIDENCE
He provides the example of TikTok creating a wave of influencers by making content creation accessible beyond English-literate users, and mentions how vernacular language support can help common people access data and knowledge previously limited by language barriers.
MAJOR DISCUSSION POINT
Defining Next-Generation AI and Required Capabilities
Argument 2
Scientific gap analysis based on market job requirements versus current candidate skill sets, with emphasis on defining problems correctly
EXPLANATION
Kunal advocates for a systematic approach to identifying skill gaps by analyzing market job requirements against candidate capabilities. He emphasizes that defining the problem correctly is 50% of achieving the solution, and stresses the importance of sector-specific skill development.
EVIDENCE
He mentions their employability intelligence layer, AI-powered recommendation algorithms, and provides examples from agriculture (hydroponics) and different sectors having specific application-level gaps.
MAJOR DISCUSSION POINT
AI Talent Evaluation and Skills Assessment
Argument 3
Current syllabuses are outdated and policy makers face pressure to update educational frameworks rapidly
EXPLANATION
Kunal argues that India’s educational syllabuses are approximately 20 years outdated and take too long to update through committee processes. He believes the rapid pace of AI development will force policy makers to accelerate educational reforms.
EVIDENCE
He mentions that it takes six committees and 5-7 years to develop new curriculums, and by implementation time they’re already obsolete. He notes that the speed of AI growth in the last six months will pressure educational policy makers.
MAJOR DISCUSSION POINT
Educational System Transformation and Curriculum Challenges
AGREED WITH
Professor Dr. Alok Pandey, Vikash Srivastava
Vikash Srivastava
3 arguments, 132 words per minute, 262 words, 118 seconds
Argument 1
Technical mastery, ethical judgment, and real-world problem-solving capabilities are the three key components
EXPLANATION
Vikash defines next-gen AI talent as requiring a combination of technical expertise, ethical decision-making abilities, and practical problem-solving skills. He emphasizes the importance of understanding where AI fits and where it doesn’t in real-world applications.
MAJOR DISCUSSION POINT
Defining Next-Generation AI and Required Capabilities
Argument 2
AI-powered tools can assess skill gaps and recommend adaptive learning to improve employability outcomes
EXPLANATION
Vikash describes how AI tools can analyze participant profiles to identify learning gaps and provide personalized learning recommendations. This approach helps improve employability outcomes by targeting specific skill deficiencies.
EVIDENCE
He mentions tools that assess skill gaps based on participant profiles and recommend adaptive learning paths.
MAJOR DISCUSSION POINT
AI Talent Evaluation and Skills Assessment
Argument 3
Educational systems must shift from theory-focused to applied problem-solving with production exposure
EXPLANATION
Vikash argues that traditional training focuses too heavily on theory and mathematics, while industry readiness requires additional layers including applied problem-solving with real datasets, domain-specific knowledge, and understanding of deployment scenarios and production systems.
EVIDENCE
He mentions the need for learners to work on real datasets, focus on domain-specific knowledge, work with deployment scenarios, and understand how models move from notebook environments to scalable, secure systems.
MAJOR DISCUSSION POINT
Educational System Transformation and Curriculum Challenges
AGREED WITH
Professor Dr. Alok Pandey, Kunal Gupta
Sh. Subodh Sachan
2 arguments, 177 words per minute, 3433 words, 1160 seconds
Argument 1
STPI skill-up program with 18 training partners across India and upcoming regional training hubs
EXPLANATION
Subodh announces the STPI skill-up program which currently has 18 training partners across India and plans to establish multiple regional hubs for technology training. The program aims to create a collaborative ecosystem for skilling and reskilling in AI and related technologies.
EVIDENCE
He mentions that three training partners are present at the event and that they are expanding the network of training partners and collaborators.
MAJOR DISCUSSION POINT
Government Initiatives and Ecosystem Development
Argument 2
National efforts align with the IndiaAI theme, including the 10 lakh AI skill drive and the Skill India digital program
EXPLANATION
Subodh describes how the discussion aligns with national AI initiatives, including a large-scale AI skilling drive targeting 10 lakh (1 million) people and the new version of Skill India digital program focused on AI capabilities.
EVIDENCE
He specifically mentions the 10 lakh AI skill drive and the Skill India digital program as existing national initiatives.
MAJOR DISCUSSION POINT
Government Initiatives and Ecosystem Development
Audience
1 argument, 59 words per minute, 98 words, 98 seconds
Argument 1
AI tools can be applied in three key sectors for village-level governance and development
EXPLANATION
An audience member from a village in Prayagraj asks about practical AI applications for rural governance, specifically requesting recommendations for three sectors where AI tools should be implemented if elected to district panchayat representing 25 villages.
EVIDENCE
The audience member mentions upcoming panchayat elections and their intention to participate in district panchayat elections.
MAJOR DISCUSSION POINT
Rural AI Implementation
Agreements
Agreement Points
Need for strong foundational knowledge and understanding of AI capabilities and limitations
Speakers: Dr. Sarabjot Singh Anand, Professor Dr. Jawar Singh, Professor Dr. Alok Pandey
Critical thinking and risk-taking are essential, with foundational understanding of AI possibilities and limitations. Hardware understanding is crucial alongside algorithms for secure, trusted, and reliable implementations. Deep domain specialization with AI fluency and understanding of red teaming and containment.
All three speakers emphasize that next-gen AI professionals need deep foundational knowledge – whether in algorithms, hardware, or domain expertise – combined with understanding of AI’s limitations and security considerations
Importance of real-world problem-solving and customer-centric approach
Speakers: Dr. Sarabjot Singh Anand, Professor Dr. Alok Pandey, Vikash Srivastava
Focus on problem-solving skills, self-learning agency, strong foundations, curiosity, and customer-centric solution thinking. Domain integration across healthcare, law, education, and finance requires safety, security, and population-scale implementation. Educational systems must shift from theory-focused to applied problem-solving with production exposure.
These speakers agree that AI education and talent development must move beyond theoretical knowledge to focus on solving real-world problems across various domains with practical implementation experience
Educational system transformation and curriculum modernization challenges
Speakers: Professor Dr. Alok Pandey, Kunal Gupta, Vikash Srivastava
De-bureaucratization of education and increased institutional autonomy are needed for rapid curriculum updates. Current syllabuses are outdated and policy makers face pressure to update educational frameworks rapidly. Educational systems must shift from theory-focused to applied problem-solving with production exposure.
All three speakers identify significant problems with current educational systems being too slow to adapt, overly bureaucratic, and focused on outdated theoretical approaches rather than practical skills needed by industry
Need for industry-academia collaboration
Speakers: Dr. Sarabjot Singh Anand, Professor Dr. Alok Pandey
Industry-academia collaboration through mentorship programs is essential for bridging the skills gap. Domain integration across healthcare, law, education, and finance requires safety, security, and population-scale implementation.
Both speakers emphasize that bridging the AI skills gap requires closer collaboration between industry practitioners and academic institutions, with industry providing real-world experience and mentorship
Similar Viewpoints
Both emphasize that defining and understanding problems correctly is fundamental to successful AI implementation, with Kunal specifically stating that defining the problem is 50% of achieving the solution
Speakers: Dr. Sarabjot Singh Anand, Kunal Gupta
Focus on problem-solving skills, self-learning agency, strong foundations, curiosity, and customer-centric solution thinking. Scientific gap analysis based on market job requirements versus current candidate skill sets, with emphasis on defining problems correctly.
Both professors acknowledge curriculum update challenges but clarify that centrally funded institutions have more flexibility, while state institutions face greater bureaucratic constraints
Speakers: Professor Dr. Alok Pandey, Professor Dr. Jawar Singh
De-bureaucratization of education and increased institutional autonomy are needed for rapid curriculum updates. Centrally funded technical institutions have flexibility, but state institutions face greater challenges with curriculum updates.
Both emphasize the critical importance of security and reliability in AI systems, with Dr. Singh focusing on regulatory compliance and Professor Singh on hardware security
Speakers: Dr. Devinder Singh, Professor Dr. Jawar Singh
Strong AI expertise combined with real-world problem-solving skills and regulatory awareness. Hardware understanding is crucial alongside algorithms for secure, trusted, and reliable implementations.
Unexpected Consensus
AI accessibility through vernacular languages
Speakers: Kunal Gupta
Next-gen AI serves as infrastructure for intelligence, multiplying human reasoning and creativity while enabling vernacular language access
While only one speaker explicitly mentioned this, the focus on vernacular language accessibility represents an unexpected emphasis on digital inclusion in an otherwise technically-focused discussion about AI talent development
Energy efficiency and environmental concerns in AI hardware
Speakers: Professor Dr. Jawar Singh
Hardware efficiency and security are critical, with significant power consumption gaps between current processors and human brain capabilities
The discussion of AI’s environmental impact through energy consumption was unexpected in a talent development focused session, highlighting the intersection of technical skills and sustainability concerns
Government standards and frameworks for AI fairness
Speakers: Dr. Devinder Singh
Bias and fairness indices ranging from 0 to 1 help determine system acceptability across different use cases. Standards for AI fairness and robustness are available in the public domain for developers, deployers, and regulators.
The detailed discussion of existing government standards for AI fairness was unexpected, showing that regulatory frameworks are more advanced than typically assumed in talent development discussions
Overall Assessment

Strong consensus on need for foundational AI knowledge, real-world problem-solving skills, educational system transformation, and industry-academia collaboration. Speakers agree that current educational approaches are inadequate and that AI talent development requires both technical depth and practical application experience.

High level of consensus among speakers on core issues, with complementary rather than conflicting viewpoints. This suggests a mature understanding of AI talent development challenges and indicates potential for coordinated policy and program development. The agreement spans academic, industry, and government perspectives, providing a solid foundation for comprehensive AI capacity building initiatives.

Differences
Different Viewpoints
Scope of curriculum update challenges in educational institutions
Speakers: Professor Dr. Alok Pandey, Professor Dr. Jawar Singh
De-bureaucratization of education and increased institutional autonomy are needed for rapid curriculum updates. Centrally funded technical institutions have flexibility, but state institutions face greater challenges with curriculum updates.
Professor Pandey argues that bureaucratic constraints significantly hinder curriculum updates across educational institutions, while Professor Singh clarifies that centrally funded technical institutions (CFTIs) actually have significant freedom to update curricula quickly. Pandey focuses on the broader challenge affecting state institutions and the majority of India’s educational system, while Singh emphasizes that CFTIs are not constrained by these bureaucratic limitations.
Primary focus for next-generation AI talent development
Speakers: Dr. Sarabjot Singh Anand, Professor Dr. Jawar Singh
Critical thinking and risk-taking are essential, with foundational understanding of AI possibilities and limitations. Hardware understanding is crucial alongside algorithms for secure, trusted, and reliable implementations.
Dr. Sarabjot emphasizes critical thinking, questioning AI outputs, and understanding AI limitations as the primary requirements, while Professor Jawar Singh stresses the importance of hardware understanding, security, and the technical implementation aspects. Both agree on foundational knowledge but differ on whether the emphasis should be on cognitive/analytical skills versus technical/hardware expertise.
Unexpected Differences
Institutional autonomy in curriculum development
Speakers: Professor Dr. Alok Pandey, Professor Dr. Jawar Singh
De-bureaucratization of education and increased institutional autonomy are needed for rapid curriculum updates. Centrally funded technical institutions have flexibility, but state institutions face greater challenges with curriculum updates.
This disagreement is unexpected because both speakers are from academic institutions and should have similar perspectives on educational constraints. However, Professor Singh’s clarification reveals a significant gap between the experiences of centrally funded versus state-funded institutions, suggesting that the curriculum update problem may be more nuanced and institution-specific than initially presented.
Overall Assessment

The discussion shows relatively low levels of fundamental disagreement, with most speakers sharing common goals around AI talent development, industry-academia collaboration, and the need for practical, domain-specific AI education. The main disagreements center on implementation approaches and institutional constraints rather than core objectives.

Low to moderate disagreement level. The speakers generally align on the vision for next-generation AI talent but differ on specific approaches, priorities, and institutional realities. This suggests a healthy diversity of perspectives that could complement each other in a comprehensive AI talent development strategy, rather than conflicting viewpoints that would hinder progress.

Partial Agreements
Both speakers agree on the need for closer industry-academia collaboration and domain-specific AI applications, but they differ on implementation approaches. Professor Pandey focuses on institutional reforms, MOUs with Western countries, and infrastructure development, while Dr. Sarabjot emphasizes mentorship programs and passion projects with social impact focus.
Speakers: Professor Dr. Alok Pandey, Dr. Sarabjot Singh Anand
Domain integration across healthcare, law, education, and finance requires safety, security, and population-scale implementation. Industry-academia collaboration through mentorship programs is essential for bridging the skills gap.
Both speakers agree that current educational approaches are inadequate for AI talent development, but they propose different solutions. Kunal focuses on systemic policy-level changes and rapid curriculum updates, while Vikash emphasizes shifting from theory-heavy to practical, production-oriented learning approaches.
Speakers: Kunal Gupta, Vikash Srivastava
Current syllabuses are outdated and policy makers face pressure to update educational frameworks rapidly. Educational systems must shift from theory-focused to applied problem-solving with production exposure.
Takeaways
Key takeaways
Next-generation AI talent requires a combination of critical thinking, domain expertise, technical mastery, ethical judgment, and real-world problem-solving capabilities rather than just technical skills.
There is a significant gap between current educational curricula and industry requirements, with educational systems being 20+ years behind current technology needs.
AI implementation must be sector-specific, with deep understanding of domain challenges, whether in telecom (6G networks), healthcare, law, agriculture, or governance.
Hardware understanding and security considerations are as crucial as algorithm knowledge for developing efficient, secure, and reliable AI systems.
Industry-academia collaboration through mentorship programs is essential for bridging the skills gap that neither sector can address alone.
AI talent evaluation should focus on problem-solving skills, curiosity, creativity, and customer-centric thinking rather than just library knowledge.
Standards for AI fairness, robustness, and bias measurement are being developed and implemented across sectors, with different tolerance levels based on use cases.
The speed of AI advancement is putting unprecedented pressure on policy makers and educational institutions to rapidly update frameworks and curricula.
Resolutions and action items
STPI will launch multiple regional training hubs with 18+ training partners across India as part of the STPI skill-up program.
Government initiatives including the 10 lakh AI skill drive and Skill India digital program are being implemented to address talent gaps.
Educational institutions need to de-bureaucratize curriculum development and increase institutional autonomy for rapid updates.
Industry professionals should be incentivized to provide mentorship and give back to students through structured programs.
AI-powered tools for skill gap assessment and adaptive learning recommendations should be implemented to improve employability outcomes.
Standards for AI fairness and robustness should be adopted by developers, deployers, and regulators across different sectors.
Unresolved issues
How to effectively scale AI education across India’s diverse population, including rural areas and vernacular language speakers.
Specific mechanisms for funding and implementing rapid curriculum updates across state technical institutions.
How to balance the need for foundational mathematical knowledge with practical application skills in AI training programs.
Addressing the significant power consumption gap between current AI processors and human brain efficiency.
Determining appropriate fairness index thresholds for different AI applications and sectors.
How to ensure AI talent development keeps pace with the accelerating speed of technological advancement.
Bridging the gap between centrally funded technical institutions (which have flexibility) and state institutions (which face bureaucratic constraints).
Specific strategies for rural AI implementation and governance applications at village level.
Suggested compromises
Focus on applied problem-solving and production exposure while maintaining theoretical foundations in AI education.
Allow different fairness index tolerances based on application criticality (higher tolerance for entertainment recommendations, zero tolerance for security applications).
Combine classroom learning with practical mentorship programs to bridge the theory-practice gap.
Implement sector-specific AI training programs rather than one-size-fits-all approaches.
Use AI tools themselves to assess skill gaps and provide personalized learning recommendations.
Prioritize institutional autonomy for technology-focused institutions while maintaining broader educational standards.
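The proposal to vary fairness tolerances by application criticality can be illustrated as a simple acceptance check. Only the 0-to-1 index scale comes from the session; the threshold values, use-case names, and the convention that 1 means fully fair are invented here for illustration.

```python
# Hypothetical sketch of use-case-specific fairness thresholds.
# Assumption: fairness index runs from 0 (fully biased) to 1 (fully fair);
# the session only states that tolerances should differ by criticality.
FAIRNESS_THRESHOLDS = {
    "entertainment_recommendation": 0.6,  # higher tolerance for low-stakes uses
    "credit_scoring": 0.9,                # stricter for consequential decisions
    "security_screening": 1.0,            # zero tolerance for bias
}

def is_acceptable(fairness_index: float, use_case: str) -> bool:
    """Return True if the measured fairness index meets the
    threshold assigned to the given use case."""
    return fairness_index >= FAIRNESS_THRESHOLDS[use_case]

print(is_acceptable(0.8, "entertainment_recommendation"))  # True
print(is_acceptable(0.8, "security_screening"))            # False
```

The point of the sketch is only that acceptability is a function of both the measured index and the deployment context, not of the index alone.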
Thought Provoking Comments
There are two camps here, right? One is the people who want to generate the next wave of AI, and then, of course, they’re the ones that have to use AI to be more efficient in their jobs… what is very, very important is that they have to be critical thinkers more than any technology as such, because there is, you know, a great move towards outsourcing your thinking to AI, and that’s a problem.
This comment is deeply insightful because it identifies a fundamental paradox in AI adoption – that as we become more dependent on AI tools, we risk losing our critical thinking abilities. It challenges the common narrative that AI skills are primarily technical, instead arguing that human cognitive skills become more important, not less.
This comment set the philosophical tone for the entire discussion, shifting focus from purely technical skills to cognitive and analytical capabilities. It influenced subsequent speakers to address the balance between human judgment and AI assistance, and established critical thinking as a recurring theme throughout the panel.
Speaker: Dr. Sarabjot Singh Anand
Next gen AI is the infrastructure. It’s not for intelligence like you currently have this infrastructure wherein we are able to express our views and they go out to the world… next generation of AI is like this infrastructure meant to multiply our intelligence our reasoning our research our values our creativity our judgments
This reframes AI from being a tool or application to being fundamental infrastructure – like electricity or the internet. This perspective is profound because it suggests AI will become so embedded in society that it will amplify human capabilities across all domains, fundamentally changing how we think about AI integration.
This infrastructural perspective elevated the discussion from tactical skill gaps to strategic societal transformation. It influenced the conversation to consider broader implications of AI adoption and helped other panelists think about systemic changes rather than just individual competencies.
Speaker: Kunal Gupta
The power requirement of if I take a very basic NVIDIA processor, it consumes around 500 to 700 watt. But if I say the same processor, I mean, not, I mean, our human brain is also having a very beautiful processor… it just consumes 20 watt power. So, you can see there is a huge gap between the processing capabilities
This comment is thought-provoking because it highlights a critical but often overlooked constraint in AI development – energy efficiency. By comparing AI processors to the human brain, it challenges the assumption that current AI approaches are optimal and suggests there’s enormous room for improvement in hardware design.
This technical insight shifted the discussion from software and applications to fundamental hardware limitations. It introduced the concept of neuromorphic computing and energy constraints as key factors in AI talent requirements, broadening the scope beyond traditional programming skills to include hardware-software co-design thinking.
Speaker: Professor Dr. Jawar Singh
We need to de-bureaucratize education today to a great extent… curriculum velocity is so high that you can’t give a command to faculty members to teach a particular course, especially in technology… The consumer and the user has started demanding change.
This comment identifies a systemic problem where educational institutions cannot keep pace with technological change due to bureaucratic processes. The concept of ‘curriculum velocity’ is particularly insightful as it quantifies the speed mismatch between educational adaptation and technological evolution.
This comment redirected the discussion from individual skill gaps to institutional and systemic barriers. It prompted other speakers to discuss the role of autonomy in education and led to a broader conversation about how educational systems need fundamental restructuring, not just content updates.
Speaker: Professor Dr. Alok Pandey
The biggest skill gap that I see right now is the application. And more importantly, how do we define a problem?… Define the problem is about 50% of the solution achieved in itself.
This insight cuts through technical complexity to identify problem definition as the core skill gap. It’s profound because it suggests that technical AI skills are less important than the ability to identify and frame problems correctly – a fundamentally human cognitive skill.
This comment shifted the focus from technical training to problem-solving methodology. It influenced the discussion to consider that AI talent gaps might be more about business acumen and analytical thinking than coding or mathematics, leading to a more holistic view of required competencies.
Speaker: Kunal Gupta
In 6G, each and every component has got AI inbuilt in that only… the intelligence will be distributed at the edge also. So the decision will be taken at a distributed level… Most of the decisions will be taken by AI. The human will only supervise only.
This comment provides a concrete vision of how AI will fundamentally change professional roles, using telecom as a specific example. It’s insightful because it shows the transition from AI as an add-on tool to AI as the primary decision-maker, with humans in supervisory roles.
This comment grounded the abstract discussion in a concrete industry example, showing how AI integration will require completely different skill sets. It influenced the conversation to consider how traditional engineering roles will evolve and what new competencies will be needed for human-AI collaboration.
Speaker: Dr. Devinder Singh
Overall Assessment

These key comments collectively transformed what could have been a conventional discussion about AI training into a deeper exploration of fundamental challenges in human-AI coexistence. The discussion evolved from technical skill gaps to philosophical questions about human agency, from individual competencies to systemic institutional barriers, and from current applications to future societal infrastructure. The most impactful insight was the recognition that as AI becomes more capable, uniquely human skills like critical thinking, problem definition, and ethical judgment become more valuable, not less. This paradox – that advancing AI makes human cognitive skills more important – became the central thread that unified the diverse perspectives from academia, industry, and government representatives.

Follow-up Questions
How do we scale up AI education infrastructure and develop large numbers of trained faculty members?
This addresses the critical need for educational infrastructure to support population-scale AI implementation and the shortage of qualified educators in AI
Speaker: Professor Dr. Alok Pandey
How can we establish more MOUs with Western countries where AI companies and universities are generating trained resources?
This explores international collaboration opportunities to bridge the AI talent gap by learning from established AI education systems
Speaker: Professor Dr. Alok Pandey
How do we bridge the gap between expensive AI hardware power consumption (500-700 watts for NVIDIA processors) and efficient human brain processing (20 watts)?
This addresses the critical research area of neuromorphic computing and energy-efficient AI hardware development
Speaker: Professor Dr. Jawar Singh
How can we develop better evaluation methods for HR professionals to assess AI talent capabilities?
This addresses the gap in industry’s ability to properly evaluate and recruit AI talent due to outdated HR evaluation methods
Speaker: Dr. Sarabjot Singh Anand
How can we create systematic mentorship programs pairing industry technologists with students to bridge the academia-industry gap?
This explores scalable solutions for providing real-world problem-solving experience to students through industry mentorship
Speaker: Dr. Sarabjot Singh Anand
How do we address curriculum update challenges in state technical institutions serving the majority of India’s 1.4 billion population?
This focuses on the systemic challenge of updating AI education in state-level institutions that serve the largest student populations
Speaker: Professor Dr. Alok Pandey
What specific AI applications should be prioritized for rural governance and panchayat-level administration?
This addresses the practical implementation of AI tools at the grassroots governance level in rural India
Speaker: Vikram Tripathi (Audience)
How can private companies’ CSR funds be leveraged to support AI education and implementation in rural areas?
This explores funding mechanisms for AI adoption in underserved rural communities through corporate social responsibility
Speaker: Vikram Tripathi (Audience)
How do we develop sector-specific fairness and bias standards for different AI applications with varying tolerance levels?
This addresses the need for contextual AI ethics standards where different applications (entertainment vs. security) require different fairness thresholds
Speaker: Dr. Devinder Singh
How can we create adaptive learning systems that use AI to assess individual skill gaps and recommend personalized learning paths?
This explores the development of AI-powered educational tools that can customize learning experiences based on individual needs and gaps
Speaker: Vikash Srivastava

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How AI Drives Innovation and Economic Growth


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel convened to examine how artificial intelligence can either narrow or widen development gaps in emerging economies [60-65]. Johannes Zutt highlighted AI’s capacity to boost productivity in sectors such as agriculture, health care and finance, noting that 15-16% of South Asian jobs show strong complementarity with AI [12-20]. He also warned that AI may displace entry-level, knowledge-based jobs and that many low-income countries lack basic infrastructure (reliable electricity, broadband, and literacy) to deploy it effectively [21-31]. To address these constraints, the World Bank promotes “small AI”: affordable, locally relevant applications that operate with limited connectivity and data, citing India’s digital identity system and farmer-focused phone tools as exemplars [34-40][45-46].


Ufuk Akcigit argued that while the application layer of AI lowers entry barriers and encourages creative destruction, the foundational layer remains compute-, data- and talent-intensive, creating concentration risks that could spill over to downstream markets [86-98][99-101]. He stressed that without improving the business environment (such as reducing reliance on family size for firm growth), AI alone will not generate entrepreneurship in developing economies [111-115]. Anu Bradford emphasized the need for AI sovereignty and a rights-based regulatory approach, noting that Europe’s AI Act illustrates how regulation can protect public interests while still fostering innovation, and that India must adapt such frameworks to its own priorities [167-176][181-190].


Michael Kremer argued that targeted public-good AI, such as AI-generated weather forecasts for 38 million Indian farmers and AI-assisted traffic safety tools, can substantially reduce poverty if supported by evidence-based innovation funds and rigorous impact evaluation [133-158][263-292]. Iqbal Dhaliwal provided a concrete small-AI case in Indian schools where AI automated spelling checks, freeing teachers to focus on higher-order learning, and warned that hype must be separated from reality because technology often fails without complementary system changes [236-247][294-318].


Across the discussion, participants agreed that AI’s promise hinges on coordinated public-private effort, robust infrastructure, and policies that mitigate job displacement and market concentration [34-38][86-98][263-270]. They also concurred that neglecting governance-whether through weak regulation, insufficient public-sector adoption, or power dynamics that block scaling of successful pilots-poses a major risk to equitable outcomes [414-416][321-328]. The panel concluded that while AI can drive transformative gains in health, education and agriculture, realizing these benefits requires proactive policy, inclusive regulation, and safeguards against concentration and labor-market shocks [382-388][394-398]. Overall, the discussion underscored AI’s dual potential to accelerate development and exacerbate inequality, urging immediate action to shape its trajectory responsibly [414-416].


Keypoints


Major discussion points


AI as a catalyst for development in emerging markets - Johannes Zutt emphasizes “small AI” that works with limited connectivity, data and skills, citing examples such as pest-identification for farmers, AI-assisted nursing, and credit scoring [10-21]. He highlights India’s digital identity and payment infrastructure as foundations for scaling such tools [39-42]. Michael Kremer adds concrete public-good cases, notably AI-driven weather forecasts that reached 38 million Indian farmers and improved planting decisions [136-151].


Structural challenges and concentration risks - The panel notes that AI can displace entry-level, knowledge-based jobs and that the World Bank itself sees fewer such positions advertised [22-32]. Ufuk Akcigit distinguishes a high-barrier “foundational layer” (compute, data, talent) that tends toward concentration, warning that this may spill over to the application layer [85-100]. Anu Bradford and later speakers point to the global concentration of large-model development in the US and China, and to rising market concentration that could lock in incumbents [185-190][342-347].


Policy, regulatory and governance imperatives - Both speakers stress the need for AI sovereignty and rights-based regulation. Anu Bradford describes the EU’s AI Act as a “rights-driven” approach and suggests India can adapt such lessons while preserving local priorities [166-176]. Michael Kremer argues that governments and multilateral development banks must fill gaps where private profit motives fall short, e.g., funding AI for public goods like digital IDs and weather forecasts [128-133]. He also proposes evidence-based innovation funds to accelerate responsible deployment [266-277].


Evidence-based implementation and evaluation - Iqbal Dhaliwal shares a school-AI pilot that freed teachers from routine grading, allowing them to focus on higher-order learning, illustrating the demand-driven benefits of “small AI” [236-247]. Michael Kremer outlines a four-stage evaluation framework (model performance, user impact, scalability, and continuous improvement) to ensure AI interventions deliver real outcomes [284-291]. The discussion repeatedly stresses the need to adapt institutional systems (e.g., teacher training, regulatory processes) to realize technology’s promise [308-315].


Overall purpose / goal of the discussion


The panel was convened to examine whether artificial intelligence will narrow or widen development gaps and to identify practical policy levers for emerging economies. Participants shared concrete use-cases, highlighted systemic risks, and debated how multilateral institutions, national governments, and the private sector can jointly shape an AI ecosystem that delivers inclusive growth.


Tone of the discussion


Opening (0-10 min): Optimistic and celebratory, emphasizing AI’s transformative potential.


Middle (10-30 min): Becomes more measured as speakers acknowledge infrastructure deficits, job displacement, and concentration of power, shifting to a cautious, problem-solving tone.


Later (30-48 min): Pragmatic and solution-focused, with concrete examples, policy recommendations, and calls for evidence-based pilots.


Closing (48-51 min): Balanced rapid-fire reflections, acknowledging both “big wins” (health, education) and “big risks” (market concentration, regulatory lag), ending on a sober yet hopeful note.


Overall, the conversation moves from enthusiastic optimism to critical realism, ending with a constructive, forward-looking outlook.


Speakers

Jeanette Rodrigues – Moderator/host of the panel discussion [S1]


Michael Kremer – Economist, Nobel laureate (mentioned in the transcript)


Johannes Zutt – World Bank representative (referred to as “John” in the discussion) [S8]


Iqbal Dhaliwal – Global Director of J-PAL at MIT [S9]


Ufuk Akcigit – Macroeconomist (provides analysis on creative destruction and AI’s impact on economies) [S11]


Anu Bradford – Expert on AI governance and regulation (contributes perspectives on AI sovereignty and regulatory approaches)


Additional speakers:


None (all speakers in the transcript are covered by the list above).


Full session report: Comprehensive analysis and detailed insights

Jeanette Rodrigues: The session opened with Jeanette thanking the participants and stating the panel’s aim: to explore whether artificial intelligence (AI) will narrow or widen development gaps in emerging economies and to identify the policy levers that should guide real-world implementation [2][3][60-65][71]. She noted that this was the fourth AI summit (the first having been held in the UK) and that participants repeatedly described the first session as “full of fear” about AI, framing the debate as a balance between hope and fear [2-3].


Johannes Zutt: Johannes described AI as a structural transformation already reshaping economies worldwide [6-7]. He clarified that the World Bank does not develop AI applications itself; its comparative advantage lies in advisory work: ensuring data reliability and helping governments create “AI sandbox” environments for experimentation [45-46]. He highlighted basic constraints in many low-income countries: unreliable electricity, weak broadband, low literacy, and reliance on very simple devices [26-31]. To address these gaps he introduced the Bank’s “small AI” agenda: affordable, locally relevant tools that function with limited connectivity, data and skills [34-36], and noted that the Bank is assisting governments in setting up AI sandboxes for pilots [49-52]. Examples included India’s digital identity programme and farmer-focused phone applications, which illustrate how small AI can be deployed at scale when supported by government standards and private-sector innovation [39-42][45-46][49-52]. He also pointed to AI-enabled services in education and health that can fill skill gaps for teachers and frontline workers [21-33].


Michael Kremer: Michael presented concrete public-good AI interventions. He explained that the Indian government’s AI-generated weather forecasts are a non-rival, public-good resource, justifying public investment [263-267]. The forecasts correctly predicted an early monsoon in Kerala and a later-than-expected progression elsewhere, becoming the only source of information for millions of farmers [263-267]. He also described AI-enabled traffic-safety tools-automated traffic cameras and the HAB (AI-based driver-licence testing) program-which have reduced unsafe driving by 20-30% in pilot sites [263-267][268-277]. Emphasising the role of multilateral development banks, he argued that market failures leave critical public-good AI under-invested and proposed evidence-based innovation funds that follow a four-step evaluation framework: 1) model performance; 2) user impact; 3) scalability; 4) continuous improvement [284-291].


Ufuk Akcigit: Ufuk offered a macro-economic perspective on AI-driven creative destruction. He distinguished a “foundational layer” (compute-, data- and talent-intensive) with high entry barriers that tends to concentrate power, from an “application layer” where low barriers enable small firms to compete with incumbents [85-93][94-98]. He warned that concentration at the foundational level can spill over into downstream markets, limiting inclusive benefits [99-101][324-340]. He also questioned why entrepreneurship has historically been weak in emerging economies-citing family size and gendered labour dynamics as key determinants of firm growth-and argued that without reforms to the business environment, AI alone will not spark the desired dynamism [111-115][84-85].


Anu Bradford: Anu focused on governance and AI sovereignty. She described the European Union’s AI Act as a “rights-driven” framework that seeks to protect fundamental rights while distributing AI benefits more broadly [173-176]. She argued that the Global South must develop its own regulatory sovereignty, adapting lessons from the EU without merely copying them, to ensure AI serves local public-interest goals [167-176][181-190]. She also warned of the geopolitical concentration of AI capabilities in the United States and China, noting that supply-chain choke points in semiconductors and raw materials create strategic vulnerabilities for developing nations [357-371].


Iqbal Dhaliwal: Iqbal illustrated the impact of “small AI” on the ground. In a pilot in Indian public schools, AI automated routine spelling checks, freeing teachers to focus on higher-order learning and thereby improving educational outcomes [236-247]. He stressed that the success was demand-driven-teachers, students and districts all asked for the tool-and that similar time-saving AI could benefit health-frontline workers [248-250]. He identified two recurring patterns: (a) trust-highly accurate AI diagnostics can fail in practice if users lack trust or proper training [294-318]; and (b) institutional adaptation-the GST-fraud detection model was not scaled because it threatened existing discretionary power, highlighting the need to adapt processes alongside technology [309-322].


Points of Consensus:


– All speakers concurred that AI’s transformative potential is contingent on basic infrastructure (electricity, connectivity, literacy) [6-7][61-71].


– The panel uniformly endorsed the “small AI” approach as a pragmatic pathway for low-resource settings [34-36][236-247].


– There was consensus that robust, rights-based yet locally adaptable regulation is essential to prevent misuse and manage risks such as job displacement and market concentration [21-33][173-176].


– Participants agreed that public-sector investment and evidence-based innovation funds are needed to develop AI public goods that the private market will not provide on its own [133-158][263-292][266-271].


Key Disagreements:


– Ufuk warned that the compute-heavy foundational layer will entrench concentration, whereas Johannes’s emphasis on deploying small AI did not directly address this structural bottleneck [94-98][34-36].


– On regulatory sovereignty, Anu advocated for a rights-driven, locally tailored framework, while Jeanette highlighted the dominance of US and Chinese AI developers and questioned whether true sovereignty is achievable [162-166][167-176].


– Johannes identified job losses as a challenge, whereas Ufuk called for a deliberate slowdown of AI adoption to give workers time to adjust [22-24][405-412].


– Johannes stressed formal governance mechanisms, whereas Iqbal argued that trust, training and system-level adaptation are equally critical for successful deployment [21-33][309-322].


Key Takeaways:


1. AI can be a powerful catalyst for productivity in agriculture, health, finance and education, but its impact is limited by infrastructural deficits [6-7][84-85][61-71].


2. The World Bank’s “small AI” strategy-affordable, offline-capable tools co-designed with governments and private innovators-offers a viable model for low-connectivity contexts [34-36][236-247].


3. High entry barriers of the foundational AI layer risk increasing market concentration and talent migration from academia to incumbents, threatening inclusive growth [94-98][324-340][342-347].


4. AI sovereignty requires rights-based regulation that can be customised to national priorities while learning from the EU’s approach [167-176][173-176].


5. Rigorous evaluation-covering model performance, user impact, scalability and continuous improvement-should guide AI pilots, as outlined by Michael [284-291].


Concrete Actions:


– The World Bank pledged to expand small-AI sandboxes across South Asian states, collaborating with governments to ensure interoperability and offline functionality [45-46][49-52].


– Multilateral development banks were urged to scale evidence-based innovation funds such as Development Innovation Ventures, providing tiered financing for pilots, rigorous testing and eventual scale-up [266-271].


– India’s digital identity and payment infrastructure were highlighted as foundational assets that other developing nations could emulate [39-42].


– Private-sector developers were encouraged to create demand-driven applications that free frontline workers’ time, while regulators were asked to adopt a rights-driven framework that balances innovation with safeguards [173-176][236-247].


Unresolved Issues:


– How can policy prevent the foundational AI layer from cementing incumbent dominance?


– What mechanisms will align AI talent pipelines with local ecosystems to avoid excessive brain-drain?


– How should finance ministers balance AI sovereignty with geopolitical dependencies in the semiconductor supply chain?


– How can public-sector procurement be reformed to avoid monopsonistic lock-in while ensuring rapid adoption of proven tools?


These questions point to a need for further research on competition-friendly policies, talent development programmes and procurement reforms [342-347][357-371][397-398].


Rapid-fire Closing:


– Iqbal warned that unchecked market concentration could become a regrettable legacy [382-384][324-340].


– Anu cautioned that over-reliance on generative AI might make humanity “dumber” if critical thinking is outsourced [387-392].


– Michael echoed the risk that public-sector inertia could deny the poor access to AI-driven services [394-398].


– Ufuk highlighted the labour-market risk of rapid AI adoption outpacing job creation [405-412].


Overall Assessment: The discussion moved from an initial optimism about AI’s transformative power to a nuanced, evidence-based appraisal of the structural, regulatory and societal challenges that must be addressed. The consensus calls for immediate, collaborative action to build the enabling environment, fund public-good AI, and design governance that safeguards inclusive growth while preserving the innovative dynamism essential for emerging economies to thrive in the AI era [414-416][382-388].


Session transcriptComplete transcript of the session
Jeanette Rodrigues

all around the Bharat Mandapam. So once again, thank you very much for your time this afternoon and for choosing us to have a conversation with. To start off, I would like to introduce John, who will make some opening comments for the World Bank.

Johannes Zutt

So thank you very much, Jeanette. It’s a great pleasure to be here speaking to all of you this afternoon. Over the past week, we’ve heard from a lot of world leaders, tech leaders, experts from across many, many countries about how AI is fundamentally reshaping our world, presenting not just a technological shift but a structural transformation with profound implications for economies and societies everywhere. For emerging markets and developing economies, as for all economies, AI could be a game changer. So sorry, that probably helps. I thought the mics were on. So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.

It offers clear opportunities to enhance growth and productivity. We recently did some work in South Asia at the World Bank Group to see what sort of impact AI was having on jobs in the region, and we found that approximately 15 or 16 percent of jobs here have strong complementarity with AI. AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide. It also helps, you know, very, very diverse groups of people in many, many different sectors of the economy. It helps farmers to identify pests on their crops, diseases in their crops, and also how to address them.

It helps nurses to identify the ailments and illnesses that their patients may be suffering, particularly the ones that they’re not very familiar with, but that they can research using appropriate AI applications. It helps financial institutions to understand better the ability of borrowers to take on loans, which, of course, expands the ability of the borrower to expand his or her business. So there’s clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.

Of course, at the same time, on the flip side, AI also creates a number of challenges. One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge- or document-based, performing relatively rote work that can be taken over by automation. And we’re actually seeing this in the World Bank Group. We went and looked at the types of jobs that we are advertising these days compared to a couple of years ago, and what we found is that at that layer, sort of at the bottom of the professional classes inside the Bank Group, there are just fewer of those types of jobs being advertised in the World Bank Group today than there were a few years ago.

At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use. They may not have reliable electricity. We can start with that very basic one. They may not have an internet backbone that’s sufficiently strong. People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher end devices. They may need to use very, very basic devices, not even smartphones, and rely on voice communication, asking a question and hearing a response. So there may be struggles of that kind in developing countries and emerging markets.

And I’m not even talking about all the governance and regulatory safeguards that can also come into play. So the question, of course, is how can emerging economies, developing markets, harness the potential of AI and avoid the pitfalls? And for us in the World Bank Group, we’ve been very, very focused recently on basically small AI. Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited. And this is extremely important in countries like India where all of those conditions can apply. And yet there’s tremendous potential for people to grow their productivity if they have timely access to information of the right kind in their local language tailored to their specific circumstances.

So that’s what we are trying to do in South Asia today, and across the globe actually. And this is really about some of the examples that I mentioned earlier: having bespoke applications that help farmers to do very basic investigation of the types of issues that they’re facing, using their phone to analyze what’s going on, to identify it, to find out how to address it, even to find out who within their local area, in their market space, can help them by providing the tools or the products that are necessary to address whatever they’re running into. So India, of course, is a very strong example of what’s possible. India has been a leading country in digital innovation for quite some time. After the United States and China, it has the largest, if you like, digital universe in the world today. It’s got some very good foundations: there’s the digital identity program as well as the digital payment platform that currently exists.

There are lots of Indian firms that are innovating in AI, including in the small AI applications that I’ve been talking about. And the governments of India have an objective of ensuring that there is AI for all. So they are very, very aware of the challenges that need to be overcome to make AI accessible to a very, very broad spectrum of the population and not just the very rich that, to some extent, need assistance the least, right? It’s the poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with and have not been using that much in the past. So we’re working in India.

We’re working in a lot of different states, Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana, on these different aspects, working with governments on the foundational elements, interoperability, making sure that accessibility is possible, that programs can run offline, as it were, so that people who aren’t able to get online all the time can benefit, and so on. And then we’re also working with private sector investors who are developing apps. I mean, we’re not actually developing many apps ourselves. That’s not really in our comparative advantage. Our comparative advantage as the World Bank Group is to do the more advisory work, make sure that the backbone information that’s embedded in the application is reliable and trustworthy, because of course that’s critical for ensuring successful uptake.

But we are helping governments to create the space that enables experimentation, in AI sandboxes, to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive. So I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public-facing effort to address the standards and the other issues, the interoperability and so on that I mentioned before, but also a private-sector-facing effort, because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.

We’re doing a little bit on bigger AI. There’s obviously a connection between the two. Big AI can, through computational power, generate new knowledge that can help us to do things that we haven’t done so well in the past much, much better. But for countries like India, translating that into small AI will also be very, very important for uptake. So I’m looking forward to hearing from all the distinguished speakers in this panel about their thoughts on what’s happening today in this sector. So thank you very much.

Jeanette Rodrigues

Thank you very much, John. John spoke about, of course, the use cases for AI, and on the other side of the spectrum we have the large language models, we have the foundational AI. But no matter where you sit on the spectrum, no matter where your interests lie, AI innovation never disperses and never diffuses equally. Today on this panel, I hope to unpack what determines whether AI narrows the development gap or whether it widens the development gap. Especially we are looking to talk about the real world. What should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI? Before I start, just setting the stage.

To a man, to a woman, everybody I spoke with has attended from the first AI summit to today. This is, I think, the fourth AI summit being held; the first one was held in the UK. And without exception, all of them made it a point to tell me how the first session was full of fear. It was, oh, my God, AI is this terrible technology which is going to steal all our jobs, make us redundant. And when they come to India, they see the hope that technology and AI brings. And that’s the spirit of the discussion this afternoon, to figure out how can we balance both of those extremes, hope and concern, and go ahead in a pragmatic, policy-first way to prepare for the real world.

So if I could start with you, Ufuk, how do you think about AI? And especially, where do you see areas of creative destruction to foster the innovation that we need?

Ufuk Akcigit

Thank you very much. And so, of course, creative destruction is an important driver of economic growth in the long run. So that’s why, you know, it’s an interesting question how AI will affect creative destruction in general. Of course, we are at a very early phase of AI, and it’s a GPT, a general-purpose technology. And typically, you know, when GPTs are emerging, there’s a huge surge of new businesses. And this should not be misleading. I think the main question we should be asking ourselves is what will happen to creative destruction in the future? What does the future look like in terms of creative destruction? And I’m a macroeconomist, so that’s why I like to look at this with a, you know, bird’s-eye view.

And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to advanced economies, there, again, we need to split the issue into two layers. One is the foundational layer, and the other one is the application layer. When we look at the application layer, it’s great. You know, the entry barriers are low. Small businesses can do what only large businesses could do in the past, and, you know, they can do their accounting, marketing. You know, there are so many opportunities now. The entry barrier is low. As a result, this suggests that, you know, this is going to be more, you know, friendly for creative destruction on the application side. But then there’s also the foundation layer, and I think that’s exactly where the bottleneck is.

When we look at the foundation layer, the entry barrier is really, really high, and, you know, it’s very compute-heavy. It’s very data-heavy. It’s very talent-heavy. So as a result, you know, this market, at least this layer, is very concentration-prone. Of course, it’s very early. But, you know, normally we have to be concerned about the foundational layer and how things will pan out, because it is upstream of the application layer, which is downstream of the foundation layer. So that’s why whatever will happen at the foundational layer will potentially spill over to the application layer, too. So that’s why I think we need to look at early indicators. But, you know, in the interest of time, I don’t want to go into the empirical evidence yet.

Maybe we can come back to it in the second round. When we look at the developing countries, so I think, you know, I agree with Johannes. You know, I think AI is creating fantastic opportunities. So that’s why I think it’s really important to understand the opportunities as well as the risks for developing countries. And together with the World Bank, we are working on the World Development Report 2026, which is going to be on AI and development. And these are exactly the issues that we are focusing on. But I think before we go into those details, we should ask ourselves one major question. Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why was, you know, when we looked at the firm life cycle, for instance, why was it not up or out?

Why was it not, you know, very competition-friendly? Why was the best predictor of firm size in emerging or developing economies the size of the family and/or the number of male children? These are still lingering issues, and AI, you know, will not bring magic unless we understand and fix the business environment in these economies. You know, AI will just create new tools. But at the end of the day, we need to make sure that the business-friendly environment is there for entrepreneurs to come and exercise their ideas.

Jeanette Rodrigues

Ufuk, that’s a very interesting jumping-off point, the real world. And the intention of this panel is to get exactly there. So if I may turn to you, quite literally turn to you, Michael, and ask you about the real world. You’re obviously doing a lot of work on the ground. Where do you see the potential for AI to spur gains? And are there any really transformative breakthrough areas that you’re looking at right now?

Michael Kremer

Yes. Thank you. Thanks very much. You know, I don’t want to minimize the existence of forces that may widen gaps. I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then I think AI has the potential to substantially narrow some of the gaps. And, you know, I think the question of which policy actions to take can be informed by thinking through relevant market failures and relevant government failures. Let me give a concrete example or two. So private firms have incentives to develop and improve applications of AI that can generate profits. But there are some very important applications of AI for public goods, for example, that will not attract commercial investment commensurate with the need.

And that’s an area where I think governments and multilateral development banks can play an important role. And I think some of this very much echoes what you were saying about small models, but also I’ll mention the link between the two. So an obvious example where I think India has been a leader for the world is in the development of digital identity. You know, this enables, as Ufuk was saying, a lot of work by individual entrepreneurs, a lot of other applications. So that’s a huge success, and I think multilateral development banks together with India can help bring that to many other countries. Let me take another example, one that’s not as well-known, but picks up on your comment about farmers.

So one thing that’s critical for farmers: they have to make a bunch of decisions that are weather-dependent. You know, when do you plant, for example? What varieties do you use? A drought-resistant variety, another variety. But most farmers don’t have access to state-of-the-art weather forecasts, around the world. I’m not talking about one country. In low- and middle-income countries, they don’t have access to that. Now, there’s a huge advance. We tend to think of large language models, but obviously AI is pushing science forward, and that includes in weather forecasting. There’s really a revolution driven by AI. But weather forecasts are non-rival. They’re largely non-excludable. They’re the classic definition of a public good.

So there’s a strong rationale for national governments, in some cases supported by multilateral development banks, to make investments in producing and disseminating AI weather forecasts. Again here, India is a leader. India in particular: the Indian government distributed AI weather forecasts to 38 million farmers last year. And the evidence suggests that farmers respond, both from India and from this particular case. I’ll say a little bit about last year’s monsoon: it came early in Kerala and southern India, but then there was an unexpected delay in the progression. The AI forecasts got that right, and they were the only source of information that reached farmers with that. We did a survey in the areas above that line, and farmers are responding: they transplant more, they use hybrid seeds more.

Evidence from around the world is consistent with this. Farmers respond to these AI weather forecasts. So I think that’s one example, but many others, and happy to discuss them in education and traffic enforcement and elsewhere.

Jeanette Rodrigues

Michael, your answer should be: read the book. Okay. We’ve spoken about the use cases of India, but setting up digital IDs, of course, is a sovereign decision. It’s something India could do unilaterally. When it comes to the large language models, that’s not reality. The large language models are concentrated in the US, in China now with DeepSeek. Anu, in a world where you largely have the rules being set by the two large powers, the US and China, arguably, there’s of course the EU as well, and you’ve done a lot of work on that. Who sets the AI rules for the Global South? Is there even the possibility for the Global South to talk about sovereignty?

Anu Bradford

So I think the Global South has the same kind of incentive for their own AI sovereignty, including then regulatory sovereignty, to design the rules that better work for their economies, for their societies, for what the public interest in these jurisdictions calls for. But regulating AI is really difficult even for very established bureaucracies. You need to be able to make sure that it is innovation-friendly, and yet you at the same time need to be careful in managing the risks for individuals and societies. So even very established regulators like the European Union have found it one of the most challenging tasks to come up with the AI Act. So there’s probably something to be learned from these jurisdictions that have gone ahead and done the kind of thinking that has then resulted in some of those regulatory frameworks that we have now in place.

So if you think about the choices that India has when it looks around, one of them is to think about, okay, how does the EU go about this? The EU follows what I would call a rights-driven approach to regulation. So what really characterizes this first horizontal, binding, economy-wide regulation that the Europeans enacted is that it seeks to protect the fundamental rights of individuals, the democratic structures of the society, and also seeks to ensure a greater distribution of the benefits from the AI revolution. So the European approach is very conscious that it wants to also share some of the benefits, so they don’t all go to the large developers of these models, but to individuals and society at large.

Smaller companies benefit from AI as well. So there’s something I think the Europeans can teach in terms of that regulatory approach, in addition to maybe then some details of how that regulation in the end was constructed. But just one word: India is a formidable economy that doesn’t need to take a template and plug it into the economy as such. I think India is in a very good position to take the lessons that serve its needs yet make the kind of local modifications and variations that better reflect the distinct priorities of this country.

Jeanette Rodrigues

Anu, before I turn to Iqbal, a quick follow-up question to you. As India makes its own rules, where does the trade-off lie between regulation and innovation?

Anu Bradford

So this is very interesting, because I am based in the U.S., but I’m originally from Europe, and these two jurisdictions are often described as: the U.S. develops technologies and the Europeans regulate those technologies. So in many ways, does India want the innovation path or the regulation path? And I think there are many votes that would go for innovation. But I really would like to debunk this myth; to me it’s a false choice. The reason we don’t see these large language models being developed in Europe is not because there’s the GDPR, the General Data Protection Regulation. It’s not because there is the AI Act. The reason there is a perceived innovation gap between the United States and Europe is, I think, four things.

So first, there is no digital single market in Europe. It’s very hard for these AI companies to scale across 27 distinct markets. Second, there’s no deep, robust capital markets union. 5% of the global venture capital is in Europe, over 50% in the United States. That explains why the U.S. has been able to take much greater steps in developing AI technologies. Third, there are legal frameworks and cultural attitudes to risk-taking.

I wouldn’t encourage you to replicate that, because it’s very hard to innovate on the frontier of technological innovation, because sometimes you fail. But you need to be then given the second chance.

And the fourth, I think, the sort of foundational pillar of the robust U.S. tech ecosystem, is that the U.S. has been spectacularly successful in harnessing the global talent that has chosen to come to the U.S., including many Indian data scientists, engineers, who think that the U.S. is the place where they can start their companies, scale their companies, fund their companies, where U.S. universities can attract them. So as for the idea that choosing to follow or imitate aspects of the European rights-protective regulation would come at the cost of innovation, we need to understand better what drives the technological innovation and whether regulation should

Jeanette Rodrigues

Thank you, Anu. Iqbal, turning to you. You’re working in an area of the world, South Asia, where what is regulation? What is enforcement? At the risk of sounding like a provocateur, it’s the Wild West a little bit. And therefore, we talk a lot in our part of the world about small AI, about targeted AI. My question to you is that what should policymakers keep in mind when designing AI -enabled interventions, especially when it comes to small AI and the targeted use cases?

Iqbal Dhaliwal

vulnerable public schools all the way from 11th to becoming the second-best-performing state in just a matter of two or three years. Phenomenal results, right? But then you start saying, let’s unpack this. What was this thing doing? The first thing that they find out is that a lot of people are like, oh, does this mean that I don’t need teachers anymore? No, you still need the teachers. What it replaces is the rote task of the teacher having to correct spelling mistakes, calling you to the room and saying, hey, you forgot your comma, you forgot to capitalize. Instead, AI takes care of all of that. And now the teacher can sit with you in the free time and say, how did you set up the structure of this essay?

Did you think about this analytically or not? And that’s the first insight that comes from evaluation: it frees up the teacher’s time. Everything that we do in the field ends up adding to the teacher’s time, the nurse’s time, the Anganwadi worker’s time; very few things free up time. So if your AI application can free up the time of frontline health workers, first of all, that’s a winner. The second thing that was really important here is that this was demand-driven: there was a demand by the kids to improve their essays, there was a demand by the teachers to free up their time, but most importantly, there was a demand by the school districts to show progress.

So I think that is kind of a great example of how everything comes together if you think about it ahead of time.

Jeanette Rodrigues

Ladies and gentlemen, a topper of India’s notoriously difficult civil services exam. So take Iqbal more seriously than you normally would.

Iqbal Dhaliwal

Thank you. I thought that was history now.

Jeanette Rodrigues

It’s never history in India, Iqbal. Michael, turning to you, almost equal in accomplishment, having won a Nobel. What risks should multilaterals like the World Bank keep in mind? Or let me rephrase that, actually: is there a risk that multilaterals are moving too slowly relative to the technology?

Michael Kremer

I think there certainly is. As I noted before, there are certain areas where the private sector is going to move, but there are others where it won’t move quickly, and it’s going to be very important for governments, for multilateral development banks, and for philanthropy to move. I think there are a number of approaches to this. One way is to encourage innovation by setting up institutions like innovation funds, particularly, to echo Iqbal, evidence-based innovation funds. I’ll give you one example of something that I’m involved in: Development Innovation Ventures, which was initially set up in the U.S. government but has now been relaunched independently. It has tiered funding, so there are initially very small

grants to pilot new ideas. Then there are somewhat larger grants to rigorously test them, as Iqbal emphasized, and then, for those that are most successful, there are funds to help transition them to scale. Why is that important? Because if we’re thinking about public services, and there are other sectors where this is needed, there’s probably going to be insufficient competition. Private developers are going to come up with innovations, but if they have to sell them to the government they’re facing a monopsonistic buyer, and they’re probably not going to get rich doing that. So some support to generate more entrants in that market, I think, is very important.

It’ll also mean that prices will go down and quality will go up when the government does that. Let me give an example of the potential, because we tend to focus on certain examples time after time. Here is one that I doubt many people are thinking of when they think of AI: traffic safety. We’ve all been exposed to traffic in the past few days. Traffic is a real problem interfering with the urbanization that may drive growth; there are a lot of deaths from traffic, and a lot of citizens around the world have very difficult and painful experiences with traffic enforcement. Well, you can have automated traffic cameras that have the opportunity to improve traffic outcomes but also to improve people’s perception of fairness in government, and India is moving in this direction. Let me mention another thing within traffic safety. Microsoft Research India developed a program called HAMS for driver’s licenses, which uses AI to automatically test whether drivers can actually pass their exams. It’s been introduced, I believe, in 56 sites across India, and hundreds of thousands of people have taken tests this way. Taking a leaf from that book, we followed up: we got information from Ola on ratings, and the number of drivers who were rated as driving unsafely went down 20 to 30 percent where HAMS had been installed. So that’s something that was developed not by Microsoft’s main business but by Microsoft Research. We can create some support for more ideas like that to be developed and rigorously tested, and that can benefit India and the whole world. We are running out of time; this is probably the one place in India where time is really respected, and we have to end on time.

Jeanette Rodrigues

So I had a list of wonderful questions, but if I could now move to a space where we are really giving shorter, quicker answers to the deeply, deeply interesting questions about who’s winning and who’s losing. Michael, if I could start with you, actually. We’ve seen many promising technologies fail to live up to their promise. How should we think when we are evaluating AI interventions? What should the metrics be?

Michael Kremer

Okay. First, model evaluation. AI companies typically do that part. How good is the model output for specific tasks? Forecasting the weather, for example: does it do a good job? Does it match your local language well?

Second, user impact. Here I think there’s a role for initial pilots, akin to a medical efficacy trial: if you put the work into trying it, does it lead to improvements in outcomes for the users? Third, scalability and usage at scale; that’s more like an effectiveness trial in medicine. It’s important to think not just about the tech but also about the human systems: are the teachers actually going to use the product, and how can you get them to use it? And the fourth area is continuous improvement; you want a system that improves the underlying models. So in procurement we might want to think about requiring continuous A/B tests, publicity about what the usage and impact are, and perhaps even requiring open access as part of the procurement package.

Jeanette Rodrigues

Thank you, Michael. Iqbal, I want to flip that question to you: where do you see hype in the promises of AI that you don’t think will play out?

Iqbal Dhaliwal

I think hype is natural because the technology is exciting. It’s a general-purpose technology, it’s evolving so quickly, the marginal cost of deployment for the next user is very low, and it’s multimodal: today you are doing it in text, tomorrow in video, the day after tomorrow in audio. Everybody who has a smartphone has it. So I can understand where the hype is coming from. But what we really need to do is separate the hype from the reality on the ground, and the reality on the ground is that many of these technologies are not having the final impact we are hoping for. My job at J-PAL, sitting at the top, is not to worry about one professor’s or one researcher’s evaluation, but to ask: when I connect all these dots, what am I seeing?

And I’m seeing two patterns. One is about trust in technology, and the second is about the reality of the policy world. Let me elaborate quickly on both. Trust in technology: there are studies which found that even when you give doctors and frontline health care workers access to AI-enabled diagnostic tools, including radiology tools that predict diseases, oftentimes it doesn’t lead to an improvement in results. And when you try to unpack that, even though the technology worked better than the human in the lab, in the field it ends up decreasing performance: not only is its own accuracy lower, it also lowers the efficiency of the doctors, because we have not trained them enough.

And the second thing is the enabling mechanism, the world around us. We just assume that because the technology works, even if it works in the field, the rest of the system will adapt to it. No, you have to adapt the system to the rest of the world. A quick example comes from India, where with one particular state government we tried to improve the collection of value-added taxes, called GST in India. There is a whole worry about bogus firms that are created to game the GST, or value-added tax, system. The machine learning algorithm is able to increase the probability of predicting a bogus firm from 38% to 55% in one shot, at a very, very low cost.

When it came time to scale up this program, the government refused, because, if you think about it, you have taken away the discretion of the human to decide whether to raid Michael’s firm or to raid Iqbal’s firm. That is power. And if you haven’t thought through that, what is the point of the technology?

Jeanette Rodrigues

I won’t terrify anyone in the room by asking why they didn’t want to scale up this tech. But talking about weeding out bad actors and about firm-level decisions, moving on to Ufuk: does the firm-level evidence show productivity gains diffusing evenly across firms?

Ufuk Akcigit

So just going back quickly to the question of the firm. In the earlier model that I highlighted, I think it’s important to understand what’s happening upstream, so that we can then understand where things will be going in the future. And the evidence there, the early signs, is a bit worrying. First, when we look at, for instance, dynamism or market concentration in the U.S., market concentration has been increasing since 1980, and in an accelerating way after 2000. That’s the first set of evidence. The second set of evidence comes from how innovative resources are allocated across firms. And when we look at the inventors who are creating the creative destruction and the technologies, there’s a massive shift towards market incumbents.

And when I say incumbents, I mean firms with more than 1,000 employees. Around 2000, 50% of inventors used to work for incumbent firms; in just 10 years, that shifted to more than 60%. A massive reallocation of innovative resources. And the final piece of evidence, from a study we are going to release next week: we looked at how AI is impacting universities, at the AI-publishing scientists. The top 1% of AI-publishing scientists in academia used to make around $300,000 in 2000; that went up to $390,000 over two decades. Similar people in industry used to make around $550,000; now that is up to $2 million. And there have been two breakpoints: one in 2012 and the other in 2017.

Of course: image processing in 2012, and then the foundational model revolution in 2017. The more worrying part, which brings me back to the foundational model side of things, is that this created a massive out-migration from academia to industry.

Ufuk Akcigit

And after 2017 especially, when compute and infrastructure became so important and we saw the rise of AI, the target or the destination is large incumbent information companies, which again highlights where things are going in terms of concentration. The worrying part is also that when people move from academia to industry, their publication record goes down by 50%, and they patent 600% more after they move, which means we are moving from open science to more protected science. Now, spillovers are extremely important for creative destruction, for the future of innovation. So if we want to keep the foundational layer contestable, I think the fundamental players there will be universities.

And keeping universities healthy is extremely important, but there is very little discussion on this, and we need it before it gets too late. Because once you button the first button wrong, the rest will follow wrong as well. That’s why I think we have to have this frank conversation early in the game; otherwise it might be too late.

Jeanette Rodrigues

Ufuk, what you spoke about boils down to something Iqbal mentioned as well: power. Because power still makes decisions in this world today. So Anu, before I move to the final section of this panel, if I could ask you: if the finance minister of a developing country, let’s say India, comes to you and asks, Anu, how should I think about this, what would you tell her?

Anu Bradford

So today, if you think about how much political power, but also geopolitical power, is shaping our conversations around AI, each country is being pushed towards greater techno-nationalism and techno-protectionism; AI sovereignty has become almost a universal goal. But I would remind even players like the United States and China that nobody in today’s world will be completely sovereign in the AI space. Take just one layer of the AI stack as an example. What is now driving a lot of the global AI race is this idea that we want to do frontier AI, that we want these powerful foundation models.

That means you need a lot of compute. You can’t have a lot of compute unless you have access to high-end semiconductors. The U.S. is well positioned there: it hosts companies like NVIDIA and leads in the design of semiconductors. But who is manufacturing them? We really need to think about the role of Taiwan there. Then the Europeans have ASML in the Netherlands, which leads in the equipment needed for high-end manufacturing. But that is dependent on chemicals, where Japan is leading. And the entire supply chain relies on raw materials from China. So ultimately, all these choke points can in principle be weaponized, but that is not ultimately a sustainable strategy.

Even President Trump had to walk back some of the export controls on China, because the Chinese were saying, okay, then the raw materials are not coming your way. So there are potential ways to weaponize these interdependencies that ultimately make us all poorer. So as the finance minister of India, when approaching other middle powers, the great powers,

Jeanette Rodrigues

Easier said than done. Our final section is, of course, the rapid-fire round; we all love this in this room. In one sentence, if I could ask all of you, and Johannes, you’re not getting away easily, you’re going to answer this as well. So, in one sentence: we’re sitting in New Delhi in 2035. Could you predict one development outcome that will have dramatically improved with the use of AI, and one risk we’ll regret not addressing now? I guess you already know my second answer.

Iqbal Dhaliwal

I think the concentration, the future of market concentration is something that we should be concerned about and we might regret not having discussed this sufficiently in 10 years. On what will change in a positive direction, clearly health care and education, I think. It’s a no -brainer.

Jeanette Rodrigues

Anu?

Anu Bradford

So first of all, it’s so inspiring to hear all the use-case examples, whether we talk about traffic or agriculture or education, because I often talk about the risks and the downsides, so it’s a really good reminder. I’m personally very excited especially about what happens in the education space, but also in the health space. In terms of the risks, one thing we are not paying attention to, and what I would even call a systemic risk, is this: many worry about AI getting almost too smart, but I am more worried about us getting dumber as a humanity. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models.

And as an educator, I think about how I will teach my students to use generative AI to enhance, not substitute for, their capabilities. We would make a tremendous mistake if we just forwent that hard work, that beautiful moment of thinking through hard problems, of creating and investing in our own capabilities. All of that just cannot be outsourced, because otherwise we won’t even know what kinds of questions we should be asking the AI going forward.

Jeanette Rodrigues

Michael.

Michael Kremer

I agree that there is huge potential in health and education. I think we’ll see big improvements there, but the risk is that the public sector won’t adopt these, and therefore the poor won’t have access to them. That’s because, as Iqbal indicated, government systems and government workers may not adapt to use them. There are also risks of copycat regulation that is over-focused on certain problems other countries may be worrying about but that might not be relevant for emerging economies. And the final risk is that procurement systems are set up in such a way that we don’t get sufficient competition, we get lock-in, and then we just don’t wind up with good quality.

Jeanette Rodrigues

Thank you, Michael. The buzzer’s gone, but I’ll take a risk and quickly run through the others.

Ufuk Akcigit

Yes. I think I am much more optimistic about the government actually adopting this. Whether it is your call getting answered very quickly when you dial 100, the PCR van being at your house much faster, or hospitals being able to link your health records, I think government-sector productivity is going to improve in leaps. The biggest risk, I think, is definitely the labor market. If there were a dial where I could slow down adoption and give the labor market time to catch up, I would use it; that’s my biggest worry. You talked about entry-level jobs. An entry-level coding job might be an entry-level job in the United States.

It’s the aspirational job that created the Gurgaons and Noidas and Mohalis of this country. And those people are going to be running out of jobs very quickly. And in the labor market, whether it is ESI, Provident Fund, or Gratuity, we are piling on and making it harder and harder to hire labor, when, on the other hand, capital is not taxed. We are giving people incentives to use AI, and we are taxing them, through provident fund and labor market regulations, to hire labor. And that, for me, is the biggest risk, actually.

Johannes Zutt

So I think that, for the first time in human history, we may actually have the tools available to target poverty-reduction and poverty-elimination initiatives at individuals. And that could be tremendously transformative. But at the same time, I do worry that we will not get the governance right, or won’t be able to make that governance sufficiently robust to prevent abuses.

Jeanette Rodrigues

Thank you very much to all of our panelists, and to you for your time and attention once again. I had the very rare fortune of being able to peek at Michael’s screen while he was speaking, and I saw all the messy human notes. Our panelists are definitely not outsourcing their thinking anytime soon, and thank God for that. Thank you, ladies and gentlemen.

Related ResourcesKnowledge base sources related to the discussion topics (28)
Factual NotesClaims verified against the Diplo knowledge base (6)
Confirmedhigh

“This was the fourth AI summit (the first having been held in the UK) and participants repeatedly described the first session as “full of fear” about AI.”

The transcript excerpt S12 explicitly states that this is the fourth AI summit, the first was in the UK, and every previous session was described as “full of fear.”

Confirmedlow

“Jeanette Rodrigues is the moderator/host of the panel discussion.”

Source S1 lists Jeanette Rodrigues as the moderator/host of the discussion.

Confirmedmedium

“Many low‑income countries face unreliable electricity, weak broadband, low literacy, and rely on very simple devices.”

Infrastructure challenges such as unreliable electricity and limited internet access in low‑income settings are documented in S93 and S92.

Additional Contextmedium

“The World Bank helps governments create AI sandbox environments for experimentation.”

AI sandboxes are discussed as a mechanism for responsible innovation in developing countries in S90, though the source does not specifically name the World Bank.

Confirmedmedium

“AI‑enabled services in education and health can fill skill gaps for teachers and frontline workers.”

S24 describes AI decision‑support tools for frontline health workers and assessment tools for teachers, confirming the claim.

Additional Contextmedium

“India’s digital identity programme and farmer‑focused phone applications illustrate how small AI can be deployed at scale when supported by government standards and private‑sector innovation.”

S95 references India’s digital public infrastructure—including identity, payments, and UPI—showing government‑led platforms that enable large‑scale digital services, providing context for the claim.

External Sources (97)
S1
How AI Drives Innovation and Economic Growth — -Jeanette Rodrigues: Moderator/Host of the panel discussion This comprehensive discussion at the Bharat Mandapam, moder…
S2
Extreme poverty and human rights * — 16 Jeanette Rodrigues, ‘India ID program wins World Bank praise despite ‘Big Brother’ fears’, Bloomberg, 16 March 201…
S3
DIGITAL DIVIDENDS — – Cantijoch, Marta, Silvia Galandini, and Rachel Gobson. 2014. ‘Civic Websites and Community Engagement: A Mixed Metho…
S4
Rights and Permissions — – Aboud, Frances E., and Kamal Hossain. 2011. ‘The Impact of Preprimary School on Primary School Achievement in Banglade…
S5
How AI Drives Innovation and Economic Growth — – Johannes Zutt- Michael Kremer – Michael Kremer- Iqbal Dhaliwal
S6
AI Meets Agriculture Building Food Security and Climate Resilien — -Johannes Zutt- Regional Vice President, World Bank
S7
AI Meets Agriculture Building Food Security and Climate Resilien — This discussion focused on using artificial intelligence to enhance food security and climate resilience in agriculture,…
S8
How AI Drives Innovation and Economic Growth — -Johannes Zutt: World Bank representative (referred to as “John” in the discussion)
S9
New Development Actors for the 21st Century / DAVOS 2025 — – Iqbal Dhaliwal – Global Director of J-PAL at MIT
S10
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — – Iqbal Dhaliwal- Ronnie Chatterji – Iqbal Dhaliwal- Sanjiv Bikhchandani
S11
How AI Drives Innovation and Economic Growth — – Johannes Zutt- Ufuk Akcigit- Anu Bradford – Ufuk Akcigit- Johannes Zutt
S12
How AI Drives Innovation and Economic Growth — – Ufuk Akcigit- Johannes Zutt
S13
Keynotes — Michael O’Flaherty: EuroDIG, dear friends. Last Saturday, we watched as the newly elected Pope explained why he had ch…
S14
How AI Drives Innovation and Economic Growth — – Johannes Zutt- Ufuk Akcigit- Anu Bradford
S15
How AI Drives Innovation and Economic Growth — Evidence:Examples include bespoke applications that help farmers investigate issues using their phones to analyze proble…
S16
Artificial Intelligence &amp; Emerging Tech — The analysis explores multiple aspects of the relationship between artificial intelligence (AI) and developing countries…
S17
How AI Drives Innovation and Economic Growth — Zutt advocates for a focus on ‘small AI’ rather than large-scale AI solutions, emphasizing practical applications that c…
S18
9821st meeting — The Secretary-General emphasizes the importance of maintaining human control over AI systems. This is crucial to ensure …
S19
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S20
WS #283 AI Agents: Ensuring Responsible Deployment — As the session reached its time limit (with Prendergast noting the final 10 minutes), the discussion revealed both the p…
S21
AI for agriculture Scaling Intelegence for food and climate resiliance — And there’s still a lot of mechanization which is absent completely. It is all still very much done using traditional me…
S22
How Small AI Solutions Are Creating Big Social Change — Critical for transition of emerging economies to advanced economies, requires ecosystem development and business process…
S23
Education meets AI — The speakers also highlighted the importance of integrating AI into the education system. They argued that such integrat…
S24
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion — And productivity, which will translate essentially also in them being able to have more income and getting out of povert…
S25
How AI Is Transforming Indias Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S26
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Summary:All speakers agree that while some level of AI governance is necessary, excessive or premature regulation can st…
S27
Responsible AI for Children Safe Playful and Empowering Learning — “Education systems are facing massive learning challenges for which governments are seeking equitable, scalable and evid…
S28
AI &amp; Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Furthermore, it explores the potential of AI evaluation in ensuring fairness in education while cautioning about the nee…
S29
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S30
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S31
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S32
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — The overall tone was optimistic and forward-looking. Panelists expressed excitement about AI’s capabilities and potentia…
S33
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Policy frameworks and public vs private sector dynamics
S34
Responsible AI for Shared Prosperity — Disagreement level:Very low disagreement level. All speakers aligned on core issues: the need for multilingual AI, the i…
S35
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Evidence-Based Policymaking and Research Integration Legal and regulatory | Economic The role of policy researchers is…
S36
How AI Drives Innovation and Economic Growth — I think there certainly is. As I noted before, there are certain areas where the private sector is going to move, but th…
S37
AI as critical infrastructure for continuity in public services — These key comments fundamentally shifted the discussion from a technical and regulatory focus to a human-centered perspe…
S38
WS #97 Interoperability of AI Governance: Scope and Mechanism — Yik Chan Chin: Thank you, Olga. So, I speak on behalf of the PNAI because I’m the co-leader of the subgroup on the inte…
S39
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration A…
S40
Safe and Responsible AI at Scale Practical Pathways — The panel revealed that making data AI-ready is fundamentally a governance challenge rather than merely technical. The a…
S41
Discussion Report: Sovereign AI in Defence and National Security — Faisal responds to concerns about competing global AI policies by arguing that the sovereign AI framework is adaptable t…
S42
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S43
AI Algorithms and the Future of Global Diplomacy — I just want to kind of contextualize the sovereignty thing as well.
S44
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S45
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — Overall, the analysis highlights the contrasting perspectives and approaches to regulation, specifically the comparison …
S46
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — A rights-based approach is crucial in designing regulation policies. It is essential to ensure that the rights of childr…
S47
How Trust and Safety Drive Innovation and Sustainable Growth — Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was unexpected cons…
S48
WSIS at 20: successes, failures and future expectations | IGF 2023 Open Forum #100 — The analysis recognises that public investment is vital to foster innovation, particularly in areas where the private se…
S49
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S50
How AI Drives Innovation and Economic Growth — Private firms develop profitable applications, but public goods applications need government and multilateral support
S51
Hard power of AI — In the context of AI governance, the World Economic Forum emphasizes the significance of looking beyond regulation alone…
S52
Rethinking Africa’s digital trade: Entrepreneurship, innovation, &amp; value creation in the age of Generative AI (depHub) — While recognizing the positive impacts of AI, Shamira Ahmed cautioned against the risks that might contribute to inequal…
S53
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S54
AI for Social Empowerment_ Driving Change and Inclusion — She argues that immediate policy action is required across competition, tax, labour and social protection to mitigate AI…
S55
AI for Social Empowerment_ Driving Change and Inclusion — Effective governance of AI’s labor market effects requires robust institutional infrastructure including regulatory bodi…
S56
Labour market stability persists despite the rise of AI — Public fears of AI rapidly displacing workershave not yet materialisedin the US labour market. A new study finds that th…
S57
How to make AI governance fit for purpose? — Focus needed on job disruption mitigation through training, skilling, and upskilling programs Legal and regulatory | Ec…
S58
S59
How AI Drives Innovation and Economic Growth — Zutt advocates for a focus on ‘small AI’ rather than large-scale AI solutions, emphasizing practical applications that c…
S60
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S61
AI Meets Agriculture Building Food Security and Climate Resilien — Thank you for that additional question. I mean, obviously, India is in a great position to lead the development of AI, p…
S63
How AI Drives Innovation and Economic Growth — Speakers: Johannes Zutt, Michael Kremer, Iqbal Singh Dhaliwal
S64
How AI Is Transforming India’s Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S65
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — -Policy and Regulatory Framework Challenges: Speakers identified the need for better coordination between central and st…
S66
How AI Drives Innovation and Economic Growth — “First, model evaluation.”[124]. “Second, user impact.”[134]. “Second… scalability and usage at scale that’s more like…
S67
How nonprofits are using AI-based innovations to scale their impact — Four-level evaluation framework includes user experience, user behavior, user evaluation, and impact evaluation
S68
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S69
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S70
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S71
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S72
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S73
Agenda item 6: other matters/OEWG 2025 — The overall tone was constructive and diplomatic, with most delegations expressing willingness to compromise and find co…
S74
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S75
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — High level of consensus on implementation approach and timeline, with moderate consensus on regulatory strategies. The a…
S76
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S77
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — This shifted the conversation’s tone from problem-solving to crisis response, and subsequent speakers began incorporatin…
S78
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion maintained a collaborative and constructive tone throughout, characterized by academic rigor combined wit…
S79
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S80
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S81
Open Forum #47 Demystifying WSIS+20 — Focus on concrete, evidence-based examples of what works rather than abstract declarations
S82
Opening of the session — Focusing on practical, action-oriented measures that can benefit both developed and developing countries
S83
Any other business /Adoption of the report/ Closure of the session — Significant steps taken towards consensus The country had hoped for a different ending to the session but acknowledges …
S84
Keynote-Nikesh Arora — Overall Tone: The tone begins optimistically, celebrating AI’s rapid progress and potential, then shifts to a more cautio…
S85
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S86
Keynote-Dario Amodei — Overall Tone: The tone is consistently optimistic yet measured throughout. Amodei maintains an enthusiastic and respectfu…
S87
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — There is a lot of fear-mongering going on as well
S88
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — Ciyong Zou: Thank you. Thank you very much, moderator. Distinguished representatives, ladies and gentlemen, good afterno…
S89
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — In the last 20 years, we all got access to supercomputers in our pockets, billions of devices that became fundamental to…
S90
AI sandboxes pave path for responsible innovation in developing countries — At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts from around the world gathered to examine how AI sandbo…
S91
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — He positioned sandboxes as “one of the tools that brings the capacity of dialogue, particularly when the discussions are…
S92
WSIS High-Level Dialogue: Multistakeholder Partnerships Driving Digital Transformation — Some low-income countries have limited internet access
S93
Internet Governance Forum 2024 — To this end, speakers highlighted significant infrastructure challenges facing many African countries, including unreliabl…
S94
Conversational AI in low income & resource settings | IGF 2023 — Additionally, the potential of AI and chatbots in low-resource settings is acknowledged. The analysis suggests that thes…
S95
Building the Workforce: AI for Viksit Bharat 2047 — We know we have 5.8 million professionals. For example, the Tata AI Saki Immersion Programme is empowering rural women …
S96
AI for agriculture Scaling Intelligence for food and climate resilience — to be here today. So we’re on the cusp of a major revolution in how support to farmers and agriculture is happening. I a…
S97
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh: Thank you, thank you Inma. I must straightaway mention that one key value that we get as being part of th…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Johannes Zutt
3 arguments · 141 words per minute · 1450 words · 612 seconds
Argument 1
AI can be a game‑changer for emerging markets, offering productivity gains in agriculture, health, and finance, yet faces basic constraints such as unreliable electricity, weak internet, and low literacy – (Johannes Zutt)
EXPLANATION
Johannes argues that AI holds transformative potential for emerging economies by improving productivity in key sectors like agriculture, health, and finance. However, he cautions that without reliable electricity, robust internet infrastructure, and basic literacy, these benefits cannot be fully realized.
EVIDENCE
He notes that AI can be a game-changer for all countries, especially emerging markets, providing opportunities to leapfrog development challenges and enhance growth and productivity [7-10]. He then lists fundamental constraints: unreliable electricity [26], weak internet backbone [28], low literacy and numeracy skills [29], and reliance on very basic devices rather than smartphones [30-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Constraints such as unreliable electricity, weak internet backbone, and low literacy are documented as major barriers to AI adoption in low-income settings [S1][S15][S16].
MAJOR DISCUSSION POINT
Infrastructure constraints limiting AI impact
AGREED WITH
Ufuk Akcigit, Anu Bradford, Jeanette Rodrigues
Argument 2
The World Bank promotes “small AI”: affordable, locally relevant solutions that operate with limited connectivity, requiring joint effort from governments and private innovators – (Johannes Zutt)
EXPLANATION
Johannes describes the World Bank’s focus on “small AI,” which are practical, low‑cost applications designed for environments with limited data, connectivity, and skills. Successful deployment requires collaboration between governments, who provide the necessary infrastructure, and private innovators who develop the applications.
EVIDENCE
He defines small AI as practical, affordable, locally relevant AI that works where connectivity, data, skills, and infrastructure are limited [34-36]. He cites examples in India, where the Bank works with multiple states and private sector investors to develop such tools, emphasizing the need for both public-facing standards and private-sector innovation [39-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of “small AI” – practical, low-cost applications designed for limited connectivity, data, and skill environments – is described in several studies of AI for development [S15][S17][S22].
MAJOR DISCUSSION POINT
Public‑private collaboration for small AI
DISAGREED WITH
Ufuk Akcigit
Argument 3
Robust governance is essential to prevent misuse of AI and to ensure responsible deployment, especially in high‑impact domains – (Johannes Zutt)
EXPLANATION
Johannes stresses that alongside the opportunities AI presents, there are significant governance challenges that must be addressed to avoid harmful outcomes. Effective regulatory safeguards are needed, particularly for applications with large societal impact.
EVIDENCE
He acknowledges that AI creates challenges such as job losses and infrastructure gaps, and adds that governance and regulatory safeguards are crucial considerations [21-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent AI governance reports stress the need for strong safeguards, human oversight, and rights-protective frameworks for high-impact AI systems [S18][S19][S20].
MAJOR DISCUSSION POINT
Need for strong AI governance
AGREED WITH
Anu Bradford, Jeanette Rodrigues, Michael Kremer
DISAGREED WITH
Iqbal Dhaliwal
Michael Kremer
3 arguments · 160 words per minute · 1592 words · 593 seconds
Argument 1
Government‑backed AI weather forecasts can dramatically improve farmers’ planting decisions and yields, illustrating the need for public investment in AI public goods – (Michael Kremer)
EXPLANATION
Michael highlights AI‑driven weather forecasting as a public good that can help farmers make better planting decisions, leading to higher yields. He argues that governments, possibly supported by multilateral development banks, should invest in producing and disseminating such forecasts.
EVIDENCE
He cites India’s AI weather forecasts reaching 38 million farmers, noting that the forecasts correctly predicted an early monsoon in Kerala and helped farmers adjust planting and seed choices, with survey evidence showing increased transplanting and hybrid seed use [133-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven weather forecasting for millions of farmers is highlighted as a public-good application that improves planting decisions and yields [S15][S21].
MAJOR DISCUSSION POINT
Public investment in AI for agriculture
AGREED WITH
Johannes Zutt, Iqbal Dhaliwal
Argument 2
Multilateral development banks should create evidence‑based innovation funds that pilot, rigorously test, and scale AI applications to overcome market failures and accelerate adoption – (Michael Kremer)
EXPLANATION
Michael proposes that institutions like the World Bank set up tiered, evidence‑based innovation funds to support AI pilots, rigorous testing, and scaling. Such funds would address market failures where private firms lack incentives to develop public‑good AI solutions.
EVIDENCE
He describes Development Innovation Ventures, which provides small grants for pilots, larger grants for rigorous testing, and further funding for scaling successful projects, emphasizing the need for evidence-based approaches [266-271].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Development Innovation Ventures model, with tiered, evidence-based grants for pilots, testing, and scaling, is presented as a template for such funds [S12][S15].
MAJOR DISCUSSION POINT
Evidence‑based AI funding mechanisms
DISAGREED WITH
Johannes Zutt
Argument 3
AI projects should be evaluated like medical trials: assess model accuracy, user impact, scalability, and establish mechanisms for continuous improvement and transparent reporting – (Michael Kremer)
EXPLANATION
Michael suggests a four‑stage evaluation framework for AI interventions, analogous to clinical trials: model performance, user impact, scalability/effectiveness, and continuous improvement with transparent reporting. This approach ensures that AI tools deliver real benefits at scale.
EVIDENCE
He outlines the steps: model evaluation for task performance, assessing user impact through pilot-like trials, testing scalability akin to effectiveness trials, and requiring continuous improvement and open reporting in procurement contracts [284-292].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A four-stage, trial-like evaluation framework for AI interventions (model performance, user impact, scalability, continuous improvement) is outlined in the evidence-based evaluation guidelines [S12].
MAJOR DISCUSSION POINT
Rigorous evaluation of AI interventions
Ufuk Akcigit
2 arguments · 163 words per minute · 1041 words · 382 seconds
Argument 1
Realizing AI’s benefits requires fixing fundamental business‑environment issues (e.g., firm size determinants, entrepreneurship climate) in developing economies – (Ufuk Akcigit)
EXPLANATION
Ufuk argues that AI alone cannot spur entrepreneurship in emerging economies unless underlying business‑environment problems—such as the influence of family size on firm size—are addressed. A supportive environment is essential for AI to translate into genuine dynamism.
EVIDENCE
He questions why, before AI, firm size in emerging economies was determined by family size or number of male children, and stresses that without fixing the business environment, AI will not magically create entrepreneurship [111-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI-driven development stress that ecosystem development and business-process reforms are prerequisites for AI impact in emerging economies [S22].
MAJOR DISCUSSION POINT
Business environment as prerequisite for AI benefits
AGREED WITH
Johannes Zutt, Anu Bradford, Jeanette Rodrigues
Argument 2
The foundational AI layer has high entry barriers (compute, data, talent) leading to market concentration, while the application layer remains low‑barrier and more conducive to creative destruction – (Ufuk Akcigit)
EXPLANATION
Ufuk distinguishes between the foundation layer of AI, which requires heavy compute, data, and talent and thus favors concentration, and the application layer, where entry barriers are low and creative destruction can thrive. He warns that concentration at the foundational level may spill over to downstream applications.
EVIDENCE
He notes that the foundation layer is compute-heavy, data-heavy, and talent-heavy, making it prone to concentration, whereas the application layer has low entry barriers and encourages creative destruction [94-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research distinguishes a compute-heavy, data-intensive foundation layer that tends toward concentration from a low-barrier application layer that enables creative destruction [S17][S1].
MAJOR DISCUSSION POINT
Layered AI structure and concentration risk
AGREED WITH
Iqbal Dhaliwal, Michael Kremer
DISAGREED WITH
Johannes Zutt
Iqbal Dhaliwal
3 arguments · 183 words per minute · 1151 words · 375 seconds
Argument 1
Targeted “small AI” tools can free teachers’ time and enhance education outcomes when they are demand‑driven and integrated into existing workflows – (Iqbal Dhaliwal)
EXPLANATION
Iqbal explains that small AI applications can automate routine tasks such as spelling correction, allowing teachers to focus on higher‑order instruction. When these tools are driven by demand from students, teachers, and districts, they can improve educational performance.
EVIDENCE
He describes an AI system that takes over routine correction tasks, freeing teachers to engage with students on essay structure, and notes that the demand came from students, teachers, and school districts seeking progress, leading to measurable improvements [236-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies of AI in education report demand-driven tools such as automated spelling correction that free teachers for higher-order instruction [S23][S15].
MAJOR DISCUSSION POINT
Demand‑driven AI to augment teaching
AGREED WITH
Johannes Zutt
Argument 2
AI is accelerating market concentration, shifting innovative resources to large incumbents and prompting a talent drain from academia to industry, raising concerns about unequal benefit distribution – (Iqbal Dhaliwal)
EXPLANATION
Iqbal points out that AI is concentrating innovation within large incumbent firms and drawing talent away from universities, which may reduce competition and widen inequality. He presents evidence of increasing market concentration and higher earnings for AI scientists in industry.
EVIDENCE
He cites rising market concentration in the U.S. since 1980, a shift of innovative resources toward firms with over 1,000 employees, and data showing AI scientists’ earnings rising from $300k to $390k in academia and from $550k to $2M in industry, along with a talent migration from academia to industry [324-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Empirical work documents rising market concentration in AI and a migration of talent from academia to industry, especially toward large incumbent firms [S1][S17].
MAJOR DISCUSSION POINT
Concentration and talent migration
AGREED WITH
Ufuk Akcigit, Michael Kremer
Argument 3
Trust in technology and system adaptation are critical; even highly accurate AI tools can fail to deliver benefits if users are not trained or institutional processes are not adjusted – (Iqbal Dhaliwal)
EXPLANATION
Iqbal stresses that the effectiveness of AI depends on user trust and the surrounding institutional context. Without proper training and system redesign, even superior AI diagnostics may not improve outcomes and can even reduce efficiency.
EVIDENCE
He references studies where AI diagnostic tools performed better than humans in labs but did not improve field outcomes due to insufficient training of health workers [309-315], and an example where an AI system for detecting bogus firms in India was not scaled because it removed human discretion, highlighting the need for system adaptation [316-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Implementation studies highlight that user trust, adequate training, and alignment of institutional processes are essential for AI tools to achieve intended outcomes [S24][S18].
MAJOR DISCUSSION POINT
Importance of trust and system alignment
DISAGREED WITH
Johannes Zutt
Anu Bradford
2 arguments · 199 words per minute · 1374 words · 412 seconds
Argument 1
Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford)
EXPLANATION
Anu argues that AI regulation must protect fundamental rights while being flexible enough for local contexts. She suggests learning from the EU’s rights‑based approach but adapting it to national priorities rather than adopting it wholesale.
EVIDENCE
She describes the EU’s rights-driven regulation that protects individual rights, democratic structures, and seeks broader benefit distribution, and recommends that India take lessons from this approach while customizing it to its own needs [172-176].
MAJOR DISCUSSION POINT
Rights‑based yet locally adapted AI regulation
AGREED WITH
Johannes Zutt, Jeanette Rodrigues, Michael Kremer
DISAGREED WITH
Jeanette Rodrigues
Argument 2
The Global South must develop its own AI regulatory sovereignty, drawing lessons from the EU’s rights‑based approach but customizing rules to national contexts – (Anu Bradford)
EXPLANATION
Anu emphasizes the need for the Global South to assert AI regulatory sovereignty, creating rules that suit their economies and societies. While acknowledging the difficulty of regulation, she advocates for tailored frameworks rather than reliance on external models.
EVIDENCE
She states that the Global South has incentives for AI sovereignty, including regulatory sovereignty, and that they should design rules fitting their economies and public interests, while learning from jurisdictions like the EU [167-172].
MAJOR DISCUSSION POINT
AI regulatory sovereignty for the Global South
Jeanette Rodrigues
2 arguments · 174 words per minute · 1039 words · 356 seconds
Argument 1
Policymakers must balance hope and fear, ensuring AI narrows rather than widens development gaps – (Jeanette Rodrigues)
EXPLANATION
Jeanette calls for policymakers to navigate between the optimism surrounding AI’s potential and the fears of job loss and inequality. She stresses that policies should aim to ensure AI reduces, not widens, development disparities.
EVIDENCE
She notes that AI innovation does not diffuse equally, and the panel’s purpose is to explore what determines whether AI narrows or widens the development gap, emphasizing the need to balance hope and concern [61-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Literature notes the natural hype surrounding AI and stresses the need for balanced policy that mitigates risks while leveraging benefits, to avoid widening inequality [S15][S24].
MAJOR DISCUSSION POINT
Balancing optimism and risk in AI policy
AGREED WITH
Johannes Zutt, Ufuk Akcigit, Anu Bradford
Argument 2
There is anxiety that AI rule‑making is dominated by the US and China, prompting questions about how developing countries can assert sovereign policy control – (Jeanette Rodrigues)
EXPLANATION
Jeanette raises concerns that large language models are concentrated in the United States and China, which may allow these powers to set global AI rules. She asks who will set AI regulations for the Global South and whether sovereign policy is possible.
EVIDENCE
She points out that large language models are concentrated in the US and China, mentions the EU as another player, and asks who sets AI rules for the Global South and if sovereignty is possible [162-166].
MAJOR DISCUSSION POINT
AI governance dominance and sovereignty concerns
AGREED WITH
Johannes Zutt, Anu Bradford, Michael Kremer
DISAGREED WITH
Anu Bradford
Agreements
Agreement Points
AI can be a transformative game‑changer for emerging markets but requires basic infrastructure such as reliable electricity, strong internet, and basic literacy to realise its potential.
Speakers: Johannes Zutt, Ufuk Akcigit, Anu Bradford, Jeanette Rodrigues
AI can be a game‑changer for emerging markets, offering productivity gains in agriculture, health, and finance, yet faces basic constraints such as unreliable electricity, weak internet, and low literacy – (Johannes Zutt) Realizing AI’s benefits requires fixing fundamental business‑environment issues (e.g., firm size determinants, entrepreneurship climate) in developing economies – (Ufuk Akcigit) Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford) Policymakers must balance hope and fear, ensuring AI narrows rather than widens development gaps – (Jeanette Rodrigues)
All four speakers stress that while AI holds great promise for emerging economies, its impact will be limited unless foundational infrastructure and a supportive business environment are put in place, and policies are crafted to balance optimism with realistic constraints. [7-10][26-31][84-85][111-115][167-176][61-71]
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors policy emphasis on foundational digital infrastructure for AI development in low-income settings, as highlighted in discussions on computing infrastructure needs and multilateral support for emerging economies [S34][S36].
Promotion of “small AI” – affordable, locally relevant AI solutions that operate with limited connectivity and data – is essential for developing contexts.
Speakers: Johannes Zutt, Iqbal Dhaliwal
For us in the World Bank group, we’ve been very, very focused on focused recently on basically small AI. Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited – (Johannes Zutt) Targeted “small AI” tools can free teachers’ time and enhance education outcomes when they are demand‑driven and integrated into existing workflows – (Iqbal Dhaliwal)
Both speakers advocate for low-cost, context-specific AI applications that can function despite weak connectivity or limited data, highlighting education and agriculture as key sectors. [34-36][236-247]
POLICY CONTEXT (KNOWLEDGE BASE)
The call for affordable, locally-tailored AI aligns with calls for multilingual, low-resource AI solutions and a human-centered implementation focus in development forums [S34][S37].
Strong governance and regulatory frameworks are crucial to ensure responsible AI deployment and to mitigate risks such as job losses, misuse, and concentration of power.
Speakers: Johannes Zutt, Anu Bradford, Jeanette Rodrigues, Michael Kremer
Robust governance is essential to prevent misuse of AI and to ensure responsible deployment, especially in high‑impact domains – (Johannes Zutt) Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford) There is anxiety that AI rule‑making is dominated by the US and China, prompting questions about how developing countries can assert sovereign policy control – (Jeanette Rodrigues) I think there is huge potential in health and education, but the risk is that the public sector won’t adopt these, and that procurement systems may lock‑in and limit competition – (Michael Kremer)
All four emphasize the need for robust, rights-based, and locally adapted governance structures to manage AI’s societal impacts, prevent concentration, and ensure public-sector uptake. [21-33][167-176][162-166][397-398]
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy papers stress robust AI governance as a primary safeguard, citing governance challenges, labor-impact regulation, and broader AI governance obstacles [S40][S54][S55][S39].
Public sector investment and evidence‑based innovation funds are needed to develop AI public goods (e.g., weather forecasts, health and education tools) that the private sector will not provide on its own.
Speakers: Michael Kremer, Johannes Zutt, Iqbal Dhaliwal
Government‑backed AI weather forecasts can dramatically improve farmers’ planting decisions and yields, illustrating the need for public investment in AI public goods – (Michael Kremer) We’re doing a little bit on bigger AI… Small AI will also be very, very important for uptake – (Johannes Zutt) Free up time. So if your AI application can free up the time of the health frontline workers, first of all, that’s a winner – (Iqbal Dhaliwal)
The speakers agree that governments and multilateral institutions must fund and pilot AI solutions that serve public-good functions, such as weather forecasting for farmers or tools that free health and education workers, because market incentives are insufficient. [133-155][34-36][236-247]
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence-based AI policy roadmaps and public-investment analyses underscore the need for government-funded AI public-goods programmes where market incentives fall short [S35][S48][S50].
AI development is leading to increasing market concentration and talent migration toward large incumbents, raising concerns about unequal benefit distribution and the need to keep foundational AI layers contestable.
Speakers: Ufuk Akcigit, Iqbal Dhaliwal, Michael Kremer
The foundational AI layer has high entry barriers (compute, data, talent) leading to market concentration, while the application layer remains low‑barrier and more conducive to creative destruction – (Ufuk Akcigit) AI is accelerating market concentration, shifting innovative resources to large incumbents and prompting a talent drain from academia to industry, raising concerns about unequal benefit distribution – (Iqbal Dhaliwal) There is a risk that the public sector won’t adopt these, and that procurement systems may lock‑in and limit competition – (Michael Kremer)
All three highlight that AI’s compute-intensive foundation favors a few large players, causing concentration of innovation and talent, and warn that without careful policy (e.g., competition-friendly procurement) the benefits may be unevenly shared. [94-98][324-340][397-398]
POLICY CONTEXT (KNOWLEDGE BASE)
Recent assessments flag rising market concentration, power concentration, and wealth inequality in AI as key risks requiring contestable foundational layers [S58][S39][S52].
Similar Viewpoints
Both stress the importance of public‑sector‑led, evidence‑based pilots and funding mechanisms to develop and scale small, locally relevant AI solutions, recognizing that private firms alone will not fill the public‑good gap. [34-36][266-271]
Speakers: Johannes Zutt, Michael Kremer
For us in the World Bank group, we’ve been very, very focused on focused recently on basically small AI… – (Johannes Zutt) Multilateral development banks should create evidence‑based innovation funds that pilot, rigorously test, and scale AI applications to overcome market failures and accelerate adoption – (Michael Kremer)
Both highlight that AI is creating concentration at the foundational level, concentrating innovation and talent in large incumbents, which threatens broader inclusive growth. [94-98][324-340]
Speakers: Ufuk Akcigit, Iqbal Dhaliwal
The foundational AI layer has high entry barriers (compute, data, talent) leading to market concentration… – (Ufuk Akcigit) AI is accelerating market concentration, shifting innovative resources to large incumbents and prompting a talent drain… – (Iqbal Dhaliwal)
Both argue that the Global South needs to assert AI regulatory sovereignty and craft rights‑based, locally adapted frameworks rather than simply follow US/China or EU models. [167-172][162-166]
Speakers: Anu Bradford, Jeanette Rodrigues
The Global South must develop its own AI regulatory sovereignty, drawing lessons from the EU’s rights‑based approach but customizing rules to national contexts – (Anu Bradford) There is anxiety that AI rule‑making is dominated by the US and China, prompting questions about how developing countries can assert sovereign policy control – (Jeanette Rodrigues)
Unexpected Consensus
Recognition by both a World Bank official and a field practitioner that system‑level trust, user training, and institutional adaptation are as critical as the technology itself for AI success.
Speakers: Johannes Zutt, Iqbal Dhaliwal
Robust governance is essential to prevent misuse of AI and to ensure responsible deployment, especially in high‑impact domains – (Johannes Zutt) Trust in technology and system adaptation are critical; even highly accurate AI tools can fail to deliver benefits if users are not trained or institutional processes are not adjusted – (Iqbal Dhaliwal)
While Johannes focuses on governance from a high-level perspective, Iqbal emphasizes on-ground trust and training. Their convergence on the necessity of aligning institutions and users with AI tools is unexpected given their different roles. [21-33][309-322]
POLICY CONTEXT (KNOWLEDGE BASE)
Human-centered AI implementation studies highlight system-level trust, training, and institutional adaptation as pivotal alongside technical deployment [S37][S44][S40].
Overall Assessment

The panel shows strong convergence on four main themes: (1) AI’s transformative potential is contingent on basic infrastructure and a supportive business environment; (2) “small AI” solutions that are affordable and locally relevant are vital; (3) robust, rights‑based governance and regulatory sovereignty are needed to manage risks and prevent concentration; (4) public‑sector investment and evidence‑based funding mechanisms are essential to deliver AI public goods and avoid lock‑in. Concerns about market concentration and talent migration are also widely shared.

High consensus across speakers, indicating a shared understanding that policy, infrastructure, and governance must accompany technological advances to ensure AI narrows rather than widens development gaps. This consensus suggests that future initiatives should prioritize coordinated public‑private funding, rights‑focused regulation, and capacity‑building to harness AI for inclusive development.

Differences
Different Viewpoints
Foundational AI layer concentration vs focus on small AI applications
Speakers: Ufuk Akcigit, Johannes Zutt
The foundational AI layer has high entry barriers (compute, data, talent) leading to market concentration, while the application layer remains low‑barrier and more conducive to creative destruction – (Ufuk Akcigit) The World Bank promotes “small AI”: affordable, locally relevant solutions that operate with limited connectivity, requiring joint effort from governments and private innovators – (Johannes Zutt)
Ufuk warns that the compute-, data- and talent-intensive foundation layer creates concentration that can spill over to downstream applications, suggesting the need to address these structural barriers [94-98]. Johannes, by contrast, concentrates on deploying “small AI” that works despite limited connectivity and infrastructure, emphasizing public-private collaboration without addressing the foundational layer’s concentration risk [34-36][39-52].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between concentrating power in large AI models and promoting low-resource, small-AI solutions is reflected in market-concentration analyses and calls for affordable AI for developing contexts [S58][S34].
Approach to AI regulatory sovereignty and feasibility
Speakers: Anu Bradford, Jeanette Rodrigues
Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford) There is anxiety that AI rule‑making is dominated by the US and China, prompting questions about how developing countries can assert sovereign policy control – (Jeanette Rodrigues)
Anu advocates a rights-based, locally adapted regulatory framework, learning from the EU but customizing to national needs [172-176]. Jeanette highlights the dominance of the US and China in large-model development and questions whether the Global South can achieve true AI sovereignty [162-166]. The two differ on the feasibility and emphasis of sovereign regulation.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on sovereign AI frameworks, open-sovereignty strategies, and AI’s role in diplomacy illustrate divergent views on national regulatory autonomy and feasibility [S41][S42][S43].
Primary mechanism for scaling AI in emerging economies – public‑private collaboration vs evidence‑based funding
Speakers: Johannes Zutt, Michael Kremer
The World Bank promotes “small AI” meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited – (Johannes Zutt) Multilateral development banks should create evidence‑based innovation funds that pilot, rigorously test, and scale AI applications to overcome market failures and accelerate adoption – (Michael Kremer)
Johannes focuses on delivering small-AI solutions through joint government standards and private-sector innovators, stressing practicality in low-resource settings [34-36][39-52]. Michael proposes tiered, evidence-based innovation funds (small grants, larger testing grants, scaling grants) to address market failures and ensure rigorous evaluation before scaling [266-271]. They share the goal of AI diffusion but differ on the primary driver: collaboration versus structured funding mechanisms.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions stress both the necessity of public-private partnerships for scaling AI and the importance of evidence-based funding mechanisms to target impact effectively [S49][S48][S35].
How to mitigate AI‑induced labor market disruptions
Speakers: Johannes Zutt, Ufuk Akcigit
One of them is there will be some job losses, particularly sort of entry‑level jobs that are very much knowledge or document‑based, performing relatively rote work that can be taken over by automation – (Johannes Zutt) The biggest risk, I think, is definitely the labor market. If there was a dial where I could slow down the adaptation and give time to the labor market to catch up, that’s my biggest worry – (Ufuk Akcigit)
Johannes acknowledges AI will displace entry-level, knowledge-based jobs and notes this trend within the World Bank’s own hiring [22-24]. Ufuk stresses the broader macro-economic risk, calling for a slower AI rollout to give labor markets time to adjust, especially for those aspiring to entry-level coding jobs [405-412]. The disagreement lies in the emphasis: Johannes notes the problem, while Ufuk proposes a policy lever (slowing adoption) as a solution.
POLICY CONTEXT (KNOWLEDGE BASE)
Recommendations include targeted training, social-protection measures, and dedicated governance bodies to monitor AI’s labor impacts, as outlined in recent policy briefs on AI-driven disruption [S54][S55][S57].
Primary barrier to successful AI deployment – governance frameworks vs trust and system adaptation
Speakers: Johannes Zutt, Iqbal Dhaliwal
Robust governance is essential to prevent misuse of AI and to ensure responsible deployment, especially in high‑impact domains – (Johannes Zutt) Trust in technology and system adaptation are critical; even highly accurate AI tools can fail to deliver benefits if users are not trained or institutional processes are not adjusted – (Iqbal Dhaliwal)
Johannes stresses the need for governance and regulatory safeguards to avoid harmful outcomes [21-33]. Iqbal argues that beyond governance, user trust, adequate training, and alignment of institutional processes are essential for AI effectiveness, citing failures of AI diagnostics and GST fraud detection when systems were not adapted [309-315]. The two focus on different levers: formal governance versus practical trust and system integration.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses identify governance coordination challenges as a barrier, while other strands argue that building trust and adapting institutional processes are equally decisive [S40][S37].
Unexpected Differences
Trust and system adaptation versus rights‑based regulatory approach
Speakers: Iqbal Dhaliwal, Anu Bradford
Trust in technology and system adaptation are critical; even highly accurate AI tools can fail to deliver benefits if users are not trained or institutional processes are not adjusted – (Iqbal Dhaliwal) Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford)
While both discuss how to ensure AI works for societies, Iqbal stresses on‑the‑ground trust, training, and institutional redesign as the main hurdle, whereas Anu emphasizes a legal‑regulatory, rights‑based framework. The divergence is unexpected because both are usually aligned on the need for supportive environments, yet they prioritize very different levers (social trust vs legal rights).
POLICY CONTEXT (KNOWLEDGE BASE)
The literature contrasts rights-based regulatory models with trust-and-safety-focused interventions, highlighting the policy trade-off between protecting fundamental rights and fostering user confidence [S45][S46][S44].
Overall Assessment

The panel shows broad consensus that AI can help narrow development gaps, but disagreements arise around how to manage structural concentration, the balance between rights‑based regulation and sovereign policy, the primary mechanisms for scaling AI (public‑private collaboration vs evidence‑based funding), and the most effective way to protect labor markets and build trust. These divergences reflect differing priorities among economists, development practitioners, and policy experts.

Moderate to high – while there is shared optimism, the participants differ substantially on the pathways and institutional levers needed, implying that coordinated policy design will require reconciling these perspectives to avoid fragmented or counter‑productive AI strategies.

Partial Agreements
Both agree that AI should be used to narrow development gaps and benefit the poor. Jeanette emphasizes a balanced policy approach to manage optimism and fear [61-71], while Michael points to concrete public‑sector investment (AI weather forecasts) as a way to achieve that goal [133-155]. They share the same objective but propose different pathways—policy balance versus targeted public investment.
Speakers: Jeanette Rodrigues, Michael Kremer
Policymakers must balance hope and fear, ensuring AI narrows rather than widens development gaps – (Jeanette Rodrigues) Government‑backed AI weather forecasts can dramatically improve farmers’ planting decisions and yields, illustrating the need for public investment in AI public goods – (Michael Kremer)
Both seek mechanisms that ensure AI benefits are widely shared and risks are managed. Anu focuses on a rights‑based, adaptable regulatory framework, while Michael proposes evidence‑based funding and procurement safeguards. They agree on the need for structured, protective measures but differ on whether regulation or funding is the primary tool.
Speakers: Anu Bradford, Michael Kremer
Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford) Multilateral development banks should create evidence‑based innovation funds that pilot, rigorously test, and scale AI applications to overcome market failures and accelerate adoption – (Michael Kremer)
Takeaways
Key takeaways
AI can be a powerful development catalyst for emerging markets, offering productivity gains in agriculture, health, finance and education, but its impact is limited by basic infrastructure gaps such as unreliable electricity, weak internet connectivity, and low literacy.
The World Bank’s “small AI” approach—affordable, locally‑relevant tools that work offline or with limited data—highlights the need for public‑private collaboration to create AI applications that match on‑the‑ground needs.
Foundational AI models have high entry barriers (compute, data, talent) leading to market concentration, while the application layer remains low‑barrier and more conducive to creative destruction; this concentration raises concerns about unequal benefit distribution and talent drain from academia to industry.
Effective AI governance requires a rights‑driven yet locally adaptable regulatory framework; the Global South must develop its own AI sovereignty rather than simply copying US, China or EU models.
Evaluation of AI interventions should follow a rigorous, multi‑stage process (model accuracy, user impact, scalability, continuous improvement) similar to medical trials, and must address trust, user training, and system‑level adaptation.
Resolutions and action items
World Bank to continue promoting and scaling “small AI” solutions in South Asia, including AI sandboxes for experimentation with governments.
Creation/expansion of evidence‑based innovation funds (e.g., Development Innovation Ventures) to pilot, rigorously test, and scale AI applications for public‑good outcomes.
Governments (e.g., India) to invest in AI‑generated public goods such as weather forecasts and to integrate them into farmer decision‑making processes.
Encourage private‑sector developers to build demand‑driven AI tools that free up frontline worker time (e.g., teachers, health workers) and align with local language and offline capabilities.
Policy makers to adopt a rights‑based regulatory approach that can be customized to national priorities, drawing lessons from the EU AI Act while avoiding a one‑size‑fits‑all model.
Unresolved issues
How to prevent AI‑driven market concentration from entrenching incumbent firms and limiting opportunities for new entrants in developing economies.
Specific mechanisms for aligning AI talent pipelines with local innovation ecosystems to avoid excessive migration from academia to industry.
The optimal balance between regulation and innovation for the Global South, especially given geopolitical pressures from the US and China.
Ways to ensure public‑sector adoption of AI tools at scale without creating monopsonistic procurement bottlenecks.
How to design AI governance structures that protect against misuse while remaining flexible enough for rapid technological change.
Suggested compromises
Combine a public‑facing effort on standards, interoperability and offline capability with a private‑sector‑driven push for rapid application development.
Adopt a rights‑driven regulatory framework that is adapted locally, allowing countries like India to tailor rules without fully replicating EU or US models.
Use tiered innovation funding (small grants for pilots, larger grants for rigorous testing, and scale‑up financing) to balance speed of innovation with evidence‑based risk mitigation.
Encourage AI sandboxes that permit controlled experimentation while maintaining oversight, thereby reconciling the need for rapid development with governance concerns.
Thought Provoking Comments
AI can be a game changer… but at the same time, AI also creates a number of challenges. … many developing countries lack reliable electricity, internet backbone, basic literacy and numeracy, and may need to use very basic devices. We need to focus on "small AI" – practical, affordable, locally relevant AI that works where connectivity, data, skills, infrastructure are limited.
He simultaneously highlighted AI’s transformative potential and the concrete infrastructural and governance constraints in emerging economies, introducing the concept of “small AI” as a pragmatic solution.
Set the agenda for the panel by framing the discussion around both opportunities and systemic barriers, prompting other speakers to address feasibility, policy, and implementation challenges specific to developing contexts.
Speaker: Johannes Zutt
When we look at the application layer, entry barriers are low and small businesses can do what only large businesses could do before. But the foundational layer has very high entry barriers – compute‑heavy, data‑heavy, talent‑heavy – leading to concentration.
He provided a clear two‑tier framework (foundational vs. application) that explains why AI could both democratize entrepreneurship and simultaneously reinforce market concentration.
Shifted the conversation toward structural market dynamics, influencing later remarks on concentration, incumbency, and the need to keep the foundational layer contestable (referenced by Iqbal and later by Ufuk himself).
Speaker: Ufuk Akcigit
AI weather forecasts are a public good – non‑rival and non‑excludable. India’s AI‑generated forecasts reached 38 million farmers last year, leading to better planting decisions and higher adoption of hybrid seeds.
He gave a concrete, data‑driven example of AI delivering public‑good benefits at scale, illustrating how multilateral institutions can catalyze such interventions.
Introduced a tangible success story that anchored the abstract discussion, prompting further dialogue on scaling, government involvement, and the risk of slow adoption by public sectors.
Speaker: Michael Kremer
The notion that regulation kills innovation is a myth. Europe’s slower AI rollout is due to lack of a digital single market, a shallow capital‑markets union, risk‑averse culture, and talent pipelines, not because of the AI Act or GDPR.
She challenged a common narrative that stringent regulation hampers AI development, providing a nuanced analysis of structural factors behind regional innovation gaps.
Redirected the debate on regulatory design, encouraging participants to consider how policy can be crafted without sacrificing innovation, and influencing the later discussion on AI sovereignty and regulatory balance.
Speaker: Anu Bradford
AI can free teachers from routine tasks like correcting spelling, allowing them to focus on deeper learning. The key is demand‑driven design: teachers, students, and districts all asked for it, and it delivered measurable gains.
He linked AI deployment to real‑world educational outcomes, emphasizing the importance of freeing human capacity and aligning technology with user demand.
Grounded the conversation in field experience, reinforcing the theme of human‑AI collaboration and prompting others to discuss evaluation metrics and scalability.
Speaker: Iqbal Dhaliwal
Even when AI diagnostics outperform humans in the lab, they can reduce doctors’ efficiency in the field because the surrounding system isn’t adapted. Example: a GST fraud‑detection model was not scaled because it removed human discretion, a source of power.
He highlighted the sociopolitical dimension of AI adoption—trust, power, and institutional inertia—showing that technical superiority alone doesn’t guarantee deployment.
Introduced a cautionary perspective on governance and power dynamics, steering the panel toward discussing regulatory safeguards, stakeholder buy‑in, and the risk of technology being blocked by non‑technical considerations.
Speaker: Iqbal Dhaliwal
Evidence shows market concentration is rising, innovative resources are shifting to incumbents, and top AI scientists are moving from academia to industry, reducing open science. This could undermine creative destruction.
He supplied empirical data on concentration trends and the migration of talent, warning that the foundational AI layer may become increasingly closed and monopolized.
Deepened the earlier foundational‑layer argument, prompting the panel to consider policies that preserve competition, support universities, and maintain open research ecosystems.
Speaker: Ufuk Akcigit
The biggest systemic risk is that humanity becomes dumber by outsourcing thinking to AI. As educators, we must teach students to use generative AI to augment—not replace—their own reasoning.
She raised a philosophical and societal risk that goes beyond economics or regulation, questioning the long‑term cognitive effects of pervasive AI assistance.
Expanded the scope of the discussion to include human capital and education quality, influencing the rapid‑fire round and reinforcing the need for thoughtful AI integration.
Speaker: Anu Bradford
For the first time we may have tools to target poverty reduction at the individual level, but we risk not having robust governance to prevent abuses.
He combined optimism about AI’s precision in poverty alleviation with a sober warning about governance gaps, encapsulating the panel’s central tension.
Served as a concise summary of the panel’s dual narrative, prompting final reflections on both the transformative promise and the regulatory/ethical challenges.
Speaker: Johannes Zutt
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a broad, hopeful overview of AI’s potential to a nuanced examination of structural, institutional, and societal constraints. Johannes’s framing of “small AI” and the infrastructural gaps set the stage, while Ufuk’s two‑layer model introduced a structural lens that underpinned later concerns about concentration. Michael’s concrete public‑good example and Iqbal’s field‑level successes grounded the debate in real impact, whereas Anu’s deconstruction of the regulation‑innovation myth and her warning about cognitive atrophy broadened the policy conversation. The recurring theme of power—whether in the GST model or in market concentration—highlighted governance as a decisive factor. Collectively, these comments redirected the panel toward concrete policy levers (e.g., evidence‑based innovation funds, regulatory design, support for universities) and underscored the need to balance AI‑driven productivity gains with safeguards against inequality, concentration, and loss of human agency.

Follow-up Questions
How can emerging economies and developing markets harness the potential of AI and avoid the pitfalls?
Identifies the core challenge of translating AI opportunities into real benefits while addressing infrastructure, skills, and governance gaps in low‑resource settings.
Speaker: Johannes Zutt
Why was there historically low entrepreneurship and dynamism in emerging economies before AI, and what business‑environment reforms are needed to enable AI‑driven entrepreneurship?
Seeks to uncover structural constraints (e.g., family‑based firm size, regulatory environment) that limit firm dynamism, a prerequisite for AI to generate inclusive growth.
Speaker: Ufuk Akcigit
What are the likely impacts of AI on entry‑level job losses in developing countries, and what policies can mitigate labor‑market disruption?
Highlights the need for data‑driven analysis and policy design to protect vulnerable workers as automation spreads.
Speaker: Johannes Zutt
How effective are small‑AI applications in low‑connectivity, low‑literacy environments, and what design features (offline mode, local‑language support) are essential?
Calls for empirical research on the usability and impact of lightweight AI tools for farmers, nurses, teachers, etc., where infrastructure is limited.
Speaker: Johannes Zutt
What evaluation frameworks and metrics should be used to assess AI interventions (model performance, user impact, scalability, continuous improvement)?
Emphasizes the need for rigorous, evidence‑based assessment methods to ensure AI projects deliver real‑world benefits and can be iteratively improved.
Speaker: Michael Kremer
What role should multilateral development banks play in financing AI for public‑good applications (e.g., AI‑driven weather forecasts), and how can they accelerate adoption?
Points to a research gap on institutional mechanisms that can fund and scale AI solutions that lack private‑sector profit incentives.
Speaker: Michael Kremer
Where does the trade‑off lie between AI regulation and innovation, particularly for India’s emerging AI ecosystem?
Seeks guidance on balancing safeguards with a vibrant innovation climate, a key policy dilemma for many developing economies.
Speaker: Anu Bradford
How will concentration in the foundational AI layer affect downstream application markets, and what policies can keep the foundational layer contestable?
Raises concerns that high barriers to compute, data, and talent may entrench a few incumbents, limiting competition and creative destruction.
Speaker: Ufuk Akcigit
How can trust in AI technologies be built among frontline workers (doctors, teachers, health workers), and what training or system‑design interventions are needed?
Identifies a gap between technical performance and real‑world adoption, requiring research on user trust, workflow integration, and capacity building.
Speaker: Iqbal Dhaliwal
What governance and institutional barriers impede scaling of AI solutions (e.g., GST fraud detection), and how can policy address power dynamics that resist automation?
Highlights the need to study why governments may reject effective AI tools due to concerns over discretionary power, informing design of acceptable implementation pathways.
Speaker: Iqbal Dhaliwal
What are the systemic risks of AI causing human cognitive atrophy (outsourcing thinking), and how should education systems adapt to ensure AI augments rather than replaces critical thinking?
Calls for research on long‑term societal impacts of over‑reliance on generative AI and curriculum reforms to preserve human creativity.
Speaker: Anu Bradford
What should finance ministers of developing countries consider regarding AI sovereignty, supply‑chain dependencies, and geopolitical risks in the AI stack?
Seeks a strategic framework for policymakers to navigate techno‑nationalism, semiconductor dependencies, and potential weaponization of AI supply chains.
Speaker: Anu Bradford
How can evidence‑based innovation funds with tiered financing (pilot grants, rigorous testing, scale‑up) be structured to bridge the speed gap between private AI developers and public‑sector adoption?
Proposes a research agenda on financing mechanisms that de‑risk AI pilots and promote competitive, high‑quality solutions for public services.
Speaker: Michael Kremer
How can universities remain healthy custodians of foundational AI research to prevent a shift toward closed, industry‑driven science, and what policies support open science in the AI era?
Identifies a need to study institutional policies that keep foundational AI research contestable and publicly accessible, preserving spillovers.
Speaker: Ufuk Akcigit
What are the scalable impacts of AI on health and education outcomes in the public sector of low‑ and middle‑income countries, and what implementation research is needed to realize these gains?
Calls for systematic evaluation of AI‑driven interventions in health and education to determine effectiveness, equity, and pathways for large‑scale rollout.
Speaker: Michael Kremer

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Inclusive AI: Why Linguistic Diversity Matters

Inclusive AI: Why Linguistic Diversity Matters

Session at a glance: Summary, keypoints, and speakers overview

Summary

Sushant Kumar opened the session by framing the discussion around creating “personal, local and multilingual AI” that can serve everyone, introducing a collaborative open-source hardware device developed by Bhashini and Current AI under Kalpa Impact [1-4]. The device is described as multilingual, handheld, privacy-preserving and capable of operating without connectivity, aiming to make AI work for all populations [4][12-15].


Ayah Bdeir introduced the demo team and highlighted that the six-week partnership leveraged Bhashini’s work on linguistic diversity and aimed to let users build AI for their own languages and communities [34-38][41-45]. Andrew Tergis demonstrated that the prototype can run inference locally for any user-defined application, exemplified by a vision-impaired use case that translates spoken queries through ASR, machine translation, a large language model and TTS entirely on-device [53-62][66-69]. Shalindra Pal Singh explained that model quantization allowed the full-fidelity LLM to fit on the device without accuracy loss, and the system runs offline on an NVIDIA Jetson platform with plans for broader model support [70-72][88-91]. Sushant later emphasized that the offline operation and four to five operational models constitute a significant engineering achievement [98-102].
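The chain demonstrated in the demo (spoken query, then ASR, machine translation, a local LLM, and TTS, all on-device) can be sketched as below. Every function here is a hypothetical stub standing in for a quantized local model; none of these names are actual Bhashini or Current AI APIs.

```python
# Minimal sketch of the on-device inference chain from the demo:
# speech -> text -> translation -> LLM answer -> translation -> speech.
# All stages are stubs; a real build would load a quantized model per stage.

def asr(audio: bytes) -> str:
    """Speech-to-text in the user's language (stub)."""
    return "mausam kaisa hai"  # pretend the user asked in Hindi

def translate(text: str, src: str, dst: str) -> str:
    """Machine-translation stage (stub lookup table)."""
    table = {"mausam kaisa hai": "how is the weather"}
    return table.get(text, text)

def llm(prompt: str) -> str:
    """Local quantized LLM producing the answer (stub)."""
    return "It is sunny today."

def tts(text: str) -> bytes:
    """Text-to-speech stage (stub): returns synthetic audio bytes."""
    return text.encode("utf-8")

def answer_spoken_query(audio: bytes, user_lang: str = "hi") -> bytes:
    """Run the full offline chain on one utterance."""
    text = asr(audio)                                   # speech -> text
    english = translate(text, src=user_lang, dst="en")  # to model language
    reply_en = llm(english)                             # reason locally
    reply = translate(reply_en, src="en", dst=user_lang)
    return tts(reply)                                   # text -> speech

print(answer_spoken_query(b"\x00\x01"))
```

Because every stage runs locally, no audio or text leaves the device, which is the privacy property the speakers emphasized.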

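The quantization point above, fitting the LLM on the device, comes down to simple memory arithmetic; the sketch below uses illustrative figures (a generic 7B-parameter model, not the demo device's actual model sizes) to show why low-bit quantization makes on-device inference feasible.

```python
# Rough weight-memory arithmetic behind on-device quantization.
# Figures are illustrative, not the demo device's actual model sizes.

def model_bytes(params: int, bits_per_weight: int) -> int:
    """Approximate weight-storage size, ignoring activations and overhead."""
    return params * bits_per_weight // 8

params = 7_000_000_000                # a 7B-parameter model
fp16 = model_bytes(params, 16)        # full-precision baseline
int4 = model_bytes(params, 4)         # 4-bit quantized weights

print(fp16 // 2**30, "GiB at fp16")   # ~13 GiB: too large for most edge boards
print(int4 // 2**30, "GiB at int4")   # ~3 GiB: fits edge-device memory
```

The 4x reduction (16 bits down to 4 bits per weight) is what moves a model from data-center memory budgets into the range of an edge module.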

Amitabh Nag recounted that Bhashini began in 2023 to address the difficulty of building AI models for non-native speakers, collecting data through brute-force translation efforts and now processes about 15 million inferences daily on a 200-GPU cluster [108-118][128-129]. He outlined future goals of expanding the device’s form factor, increasing language coverage to 36 languages including tribal Bheeli, and enriching models with geographic glossaries and contextualization [163-176]. Current AI’s CEO described the organization as a public-private partnership created at the AI Action Summit to develop AI for the public interest, emphasizing collaboration, open-source release, and concern over proprietary embodied AI that records data and favors Western languages [135-144][194-206]. She expressed hope that open hardware can be made cheaper, smaller, battery-efficient, and networked in meshes or micro-data-centers, enabling diverse applications such as farming tools, privacy-preserving toys, or tourism assistants [215-232].


The panel then debated data sovereignty, with Abhishek Singh stressing the need for community-driven standards, context-specific sharing, and privacy-preserving mechanisms to ensure benefits without harming individuals [299-306]. Anne Bouverot highlighted the tension between cultural preservation and artists’ rights, proposing opt-out mechanisms and trusted third-party stewardship to balance open-source training data with compensation [314-326]. The discussion concluded with a joint vision for India-France cooperation on multilingual AI, resilient design, and alternative sovereign solutions, and the launch of the India AI Innovation Challenge to invite open-source contributions to the prototype [391-398][409-422].


Overall, participants agreed that the open-source, offline, multilingual device represents a concrete step toward democratizing AI, while ongoing work on language expansion, data governance, and international collaboration will shape its future impact [98-102][163-176][391-398][409-422].


Keypoints


Major discussion points


A multilingual, offline, open-source AI hardware prototype – The session introduced a “seminal open source AI hardware device” that is handheld, privacy-preserving and works without connectivity, supporting 22+ languages and running full inference locally (e.g., ASR → translation → LLM → TTS) [4][53-62][70-72][98-102][163-176].


Collaboration model between Bhashini and Current AI – The device resulted from a rapid six-week partnership orchestrated by Kalpa Impact, framed as a public-good effort where partners co-design, build, and release technology together, emphasizing open hardware and community-driven development [34-45][41-45][135-142].


Linguistic and cultural inclusion as a personal imperative – Speakers highlighted the difficulty of operating in non-native languages, the loss of cultural nuance, and the need to preserve tribal languages (e.g., Bheeli) while preventing dominant, Western-centric AI from marginalising local speech communities [108-114][145-154][163-176][194-206].


Future vision for scaling and diverse applications – Plans include shrinking the form-factor, expanding language breadth and depth, enabling mesh-networked devices, and tailoring use-cases such as farming assistants, privacy-first toys, tourism guides, and health or agricultural data services [166-176][215-232][263-270].


Governance, data sovereignty and reciprocity concerns – The panel debated risks of embodied AI, privacy-preserving inference, community rights over cultural data, and the tension between open-source innovation and controlled data governance, calling for standards that balance public benefit with individual/collective ownership [194-206][300-306][314-324][327-334].


Overall purpose / goal


The discussion aimed to make the case for “personal, local and multilingual AI” by showcasing a concrete open-source hardware prototype, illustrating how cross-sector collaboration can produce public-good technology, and outlining a roadmap for expanding inclusive, offline AI that serves diverse linguistic and cultural communities worldwide.


Overall tone and its evolution


Opening (0:00-5:00): Optimistic and rallying, emphasizing a shared mission to “make AI work for everyone” [1-4].


Demo segment (5:07-12:27): Excited and demonstrative, highlighting technical achievements and user-centric possibilities [53-62][70-72].


Personal narratives (14:08-20:13): Reflective and earnest, speakers shared personal language experiences and motivations [108-114][145-154].


Future-looking vision (20:47-26:34): Hopeful and expansive, describing scaling, new form-factors, and limitless applications [166-176][215-232].


Governance & concerns (23:16-41:43): Cautious and critical, raising worries about embodied AI, data privacy, and cultural sovereignty [194-206][300-306][314-324][327-334].


Closing (48:36-53:36): Inspirational and call-to-action, announcing the India AI Innovation Challenge and urging the community to build on the open platform [409-422].


The tone therefore moves from enthusiastic promotion to thoughtful reflection, then to cautious deliberation, and finally to a unifying, forward-looking rallying cry.


Speakers


Shalindra Pal Singh


– Expertise: Technical implementation and integration of translation solutions for Bhashini.


– Role/Title: Senior General Manager, Bhashini; co-presenter/expert on Bhashini translation plugin technology. [S3]


Andrew Tergis


– Expertise: Prototype hardware development and AI inference device engineering.


– Role/Title: Lead engineer on the prototype device (Current AI team). (derived from transcript)


Martin Tisne


– Expertise: AI governance, democratic values, and collaborative AI initiatives.


– Role/Title: Chair of Current AI; leads the AI Collaborative organization. (derived from transcript)


Sushant Kumar


– Expertise: Session moderation and facilitation of AI discussions.


– Role/Title: Session moderator/host. [S8]


Device


– Expertise: Provides on-device AI inference capabilities (hardware component).


– Role/Title: Physical AI inference device used in the demo. (derived from transcript)


Ayah Bdeir


– Expertise: Open-source AI hardware, multilingual AI, public-interest AI partnerships.


– Role/Title: CEO of Current AI; founder of the AI Action Summit. (derived from transcript)


Amitabh Nag


– Expertise: Linguistic diversity, multilingual AI models, large-scale inference infrastructure.


– Role/Title: CEO of Bhashini. [S14]


Abhishek Singh


– Expertise: AI policy, public-interest AI, data sovereignty, governmental AI initiatives.


– Role/Title: Under-Secretary, Ministry of Electronics and Information Technology, India. [S16]


Announcer


– Expertise: Event announcing and moderation.


– Role/Title: Event announcer/moderator. [S19]


Anne Bouverot


– Expertise: AI policy, cultural AI governance, international AI collaboration.


– Role/Title: Special Envoy for Artificial Intelligence, France; former Director General of the GSMA; Chair of the board of École Normale Supérieure. [S22]


Additional speakers:


Aya Bhadel – Referred to as the CEO of Current AI in the opening segment (likely the same person as Ayah Bdeir).


Andrew Tergis – Already listed above; subsequent mentions in the transcript add no new information.




Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


Andrew Tergis – Duplicate mention; no new information.


– **Andrew T


Full session report – Comprehensive analysis and detailed insights

The session opened with Sushant Kumar asking how a paradigm can be built that makes artificial intelligence work for everyone and stating that this was the purpose of the gathering [1-2]. He introduced the theme – “The case for personal, local and multilingual AI” [3] and announced a joint effort between Bhashani and Current AI, coordinated by Kalpa Impact, to showcase a “seminal open-source AI hardware device, one that is multilingual, handheld, privacy-preserving and works in zero-connectivity settings” [4].


A short video followed, meant to spark imagination about what such a device could achieve; it emphasized real-world impact, last-mile delivery and a vision of AI that is not governed by any single country or corporation [6-9][12-15]. Sushant then noted a logistical hurdle – the prototype had to clear customs before arriving [104-107].


Demo team introduction – Aya Bdeir paused for a group photo [25-28] and introduced the demo team: Andrew Tergis, lead engineer from Current AI, and Shalindra Pal Singh, general manager at Bhashani who had worked closely on integrating Bhashani models [30-33]. She highlighted that the partnership was forged in a six-week sprint (actually five weeks) and that her recent arrival at Current AI was motivated by Bhashani’s work on linguistic diversity and the 250-plus language models [34-38]. Aya positioned the collaboration as a public-good model in which partners co-design, build and then release technology openly [41-45].


Prototype demonstration – Andrew described the prototype as an “open AI inference device” that differs from specialised conference products by being usable by any user for any application [52-55]. He explained the flagship Bhashani use-case: a vision-impaired user presses a button, asks a question in their native language, the speech is converted to text (ASR), translated to English (NMT), processed by a large language model together with image data, then translated back and rendered as speech (TTS) – all on-device [58-62][66-69]. A test query in Hindi and an English query, “What is on this table?”, were both answered correctly; the device identified the candy-wrapper brands (Twix, Milky Way, KitKat) and responded in natural language [79-82]. Andrew also noted that the hardware currently runs on an NVIDIA Jetson platform but is deliberately platform-agnostic, allowing any model to be deployed once appropriate optimisation is performed [88-91].
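The five-stage flow described above (ASR, translation to English, a vision-language model, translation back, TTS) can be sketched as a simple chain of stages. This is an illustrative sketch only; the stage names and signatures are hypothetical placeholders, not Bhashani’s actual API:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Pipeline:
    """On-device question-answering chain, one callable per model stage."""
    asr: Callable[[bytes], str]          # speech (native language) -> text
    nmt_to_en: Callable[[str], str]      # native-language text -> English
    llm: Callable[[str, bytes], str]     # English question + image -> answer
    nmt_from_en: Callable[[str], str]    # English answer -> native language
    tts: Callable[[str], bytes]          # native-language text -> speech

    def answer(self, audio: bytes, image: bytes) -> bytes:
        # Run the five stages in sequence, entirely locally.
        question = self.nmt_to_en(self.asr(audio))
        reply_en = self.llm(question, image)
        return self.tts(self.nmt_from_en(reply_en))
```

Swapping any stage (e.g. a different LLM) leaves the rest of the chain untouched, which is the platform-agnostic property Andrew highlights.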


Technical highlights – Shalindra elaborated on the achievement that made the on-device high-fidelity LLM possible: the models were quantised to fit the limited memory without any measurable loss of accuracy, a trade-off the team managed to eliminate [70-73]. He stressed that the device operates entirely offline, with four to five models running concurrently, a notable engineering milestone given the constraints of handheld hardware [98-102].
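The general idea behind the quantisation Shalindra describes is mapping full-precision weights to small integers plus a shared scale factor. A minimal symmetric int8 sketch of the technique (illustrative only, not Bhashani’s actual tooling, which also preserves accuracy through careful calibration):

```python
# Symmetric int8 weight quantization: each float weight becomes an int8 value
# plus one shared float scale, shrinking storage roughly 4x vs float32.
def quantize_int8(weights):
    """Map float weights to int8 values plus one shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero input
    return [round(w / scale) for w in weights], scale


def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]


weights = [0.31, -1.7, 0.002, 0.95]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight is within scale/2 of the original (the rounding step).
```

Real deployments quantise per-layer or per-channel and validate accuracy on held-out data, which is where the “no hit on accuracy” engineering effort lies.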


Bhashani background – Amitabh Nag provided historical context, noting that Bhashani was founded in 2023 to address the difficulty non-native speakers face when using AI [108-118]. Early work involved “brute-force” data collection through translators to build a digital corpus for low-resource languages [124-126]. Today Bhashani processes roughly 15 million inferences per day on a 200-GPU cluster, with dashboards that monitor latency and usage in real time [128-129].
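The quoted throughput figures imply a modest average per-GPU load, which a quick back-of-the-envelope calculation (using only the numbers stated above) confirms:

```python
# ~15 million inferences/day on a 200-GPU cluster: average load per GPU.
inferences_per_day = 15_000_000
gpus = 200

per_gpu_per_day = inferences_per_day / gpus      # 75,000 inferences per GPU
per_gpu_per_second = per_gpu_per_day / 86_400    # under 1 inference/s per GPU
```

Averages of course hide peak-hour spikes, which is why the real-time latency dashboards Amitabh mentions matter.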


Language coverage & enrichment – The current prototype supports 22 languages on-device, with a roadmap to 36 languages (including 14 text-only) [170-173][176-179]. Amitabh highlighted the recent digitisation of the tribal language Bheeli, which previously had no script, as an example of the “no language left behind” commitment [173-175]. Ongoing work includes integrating ~1.6 million place-names from the Survey of India into geographic glossaries and adding contextual information to deepen model understanding [182-186].


Current AI mission & personal motivation – Aya explained that Current AI emerged from the AI Action Summit in Paris as a public-private-philanthropic partnership involving philanthropy, government and the private sector [135-137]. Its mission is to build a vertically integrated, open-source AI stack that can compete with dominant profit-driven companies [138-141]. She warned that the new wave of embodied AI devices (glasses, robots, voice assistants such as Alexa) often record continuously, are trained primarily on Western languages and create a hardware lock-in that threatens privacy and cultural representation [194-206].


Hopeful roadmap – Aya outlined a roadmap for the hardware: reducing cost, improving battery life, shrinking size and making the device aesthetically appealing [215-220]; connecting multiple units in a mesh network to enable distributed inference [221-224]; developing stationary, solar-powered micro-data-centres for larger workloads [225-227]; and creating specialised applications such as farming assistants, privacy-first toys and tourism guides [228-232]. These possibilities illustrate how an open, modular platform can be adapted to diverse local needs.


Data sharing & reciprocity – Abhishek Singh argued that data sharing must be governed by community-driven standards that protect privacy while delivering public-interest benefits, and that reciprocity should be context-specific (e.g., agricultural data can be shared for public benefit, health data may need stricter safeguards) [299-306][307-310]. Anne Bouverot highlighted the tension between cultural preservation and artists’ rights, proposing opt-out mechanisms and compensation for creators while recognising the value of historical, non-commercial cultural data [318-325]. She referenced existing French laws that mandate a percentage of music and film content be in French and asked whether similar quotas might be needed for AI [266-270][267-274].


AI sovereignty – Abhishek Singh defined AI sovereignty as full national control over the five layers of the AI stack – energy, data centres, chips, models and applications – and noted that India already possesses many of these capabilities but still lacks a domestic chip-fabrication facility [350-382].


France-India collaboration – Anne noted existing joint research on resilient, sustainable AI and suggested that both countries could co-develop alternative solutions that respect sovereignty [391-403][397-400]. Abhishek echoed this, pointing to a history of cooperation (e.g., the France Action Summit) and upcoming joint activities at university, research, business and government levels [401-403].


India AI Innovation Challenge – Abhishek announced the launch of the India AI Innovation Challenge: the open-source prototype will be made publicly available for hackathons, with submissions opening on 25 February and a prize pool jointly funded by Bhashani and Current AI [409-424]. He stressed that the device’s offline, privacy-preserving design makes it suitable for remote or disaster-affected areas, and that participants will receive technical support for quantisation and model enrichment [419-424]. Aya added a call for increased prize funding, mentioning a suggested $110,000 prize [425].


Closing remarks – Sushant thanked the teams, highlighted the significance of running four to five models on a fully offline device, and invited the CEOs of Current AI and Bhashani to a fireside chat [92-102][104-107]. Martin Tisne introduced himself, thanked Abhishek as the “master and orchestrator of the summit”, and set up the fireside conversation [241-247]. He wrapped up by reiterating the importance of democratic, people-centred AI and inviting continued collaboration across sectors [245-247].


Overall outcome – The prototype demonstrates that a portable, offline, multilingual AI device is feasible; the announced Innovation Challenge aims to accelerate community-driven extensions; and the broader dialogue underscored the need for inclusive, sovereign, and culturally respectful AI development built on open-source, multilingual hardware.


Session transcript – Complete transcript of the session
Sushant Kumar

And therefore, how do we develop and support a paradigm that can make AI work for everyone? And that’s why we are here today. The session today is very aptly called: The case for personal, local and multilingual AI. Through a collaboration between Bhashini and Current AI, orchestrated by Kalpa Impact, we are proud to present to you today a seminal open source AI hardware device, one that is multilingual, handheld, privacy preserving and works in zero connectivity settings. So what we are going to do today is we are going to talk about the concept of AI. What we are going to show you after this will be a video that presents the imagination of what such a device could lead to.

in terms of making AI work for everyone. And once we have done that, there’s a special treat for all of you. The maker of the device and the collaborators at Bhashini are there in the room and they will demonstrate the product to you. So why don’t I begin with playing this video, which takes some creative liberties and captures our imagination of what this product would look like. Audio, please. Thank you. India’s real journey is no longer about pilots or promises. It’s about population reach, clear use cases, last mile delivery. This is real world impact. A connected vision for AI, not one that’s governed by any one country or one company.

I think all countries have a huge amount to bring to the table and a big belief in the power of collaboration. I was ready, the cup is open, now we need you. Come innovate AI for your own language, for your own community. We want to work with as diverse a group as possible. We can’t wait to see what you do. Yes, we’re back on. And for the next segment, I would like to invite Ayah Bdeir, the CEO of Current AI, to take us through the product demonstration. Ayah is an engineer and an entrepreneur with 20 years of experience building open source technology infrastructure that works at global scale. Ayah, over to you.

Ayah Bdeir

I have a quick interruption. I have to ask everybody to come here to take a picture so that the picture can be ready by the end of the panel. You have 90 seconds free to speak amongst yourselves. Thank you. All right. All right. Thank you so much for coming, everyone. I’d like to introduce Andrew Tergis, who was the lead engineer on this project from the Current AI team, who’s going to take us through a demo. Oh, there you are. And also Shalindra Pal Singh, who is a general manager at Bhashini, who was Andrew’s collaborator and worked very closely to integrate Bhashini models into the device. And I just want to say a couple of things. This project was undertaken in a six-week period, I think maybe closer to five weeks, actually.

So I just joined Current AI in January of this year. When I came in, the partnership with Bhashini had already been in discussion, and I was very inspired by Bhashini’s work on linguistic diversity and the 250 models. And we thought this was an opportunity for us to go all the way, say, to the user and create something where really people can create AI that works for themselves, for their communities, and for their languages. So this prototype is the beginning of a journey and also a platform to imagine infinite things that are possible. And so you’ll see how it works. But as it’s working, I also would like you to imagine what you could do with it and where you could take it.

And from my perspective, I’ll just say for Current AI, this is an example of how we’d like to work with partners where we learn more about their interests and their focus areas and their priorities, and we zero in on a collaboration that we can develop together. We build it together, and then we release it as a public good. So in this case, it’s a piece of hardware and a development platform. In another case, it could be something else. But we’re really proud that this collaboration with Bhashini is our first collaborative build, and you get to see it kind of firsthand as you’re sitting here. So, Andrew, Shalindra, please. Please join me on stage, and I’ll let you take us away for the demo.

Andrew Tergis

All right. Perfect. Hello. I’m so pleased to be able to show you this prototype that we’ve created. Yes. Oh, thank you. In front of the table. Wonderful. So this is our prototype open AI inference device. So, you know, unlike some other products you might have seen at this conference, which might be designed for one very specific user or one very specific use case, this device is designed to be used by any number of users for any number of use cases. The hope is that anyone could feel empowered to connect up to this device, write their own application, pull any number of models onto the device and run inference locally in their hand. We have one flagship application that we’ve developed in concert with Bhashini.

That demonstrates the models that they’ve been developing over so much time. This sample application we call Hear the World, which is an application where a vision-impaired user can press a button, ask a question in their native language about their surroundings, and have the device read back the response, again in their native language, leveraging Bhashini’s 22-plus languages. In particular, we’re leveraging an ASR, an automatic speech recognition module, to convert the audio into text in their native language. We’ll be leveraging an NMT, a neural machine translation module, to convert that text into English. We’re running it through a large language model with the image data to answer the question, and then we’ll be converting it back into their native language using, again, the NMT model, and finally a TTS module to convert it back into audio.

So this device is able to run all of those modules in concert. So without further ado, let’s try and give it a test query. Shalindra, do you think you can help me out here? I guess you’ll take the photo, and then I’ll spin it around quickly so the audience can see what’s happening. We’ll ask in Hindi. Let me just triple check. Yep, you’re all good. All right.

Shalindra Pal Singh

What it has done is it has taken the image, and then the automatic speech recognition model kicks in, and then neural machine translation is happening, and then the response is coming from the LLM that we have embedded, and the translation is happening, and then the text to speech, which is being spoken out. We have quantized the model in such a way that it fits in. Usually when we do the quantization there is always a trade-off, a hit on the accuracy, but we have reached a point where there is no hit on the accuracy front.

Andrew Tergis

This is thanks to a truly huge effort from your team, and we wouldn’t have been able to fit such a high-fidelity LLM on this if you didn’t do that great optimization work. So let’s see. Let’s ask another question. We have a couple of candy bars on this desk here, which we can show you. Let’s see. Let’s try it. I’m going to ask this in English. What is on this table?

Device

The table has candy wrappers of Twix, Milky Way, and KitKat.

Andrew Tergis

All right. All right. It actually got the brands. And we have one more question of grave importance. But I’ll ask it in Hindi. That’s right. I got it. This is the best candy bar in the world. There we go. Would anyone like a candy bar? Anyone? Anyone? There you go. So just very briefly while we’re handing this out, this is currently based on the NVIDIA Jetson processing platform, but we’ve built it to support other platforms as well, because the processing that we’re doing does not depend on that. That just happens to be the platform we’ve chosen at the moment. And, yeah, we’re working on the ability to deploy any model that you could dream of onto this device.

Thank you.

Sushant Kumar

Thank you very much. How did everyone feel about that demonstration and the things that can be done? Thank you. Thank you. And kudos to the Bhashini team, which worked tirelessly, and, of course, Andrew and the Current AI team, which worked tirelessly to make sure the hardware, software, all of that was integrated. We had to get a device through customs as well. So that took some time, but eventually it’s here, and it’s working, which is amazing. And the best part is that the device is offline. All those queries, all the AI processing was happening on the device. And there are four or five models operational on that particular device, no mean feat. I salute the engineers who have worked on this, and there’s more to come.

And we know we have to fit in a lot in a short period of time. So I will invite Ayah Bdeir, the CEO of Current AI, and Shri Amitabh Nag, the CEO of Bhashini, to join me for a fireside chat. And we’ll try and understand what about personal, local, multilingual AI they are passionate about. So this is also about their motivations. So why don’t we start with you, Amitabh ji. We all know a lot about Bhashini, we have heard about it, and it’s a superstar at this point in time in terms of what you have achieved. Tell us about the origins, tell us about how this all started, and why this is personal to you.

Amitabh Nag

Hey, thank you. See, we all are born with our mother tongue, right? We learn our mother tongues for a good 4-5 years before we land up in a school, and when we land up in a school, it’s a three-language formula. So I am a Bengali, and in Bengali, you know, everything is eaten, so “chol khawe” is the right word. So when you go to the school and you have to do Hindi and English, you know how it could be: for the first 6 months, people will be laughing at you when you are translating and speaking, because that’s the first way of speaking. You are not a native language speaker, so you will be translating and speaking.

That’s the linguistic nuance that you went after. So, you know, over a period of time, of course, we grew up. We were told that you have to learn English to succeed in life. So that’s another given which was there. And obviously, this opportunity came up. You know, there was already a concept which was there. And obviously, we started with, you know, one room office, first employee.

Sushant Kumar

When was this? Which year?

Amitabh Nag

This was in 2023.

Sushant Kumar

Okay. That’s recent. That’s recent.

Amitabh Nag

And then obviously, we started growing as a team, looking at various use cases. Initially, the first thing people looked at was the accuracy; that was the first question which used to come up. But then, you know, our models were built up in difficult conditions, because we didn’t have digital data to build up the AI models, and we collected the data through brute force. We built up the models because we went across to multiple places with translators who actually created the corpus, the digital corpus. We still had deficient data, but we went ahead to build the models and deploy them. And during deployment, we had challenges which obviously came up from all aspects.

And today, when we have actually deployed the use cases, learned from it, improved it, we are now in a situation where we are running about 15 million inferences a day with a 200 GPU system and all having dashboards which actually give you every inference timeliness, how much time it takes, et cetera, et cetera. So we are able to real -time monitor what is happening in our system, who are our customers, how they are using it.

Sushant Kumar

Fantastic. It’s wonderful to hear about your personal motivations. And I’ll move to you. How many languages do you speak?

Ayah Bdeir

My native tongue is Arabic, and then I speak French and English, and I’m learning Spanish.

Sushant Kumar

So, very apt to move to this personal and multilingual theme. I have two questions for you. One, tell us a little bit about Current AI and why this interest in open hardware and the partnership with Bhashini; how does this tie back to Current AI’s strategy? And second, why is this personal to you?

Ayah Bdeir

So, Current AI was actually born out of the AI Action Summit last year in Paris. It’s a public-private partnership with a mission to create AI for the public interest. And so it’s a partnership between philanthropy, government and the private sector to really say, we’re going to tackle public interest AI at scale. And the reason we’re going to do that is because the dominant companies that are governing our lives in AI operate at a scale, a financial scale, operate at an ambition level, that if we don’t match it, we don’t really have a chance to be a real alternative. And so Current AI was born out of that desire. The goal is to rally a global community, collaboratively and collectively, to build a public stack for open AI that’s completely vertically integrated.

And so the way we work is with partners, because the core premise is collaboration. We’ll identify an area of common interest, a priority, and a gap in technology; then we’ll zero in on that gap, work on it together, develop a piece of tech, and release it as a public good. So we encourage this collaboration, this creation of technology that is put back into the public good, and we also have grant-making under our fund pillar to support people already doing this work. And this topic has been important to me for many years. I’m from Lebanon, from Beirut, and like I said, my native tongue is Arabic.

For the past many years, with our use of WhatsApp and mobile and social and everything, a lot of us in the Arab world lost the use of Arabic. My mom and my sisters and I speak in English to each other online all day; we speak on WhatsApp in English. The voice recognition is never good enough in Arabic: you spend more time correcting it than doing anything else. It’s improved a little bit now. But really, technology has had an effect on the way we communicate with each other. And so for many years, it’s been a real concern for me that if technology is not made by us, it’s not for us.

And so when I joined Current AI early this year, multilingual diversity was already a topic, and I was very happy about that. I really wanted to expand it into this idea of not just language diversity, but cultural diversity and cultural preservation as a whole. And so this idea came about, and you can tell more about it.

Sushant Kumar

Fantastic. What a story of genesis. And of course, making AI devices for local use cases, not just in Silicon Valley, is going to be as effective as putting power in the hands of people. So, on inclusivity: Amitabhji, one of the visions of Bhashini is to expand access. When you think of this partnership with Current AI, what is the future you envision in terms of expanding access and creating inclusion, with Bhashini as the linchpin?

Amitabh Nag

So, a few things. When you look at the size of the device, we have almost reached a form factor which is quite significant: it’s small, and it can be carried to the last mile. And since it works offline, you are in a position to actually use it anywhere. So that’s the first part of inclusivity. We obviously have plans to look at smaller form factors as we go forward. The second thing is language coverage. We currently cover 22 languages; in our system, we already have 16 languages, with 14 more languages on text, for a total of 36 languages. And we would like to increase that breadth.

And recently we have digitized one of the tribal languages, Bhili, which doesn’t have a script, so that also gets added. So that is the breadth of languages, which will be continuously added to, so that no language is left behind, and hence no person is left behind, including speakers of tribal languages. So first we are talking about form factor; second, offline operation; third, creating a breadth of languages. The fourth factor is how we enrich the models, which is a continuous activity that Bhashini takes on. There are multiple areas where the models still have to be enriched.

For instance, we were talking to the Survey of India, and they have about 16 lakh place names which are still to be digitized and put into the system. Those are glossaries which we are building. There are contextualization efforts happening as well. So over a period of time, language enrichment in terms of depth is another thing we are looking at. We’re looking at breadth, depth, offline operation and form factor as the four things that will move forward here.

Sushant Kumar

Fantastic. I can certainly see open hardware playing a big role in that as well. I have a question for you on how you look at the future. What gives you the most hope, and the most concern, about the future of language? You started talking about how you feel the nuances of Arabic are getting lost. So what gives you the most hope or the most concern about the future of language in an AI-driven world?

Ayah Bdeir

So I’ll start with the concern. I’m concerned about this new frontier of embodied AI. Over the past year or so, every big tech company has released their version of an embodied AI device that wants to enter your home, that wants to be close to your body, that wants to enter your personal space: whether it’s Meta’s glasses, or robots, or Amazon Alexa. And we’re not in full control of these devices; we don’t know how they’re developed, and we don’t know how they’re trained. Last week or the week before, Meta announced that the glasses are going to start doing facial recognition on every person you encounter in the street.

So now, unknowingly, you’re walking down the street, and if somebody is wearing Meta glasses, you are being recorded and facially recognized. So we have these devices; we don’t know how they work; they’re continuously recording our data and sending it out to the cloud. We also don’t know how they’re trained, and oftentimes they’re trained on Western languages. And hardware is where the lock-in first starts. It’s how the iPhone locked up a lot of technology innovation: these companies give us APIs into their devices, startups start forming and building on top of those devices, the startups build a dependency on the device, and you end up building a whole stack on a core piece of hardware that you do not control.

So it’s really a core building block that we have to crack before we let them own the entire stack or the supply chain. I spent 15 years before Current AI in open source hardware. I’ve seen how powerful it is when you develop on an open platform and people do what they want with it. It’s the same power that you get from something like Linux. So that’s a big area of concern. The area of hope for me is that there are many trajectories for us to improve from here. On one side, you can improve the device itself.

You lower its cost, you improve its battery life, you shrink its size, you make it more beautiful. That’s one axis. Then there’s another axis you can develop along: you can have multiple of these devices together, connect them in a mesh network, and now you have distributed inference that you can run something larger on. You can have a larger version of this device that’s stationary; it can be like a micro data center. You can put a solar panel on it, and suddenly it doesn’t need a battery. So you can infinitely innovate on the possibilities of this core building block. And then the third track is what you do with it.

You make a device for a farmer to identify how to deal with their crops. You make a device for a parent who wants to give their kid a toy but doesn’t want the toy communicating their private data back to the cloud. You create some sort of tourism device that you can put around your neck and that helps you move around. Various sorts of things; the opportunities are infinite.

Sushant Kumar

Fantastic. And I wish we had more time to just continue; we’re just scratching the surface. But we’re at time. I thank you, Amitabhji, for the great work that you and your team are doing, and I wish you all the best and all the luck in making that vision a reality. Thank you very much. And we move into our next segment, which is another fireside chat. For that, I would now hand the floor to a long-time friend and colleague, Martin Tisne. Martin leads the AI Collaborative, an organization working on building AI grounded in democratic values and principles, and he’s also the chair of Current AI. Martin, over to you.

Martin Tisne

Thanks very much. My first task is to welcome Abhishek Singh, who everyone knows, and who is the master and orchestrator of this entire summit. Congratulations, Abhishek; I’m amazed you’re still standing. And welcome to Anne Bouverot, who was the orchestrator of the Paris summit and is special envoy to the President. Thank you very much. As Sushant was saying, I’m extraordinarily excited by Aya’s leadership when it comes to Current AI, and the work in turning this work around linguistic diversity toward the question of cultural preservation. It seems to me that ensuring that AI isn’t squashing all of these incredible cultures that make up the beauty of the world into a monoculture, or into a small number of monocultures, is one of the most important questions that we have today. So my first question to both of you, maybe starting with you, Abhishek, and then to Anne, is the same question: what is your vision?

What is the world that you would like us to live in when it comes to this intersection of AI and culture? If we get it right, whether it’s five or ten years from now, what does it look like?

Abhishek Singh

He knows only his local tongue. He does not even know how to key in text or how to navigate a captcha; he gets lost with the hashtags. For such people, if they are able to talk to a device, put their query across irrespective of bandwidth or connectivity, and get a reply back, that will be empowering. And that, I think, is the ultimate objective of this summit also: democratizing the use of AI and ultimately making AI work for all. Thanks.

Martin Tisne

Thank you very much, Abhishek. Anne, what is your vision?

Anne Bouverot

So, of course, I share a lot of what Abhishek said. One way to say this is that when I get online on my phone, I mean, I love San Francisco, I love Shanghai, but I’d like to have a wider choice. I don’t necessarily want to be transported to Silicon Valley, or transported to Shanghai, when I get into AI. And that’s a little bit of a joke, but if all the cultural representation, all the legal background, all the customs that are taken as just the de facto way you interact with people, if that’s the only choice, well, that’s just such a reduction of cultural diversity.

And I think it’s just not okay. It’s not just about being able to have access to a French AI or an Indian AI. It’s even more than that. If I’m interested in music and if I come from a particular area in France, well, I’d like to be able to have that community and its culture represented there. So I think that’s part of my vision.

Martin Tisne

Thank you. And if I can stay with you just a second, Anne: from a French perspective, from France’s point of view, how do you see culture and AI playing together? What does it look like? When I was a kid growing up in France, from a cultural perspective, there was a law that mandated a certain percentage of music on the radio to be sung in French, and a law that mandated a certain amount of movie productions to be in French. I actually think it was a good idea in retrospect; you’ll tell us what you think. And that’s ended up, it seems to me, with a certain amount of cultural patrimoine, as we say, existing.

So from a policy perspective in France, when it comes to artificial intelligence and culture, do you think that at some point there needs to be a set norm, like we did in movies and radio? What do you think?

Anne Bouverot

That’s a good question. I don’t know whether we need a set norm, but yes, there are mechanisms to encourage creation in France and in Europe, and that’s quite important. With every movie ticket that you buy, for a film from any country, a certain tax, a certain amount of money, goes to a fund that then helps French creators prepare whatever they want as their next film. And I think that’s a good thing. That mechanism doesn’t make it hegemonic; of course, we love culture from all over the world, but it helps ensure that there’s an element of French cultural creation.

And that’s what we definitely want to continue to have. We want people to have the ability to see that in France, but also all over the world, just like we love to see Indian movies or listen to Indian music, or some symphony or some movement. So that diversity needs to be maintained and needs to be ensured, including through some mechanisms to fund it. Yes.

Martin Tisne

Thank you very much, Anne. Abhishek, a similar question to you.

Abhishek Singh

I think if AI has to cover all aspects, then it has to be rooted in data sets that are diverse, and in any cultural context those data sets will include not only languages but also the culture, the heritage, the music, the movies, the songs and lots of folklore. In fact, if you look across India, if you go to the rural areas, there are lots of traditions which are not even well documented; they are not available in a digital format, they are known only to people. Recently I was watching a documentary on Netflix called Humans in the Loop.

It’s set in the Indian state of Jharkhand, which has a large tribal population, and there are these tribal women doing data annotation for an American firm. It shows that they are looking at leaves and pests, and they have to mark whether something is a pest or not. So this young girl sees an image of a pest and marks it as not a pest. Her manager comes down heavily on her and says: this is obviously a pest, how are you saying it’s not a pest? She says: this tree grows in the forest around where I live, and I know that this worm eats only leaves which are dying.

In a way, it helps the plants; it’s not a pest. So having this traditional knowledge built into the corpus of data sets on which we train AI models will be very, very vital if we have to ensure that AI doesn’t hallucinate, if AI is to become near to what a human is. So it becomes very important to capture this cultural context from all across the world, from all communities, all cultures, all traditions. A purely technological pursuit of AGI will not solve the problems that we are living with.

Martin Tisne

That’s a great example, thank you. Maybe staying with you, Abhishek, for a second, and I’ll come back to you, Anne, on the question of reciprocity. You talked about the data sets: communities and cultures, in all their diversity, are sharing their data with different AI models. What does it look like from the community perspective? Should they be involved in it? Should they have rights over the data? How do you think about it?

Abhishek Singh

It is a very interesting question, because when it is about sharing of data across companies and across industry, we have to frame frameworks which allow data to be used for public purposes, in a way which does not violate the privacy or the personal identity of the person to whom the data belongs, the data principal per se. So when it is data sharing, the community will need to be involved. If you don’t do that, then in the interest of business and of commercial requirements, the possibility of misusing the data goes up. So it’s very important to have standards, not only technical standards but community standards, rooted in the culture and the belief systems of the place the data is coming from, in order to ensure that the models and the applications…

Martin Tisne

Thank you. If I can go a little further on that question: there are the rights of the individual and the rights of communities over the data. In the way that you’re working, is there also reciprocity, in the sense that if data about them is used for a particular purpose, the community should benefit from it, whether through a translation or another device? How do you think about that?

Abhishek Singh

You need to think about how different use cases may have different applications. For example, take data about agriculture: if I have aggregate data about a particular area, and it is used to advise farmers on what they should sow for maximum benefit, and at what time they should sow, then that data should be shareable, and that’s to the benefit of everyone. But take health data, for example: there, the individual might not want to share that data with the larger ecosystem. So I think it will be context-specific, and we cannot have general rules about sharing of data and reciprocity principles across different sectors.

Martin Tisne

Thank you very much. Anne, I have a similar question for you on this question of reciprocity. What’s your take?

Anne Bouverot

I think that’s a very profound question. Part of the reason why you want to share cultural data is so that cultures are preserved and you don’t end up with one or two or three cultures in the world, but something more diverse. So it is in the interest of a cultural group, of a civilization, that in the world of AI this culture is represented; from that perspective, you have a very natural reciprocity loop. But at the same time, creators are saying: I don’t want my data to be used if I don’t have a mode of being compensated or recognized, or a way to oppose it. And so you have this tension with artists, for example, who say: I want my rights to be maintained and I want some type of compensation

if this is being used to feed AI models and for people to earn money out of it. But on a collective basis, you do want that culture to be represented. So I’m not sure I have a solution, but I very clearly see the tension. One way we can navigate it is to have a right of opposition for specific artists, so that they can say: no, my data, my creations are not going to be used. And at the same time, you can certainly have historical information, and things that are not so subject to remuneration for living artists, be part of the general cultural data that you use to train AI. But beyond these two obvious things, I’m not really sure.

So we need to continue to work on this.

Martin Tisne

Thank you. And again, just to go a bit deeper: it really is a fascinating question, because from the perspective of the communities whose data it is, it’s data about them. As you say, you want people to know about your culture, you want the culture to be preserved, and at the same time you want a certain degree of agency over how the data is used. In an earlier panel we were talking about indigenous data sovereignty, and about the Maori community in New Zealand, and the degree to which, as I understand it, in Maori culture any data, any information that pertains to Maori culture is effectively part of Maori culture. So there’s a real question of agency.

My question is this. In the run-up to the Paris summit, when we were working together, we talked quite a bit about the relation between open source AI on one hand, and the governance of the data on the other, with that data then controlled in different ways. So how do you think about this balance? Because it strikes me that getting the balance right between the open source components on one hand, and a more controlled approach around data governance on the other, is the special sauce. And I’ll come to the same question for you in a second. What do you think?

Anne Bouverot

Yeah, I completely agree. And maybe, as Abhishek was saying, the example of health data is a good one there, because for cultural data you want the general benefit and you want to preserve artists’ rights; those are the two dynamics. For health data, as an individual, as a patient: if you’re asked, do you want to protect your personal data, the answer is yes. If you’re asked, are you willing to share your data with other people who have, or are at risk of, a similar illness, so that it can help them, the answer is also yes. So how do you balance the two? You need to find ways to share data on a platform, or in a way, that you have trust in.

And so it needs to be privacy-preserving. It needs to be held by an actor you trust: even if you don’t go and read all the terms and conditions, you need to know that it’s an institution or a third party you can trust. And then you want to be able to rely on that third party to make the right decisions: yes, sharing the data to enable research and find new cures, but not sharing it with insurance companies so that you can be charged a different rate depending on your personal situation. And then, when you get into sovereignty, maybe you’re happy for this to be shared with innovative startups in your country or your region that will develop new cures, but maybe not with some other actors.

So you get to a number of different levels and questions. And for that, having third parties, trusted third parties, that can act for you and make the right decisions is, I think, very important.

Martin Tisne

Thank you. Let me take the same question to you. We’ve talked a lot about open source AI over the course of the week. How do you see the balance between open source on the one hand, and on the other hand the question of the cultural data that we’ve been talking about?

Abhishek Singh

Again, ultimately I’ll go back to the end objectives. What is the purpose for which we are sharing the data? Is it serving public interest or private interest? Is there a benefit for the user to whom the data belongs? Take health data, for example. We are past COVID, but we keep hearing about outbreaks of flu and other ailments. If aggregate data about the incidence of such diseases, and its linkages with other factors, environmental factors, weather, rain, is shared, then people can think of devising AI-enabled solutions that integrate various data sets and try to see why, in a particular geography, in a particular locality, something is happening.

That is the public interest. So we will have to define, on a case-by-case basis, whenever data is being shared, whether open source or in a proprietary solution: what is the end objective, what is the problem I am going to solve, and is it serving the larger interest of the community? Is it serving the larger public interest, or is it being done to benefit a few corporations? Take the example she gave about insurance companies: if data about my health leads to an increase in my insurance premiums, that is not fair, because they are linking that data back to the individual to whom it belongs. So we will need to think of privacy-preservation techniques and anonymization techniques, so that in no way is the data principal, to whom the data belongs, harmed in an adverse manner.

So we will have to do this in a very nuanced manner; there is no one-size-fits-all solution. If we do that, we will avoid the risks dominating the narrative, and we will move towards the positives.

Martin Tisne

Thank you very much. Then there’s a question I can’t resist asking you, which is: what’s your definition of sovereignty? You mentioned the term, we’ve talked a lot about it this week, and in the context of this conversation it’s really interesting. There’s a question of sovereignty from a nation’s perspective; there’s a question of sovereignty, as in the Maori example of indigenous data sovereignty, from a community’s perspective; and then, since you’ve both been talking about health, there’s the question, at an individual level, of the sovereignty I have over data about me. So with your experience, coming towards the end of the summit, and the experience that India has: when you think about sovereignty and AI, what do you think of?

Abhishek Singh

I feel that sovereignty, of course, is traditionally a political science concept, wherein nations which are sovereign have complete control over what they do and how they do it, with full control over their decisions. When you apply that to technology, and to AI specifically, the same concepts apply: what I want to do, with whom I want to do it, and how I want to do it. Nobody else should make decisions on my behalf. So ideally, a complete sovereign AI stack will mean that we have complete control over all five layers of AI: the energy layer, the data center infrastructure, chips, models, and applications and use cases.

We should have complete control over it. But the technology is evolving right now, and I don’t think any country has complete control over the entire AI stack. In the context of India, we are there on energy sufficiency; we have the data centers; we have our models and our applications; but we don’t have the complete stack. We have the capability to design chips: in three to five years we will design our own chip, and in five to ten years we’ll be able to have a fab as well. In the short term, if I can decide which chip I want to use, how I want to use it, and how I procure it, rather than being subject to conditionalities or being forced into something, that will be sovereignty.

So the same concept of sovereignty that applies in political science, wherein complete control over decisions lies with the sovereign government, should be the way we look at sovereignty in AI as well.

Martin Tisne

Thank you very much. As we’re ending, and we’re now at time, feel free to weave in the question of sovereignty. In the wake of President Macron’s state visit and the bilateral relationship between France and India, what do you both see, starting with Anne and finishing with you, Abhishek, as opportunities for France and India to jointly work on global norms and global approaches for a more contextual, culturally inclusive approach to AI?

Anne Bouverot

Well, I’ll try to be short, but this is the year of joint innovation between India and France. There are many areas where we’re collaborating and will continue to collaborate. Clearly, Current AI and this work on multilingual AI is one. Working on AI that is resilient and sustainable by design, as we were just discussing with Abhishek, is clearly a priority, as is joint research. And then I can’t resist weaving in the work on sovereignty. No one actually has it, not even the U.S.: they don’t have all the chips. Nobody can do everything alone. I believe sovereignty means having a choice and building alternative solutions, and I really think we can and will jointly build alternative solutions between France and India.

Abhishek Singh

I echo her, and in fact the partnership between India and France has been there for quite some time. Last year we co-chaired the AI Action Summit with France, and the partnership has continued this year. This year, of course, as you know, we have launched a year of innovation, and many more activities were announced by President Macron and our Prime Minister last week. We are looking forward to joining you at the World Tech in the next few months, and there are many more activities: partnership at the university level, the research level, the business level, and the government level. So I strongly believe that, working jointly with a trusted partner like France, with our complementary strengths, India and France can present an approach to building solutions that can become an example for the whole world.

Martin Tisne

Thank you very much. It was an honour to launch this in Paris, and a pleasure to launch this partnership in India. Thank you.

Announcer

Hello, and thanks, Martin. Abhishek Singh sir, I request you to stay on stage. And Aya, we’d love to have you on to launch the Global Innovation Challenge, in the spirit of what Anne said. And Amitabh Nag sir as well. Please.

Abhishek Singh

Am I going first? Okay, great. So: great session, great thoughts, great demo. All of us have seen the demo of the reference device, which has been built in partnership between Bhashini and Current AI. And I must mention that it was just a few weeks back that we had this discussion, because I had been discussing with Martin, after the discussions and announcements on public interest AI, what Current AI would do with the 400 million euros that they have raised. And I was saying: let’s do something which can really make an impact, and if we can do something at the Impact Summit, it will be worthwhile. Kudos to the teams: they have built this collaborative design, designed by engineers from both Bhashini and Current AI, in such a way that it’s a platform, a prototype on which we can innovate.

It’s completely open source. It’s hackable, it’s privacy-preserving, it’s multilingual. And with on-device AI, this prototype is capable of functioning in remote locations, not only in India but anywhere else in the world where connectivity is a challenge, or where, for any reason, an earthquake or some natural calamity, we can’t have connectivity. That can be really transformational for people to access services. And in partnership with Current AI and Bhashini, it’s my honor and privilege to announce the India AI Innovation Challenge, which will give an opportunity to researchers, engineers, developers and entrepreneurs to build on this prototype. The prototype will be available in an open source manner for everyone to hack: you can make it smaller, you can make it sleeker, you can solve individual use cases for different sectors. It’s based on an open source software and hardware design, and the use cases one can think of are limitless.

So there will not be one but multiple solutions that can be built on it. We are opening it today: as the date here says, submissions will open on 25th Feb, when we will launch the challenge on our website, where applications can be submitted. There will then be some time to build the actual device, and those who win will get a very handsome reward, funded by both Bhashini and Current AI. Together we will try to ensure that we are able to build a product that the whole world can use.

Amitabh Nag

So we will continue to support this effort through our quantization mechanism, and technical support will also be available with respect to model enrichment and so on. This will be a joint effort: people are supposed to put in the effort, come back to us with their challenges, and we will work on them together.

Ayah Bdeir

Can I just say, for Amitabh, because maybe he was trying to say: Bhashini is offering, I think, a $110,000 prize to the winners. Maybe, I guess, people should make a demand; how does the number increase? On your way out, please make a request, everyone, for the number to go up. A big part of it is also for participants to make sure that they have support while they’re developing their hardware and software, and to showcase the work online to inspire many other people. Really, the point of it is to expand imagination and start this conversation about making your own AI, about AI being personal and multilingual and solving communities’ and individuals’ own problems. Today it’s a piece of hardware; tomorrow it could be something in the software; the day after, it could be in data. So really, this is the beginning of the journey. Thank you so much for coming, everyone. Thank you for being such good partners. Thank you, Amitabh, and the Bhashini team. Thank you to the Current AI team. Thank you, Martin, for bringing us together. Have a great rest of the week, and hopefully some rest for you. Bye.

Related Resources: Knowledge base sources related to the discussion topics (40)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Sushant Kumar asked how a paradigm can be built that makes artificial intelligence work for everyone and stated that this was the purpose of the gathering.”

The knowledge base records Sushant’s opening question about a paradigm that makes AI work for everyone as the session’s purpose [S5].

Confirmed (high)

“He introduced the theme – “The case for personal, local and multilingual AI”.”

The same theme title is cited in the knowledge base for the session [S5].

Confirmed (high)

“A joint effort between Bhashini and Current AI, coordinated by Kalpa Impact, was announced to showcase a seminal open‑source AI hardware device.”

The collaboration between Bhashini and Current AI, orchestrated by Kalpa Impact, is confirmed in the knowledge base [S5]; the hardware focus is not mentioned, but the partnership is.

Additional Context (medium)

“The video emphasized real‑world impact, last‑mile delivery and a vision of AI that is not governed by any single country or corporation.”

The knowledge base highlights last-mile delivery as a key challenge for inclusive AI deployments [S116]; broader governance concerns are discussed in the summit’s equity narrative [S5].

Additional Context (medium)

“Sushant noted a logistical hurdle – the prototype had to clear customs before arriving.”

Customs clearance delays for imported technology are documented in the knowledge base regarding Indian customs procedures [S117].

Additional Context (low)

“Ayah’s motivation to join Current AI was driven by Bhashini’s work on linguistic diversity and the 250‑plus language models.”

Bhashini’s focus on linguistic diversity is confirmed in the knowledge base, though the exact number of language models is not specified [S5].

Additional Context (low)

“Ayah positioned the collaboration as a public‑good model in which partners co‑design, build and then release technology openly.”

The concept of digital public goods and open-source collaboration is discussed in the knowledge base as a principle for inclusive AI development [S121].

External Sources (122)
S1
ElevenLabs Voice AI Session & NCRB/NPM Fireside Chat — -Shailendra Pal Singh: Role/title not explicitly mentioned, but appears to be a co-presenter/expert on Bhashini translat…
S2
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — mostly from my understanding and experience with the English that has happened, in the past. Yeah. interesting points, P…
S3
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Shailendra Pal Singh- Senior General Manager, Bhashani
S4
Inclusive AI_ Why Linguistic Diversity Matters — – Shalindra Pal Singh- Andrew Tergis
S5
Inclusive AI_ Why Linguistic Diversity Matters — Speakers:Shalindra Pal Singh, Andrew Tergis Speakers:Sushant Kumar, Andrew Tergis, Amitabh Nag, Ayah Bdeir Speakers:Su…
S6
Inclusive AI_ Why Linguistic Diversity Matters — – Ayah Bdeir- Martin Tisne
S7
Inclusive AI_ Why Linguistic Diversity Matters — Speakers:Anne Bouverot, Martin Tisne Speakers:Ayah Bdeir, Martin Tisne
S8
Inclusive AI_ Why Linguistic Diversity Matters — -Sushant Kumar- Session moderator/host
S9
Building Public Interest AI Catalytic Funding for Equitable Compute Access — – Dr. Shikha Gitao- Andrew Sweet- Sushant Kumar
S10
Mobile Working Group Peer Reviewed Document — –  Device : …’a piece of equipment with the mandatory capabilities of communication and the optional capabilities of se…
S11
Foreword — 16. BT. 2019. BT’s Cyber Index reveals the scale of today’s cyber threat . https://newsroom.bt.com/ bts-cyber-index-reve…
S12
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S13
Inclusive AI_ Why Linguistic Diversity Matters — – Amitabh Nag- Ayah Bdeir – Ayah Bdeir- Martin Tisne
S14
Inclusive AI_ Why Linguistic Diversity Matters — -Amitabh Nag- CEO of Bhashini
S15
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — mostly from my understanding and experience with the English that has happened, in the past. Yeah. interesting points, P…
S16
Open Forum #30 High Level Review of AI Governance Including the Discussion — – **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology Abhishek Sing…
S17
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S18
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S19
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S20
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S21
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S22
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S23
How to make AI governance fit for purpose? — – Anne Bouverot- Chuen Hong Lew – Jennifer Bachus- Anne Bouverot
S24
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Kumar emphasizes the need to translate theoretical frameworks about AI democratization into concrete, actionable solutio…
S25
https://app.faicon.ai/ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-afternoon-session — Thank you, Prime Minister, for having us. As my colleagues have said, India will no doubt be a powerhouse in AI in many …
S26
Building Scalable AI Through Global South Partnerships — India knows what it is to be deprived of or denied technology. India knows what it is to actually try and work your way …
S27
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S28
WS #93 My Language, My Internet – IDN Assists Next Billion Netusers — AUDIENCE: Sure, I am happy to answer this. Actually, for the challenges that affect the uptake of IDN or EAI or even …
S29
Artificial intelligence (AI) – UN Security Council — Moreover, the lack of transparency can erode public trust. If people cannot see or understand how decisions affecting th…
S30
Data first in the AI era — AI system, and particularly I’m thinking like large models like GPT-4, Lama, are trained on enormous data sets that are …
S31
Building the AI-Ready Future From Infrastructure to Skills — Discussion point:Importance of open ecosystems for innovation and market participation Discussion point:Open-source fra…
S32
Building the AI-Ready Future From Infrastructure to Skills — Zacharia emphasizes AMD’s pledge to base both hardware and software on open standards, enabling innovation and avoiding …
S33
Open Forum #73 Indigenous Peoples Languages in a Digital Age — Several panelists expressed support for open-source approaches as a way to provide communities greater control over thei…
S34
How Multilingual AI Bridges the Gap to Inclusive Access — This discussion focused on multilingual AI development and cultural diversity in artificial intelligence, taking place a…
S35
How Multilingual AI Bridges the Gap to Inclusive Access — Markus Reubi from Switzerland emphasized that AI can only serve the public good if it serves all languages and cultures,…
S36
Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170 — Sunil Abraham:Thank you so much for that. And a special thanks to all my friends and colleagues at CGI.br I’m very grate…
S37
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Ammari highlighted META’s open-source approach to large language models, explaining, “META has adopted an open source me…
S38
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Low to moderate disagreement level with high strategic significance. While speakers agreed on fundamental goals of lingu…
S39
Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64 — In conclusion, the analysis underscores the pressing need to address biases, promote transparency, preserve lesser-known…
S40
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S41
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Moderator – Yves Poullet:Thanks Gabriela for this marvellous introduction. I think this introduction will help us to fix…
S42
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — – Scaling regulatory approaches effectively in large, diverse markets 6. **Sustainability Focus**: Regulatory approache…
S43
What is it about AI that we need to regulate? — Potential Negative Consequences of Individual-Centric Digital Rights FramingSeveral sessions at the Internet Governance …
S44
Technology Regulation and AI Governance Panel Discussion — Joel Kaplan emphasized the importance of maintaining regulatory environments that support AI development through access …
S45
Lightning Talk #215 Governance in Citizen Science Technologies — The tension between individual data ownership rights and community-validated data as commons needs further resolution
S46
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — In tandem, debates around data governance have underscored the necessity of finding a balance between data utility, priv…
S47
Connecting open code with policymakers to development | IGF 2023 WS #500 — Efficient policy measures and rules are necessary to govern data usage while preserving privacy. GDPR mandates user cons…
S48
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — The discussion of data privacy highlights the need for a sophisticated interaction of privacy rules for data sharing for…
S49
India allocates $1.24 billion for AI infrastructure boost — India’s government has greenlit a ₹10,300 Crore ($1.24 billion) fundingprojectto enhance the country’s AI infrastructure…
S50
India faces AI challenge as global race accelerates — China’sDeepSeekhas shaken the AI industry by dramatically reducing the cost of developing generative AI models. While gl…
S51
AI Innovation in India — This comment is deeply insightful as it reframes the AI revolution not just as technological progress, but as a fundamen…
S52
AI to transform India’s $400 billion IT ambition by 2030 — India’s IT sector could reach$400 billion by 2030, Prime Minister Narendra Modi said in an interview with ANI, highlight…
S53
Advancing Scientific AI with Safety Ethics and Responsibility — Any governance structure that conflates open source with danger makes a significant mistake, as open source tools are cr…
S54
Advancing Scientific AI with Safety Ethics and Responsibility — -Balancing Open Science with Security: Panelists explored the challenge of preserving open science benefits while preven…
S55
Towards a Safer South Launching the Global South AI Safety Research Network — Artificial intelligence | Data governance | Open source Banifatemi describes ongoing work in Bangalore and San Francisc…
S56
WS #205 Contextualising Fairness: AI Governance in Asia — – Tejaswita Kharel: Project Officer at the Center for Communication Governance at the National University Delhi. Works o…
S57
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Summary:The discussion showed remarkable consensus on AI’s transformative potential, India’s leadership role, and the ne…
S58
Laying the foundations for AI governance — ## Societal and Democratic Implications ## Technical Challenges and Industry Perspectives ### Technology’s Impact on D…
S59
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S60
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Preserving multilingual societies is essential because different language structures enable different ways of thinking a…
S61
AI for Social Empowerment_ Driving Change and Inclusion — The required policy responses span multiple domains:
S62
WS #119 AI for Multilingual Inclusion — Promoting Language Equity and Inclusion Public services should provide materials and support in multiple languages to p…
S63
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S64
Inclusive AI_ Why Linguistic Diversity Matters — “So this is our prototype open AI inference device”[44]. “The hope is that anyone could feel empowered to connect up to …
S65
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Durga argues that AI applications should run inference locally so that the user experience does not degrade when network…
S66
All hands on deck to connect the next billions | IGF 2023 WS #198 — For community networks to succeed, a supportive policy and regulatory framework is essential. Countries must develop pol…
S67
Inclusive AI_ Why Linguistic Diversity Matters — The France-India partnership exemplified how countries with complementary strengths can collaborate to enhance rather th…
S68
Building Trusted AI at Scale – Keynote Anne Bouverot — Namaste. Bonjour. Excellencies, distinguished guests, dear guests. Dear friends. Thank you so much for welcoming me here…
S69
Open Internet Inclusive AI Unlocking Innovation for All — The discussion revealed sophisticated understanding of the tensions surrounding open-source AI development. Prince offer…
S70
The strategic shift toward open-source AI — The release of DeepSeek’s open-source reasoning model in January 2025, followed by the Trump administration’s July endor…
S71
Inclusive AI_ Why Linguistic Diversity Matters — “So this is our prototype open AI inference device”[44]. “The hope is that anyone could feel empowered to connect up to …
S72
Inclusive AI_ Why Linguistic Diversity Matters — The device integrates multiple AI technologies to provide a complete language processing pipeline. It can convert speech…
S73
Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170 — Sunil Abraham:Thank you so much for that. And a special thanks to all my friends and colleagues at CGI.br I’m very grate…
S74
How nonprofits are using AI-based innovations to scale their impact — This panel discussion focused on an AI cohort program for nonprofits that ran from September to December, anchored by Pr…
S75
AI for agriculture Scaling Intelegence for food and climate resiliance — Evidence:Mahavistar and Bharatvistar designed with inclusion from the beginning, collaboration between Maharashtra gover…
S76
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Judith Okonkwo: Yeah, I’m gonna let Judith come in for for a minute Judith are you you’re still with us Yes, I am So sor…
S77
AI for Social Good Using Technology to Create Real-World Impact — “Our collaboration with the Indian Institute of Science, and in particular on Project Vani, has now completed its second…
S78
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Ganesh describes successful collaboration through a consortium of 9 academic institutions working via a Section 8 not-fo…
S79
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Low to moderate disagreement level with high strategic significance. While speakers agreed on fundamental goals of lingu…
S80
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S81
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Moderator – Yves Poullet:Thanks Gabriela for this marvellous introduction. I think this introduction will help us to fix…
S82
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai_-why-linguistic-diversity-matters — Means India has got about, means we were talking to Survey of India, and they have about 16 lakh places named, which are…
S83
Lightning Talk #215 Governance in Citizen Science Technologies — The tension between individual data ownership rights and community-validated data as commons needs further resolution
S84
What is it about AI that we need to regulate? — Potential Negative Consequences of Individual-Centric Digital Rights FramingSeveral sessions at the Internet Governance …
S85
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Summary:Both speakers acknowledge the challenge of making government data available for AI innovation while protecting s…
S86
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S87
Welfare for All Ensuring Equitable AI in the Worlds Democracies — The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutio…
S88
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S89
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S90
Open Internet Inclusive AI Unlocking Innovation for All — The tone is optimistic and forward-looking throughout, with both speakers expressing confidence in alternative approache…
S91
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S92
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Deborah Rogers:I think one of the most interesting examples of how mobile network operators have really had a big impact…
S93
How Humans Sense / Davos 2025 — The overall tone was enthusiastic and engaging, with the speaker using humor, personal anecdotes, and even a tattoo demo…
S94
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Azeem Azhar: Good morning, and welcome to our panel discussion today on quantum computing, titled From High Performanc…
S95
Opening keynote — Doreen Bogdan-Martin:Good morning, and welcome to the AI for Good Global Summit. Let me start by thanking our more than …
S96
[Online Event] Cables, Novels and Nobels: The Journey of Diplomacy and Literature  — Fiction, in particular, is lauded for its role in personal development and enriching readers with diverse cultural insig…
S97
Leaders TalkX: Local Voices, Global Echoes: Preserving Human Legacy, Linguistic Identity and Local Content in a Digital World — Reflecting on personal experiences, the speaker showed how Danish, their mother tongue, naturally facilitates deeper con…
S98
WS #119 AI for Multilingual Inclusion — The tone was largely informative and collaborative, with speakers sharing insights and experiences from different perspe…
S99
How Trust and Safety Drive Innovation and Sustainable Growth — The discussion concluded with panelists predicting what AI summits might be called in five years’ time. Their responses …
S100
United Nations High-Level Leaders’ Dialogue — Magdalena Sepulveda Carmona: As the Secretary General remind us in her preliminary remark, the WSIS has been instrumenta…
S101
Beyond human: AI, superhumans, and the quest for limitless performance & longevity — High level of consensus with significant implications for reframing how society approaches aging, disability, and human …
S102
WSIS Prizes Ceremony — Participants of the event are warmly thanked, recognising the crucial role played by audience engagement and interaction…
S103
Summit Opening Session — These were the days marked by vision and by hope. because collaboration and connectivity are in themselves expressions o…
S104
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — Approaches to digital sovereignty will vary, depending on a country’s political and legal systems. Legal approaches incl…
S105
WS #172 Regulating AI and Emerging Risks for Children’s Rights — Nidhi Ramesh: Perfect. All right. Then I’ll just start again. Thank you so much, Leanda. That’s such an interesting …
S106
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — In summary, the analysis raises critical concerns regarding data protection, privacy, and ethical considerations. It und…
S107
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — The analysis explores different perspectives on technology development, highlighting concerns, and advocates for a proac…
S108
Driving Indias AI Future Growth Innovation and Impact — But there was also a lot of fear around AI about trust factors, about privacy, data, sovereignty, multiple issues about …
S109
From India to the Global South_ Advancing Social Impact with AI — And everybody told me it’s time for a holiday. Everyone is taking exams, midterms. Don’t do it now, do it later. I said …
S110
Powering AI Global Leaders Session AI Impact Summit India — “And what that really means is the technology continues to accelerate.”[14]. “going to become even faster and faster.”[1…
S111
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — The digital future must be built with equity, with ethics and, above all, with solidarity between all nations. That is w…
S112
Harnessing Collective AI for India’s Social and Economic Development — The discussion began with the moderator asking the audience whether they believed technology was reserved for the elite,…
S114
Transforming Health Systems with AI From Lab to Last Mile — Impact:This comment expanded the scope of the discussion beyond clinical decision support to population health and equit…
S115
From Innovation to Impact_ Bringing AI to the Public — Ask the questions and enhance your curiosity as a student. You could be in tier 3, you could be in tier 1 city. But the …
S116
KEY STATISTICS, FINDINGS AND RECOMMENDATIONS — 1. Last-mile delivery continues to pose a challenge
S117
THE 2016 NATIONAL TRADE ESTIMATE REPORT — India’s customs officials generally require extensive documentation, inhibiting the free flow of trade and leading to fr…
S118
Economic Diplomacy: India’s Experience — – Constructing and validating the facility to international standards; – Obtaining required permissions and clearances; …
S119
TREATY ON THE EURASIAN ECONOMIC UNION — on the necessity to obtain the missing documents before arriving into the territory of another member State;
S120
From India to the Global South_ Advancing Social Impact with AI — you know first I’m sorry I got a bit late I was in hall number 17, 19 you know what was happening there they had identif…
S121
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Díaz Hernández calls for a fundamental change in how we approach technology development. She argues for creating technol…
S122
Session — Susan Ariel Aaronson: Thank you so much. So I am one of the fortunate few, I’d say, whose grant has not been cut as of y…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sushant Kumar
2 arguments · 109 words per minute · 999 words · 546 seconds
Argument 1
AI must be democratized to serve every community
EXPLANATION
Sushant frames the session around making AI work for everyone, emphasizing that the goal is to develop a paradigm where AI benefits all populations, not just pilot projects or promises. He stresses the need for clear use cases and last‑mile delivery to achieve real‑world impact.
EVIDENCE
He opens the session by asking how to develop a paradigm that can make AI work for everyone and states that the journey is no longer about pilots but about population reach, clear use cases, and last-mile delivery, describing it as real-world impact [1][12-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kumar stresses translating AI democratization frameworks into concrete solutions for large populations and highlights India’s effort to democratize AI at scale [S24][S26].
MAJOR DISCUSSION POINT
Democratizing AI
AGREED WITH
Ayah Bdeir, Martin Tisne
Argument 2
Offline inference runs four to five models simultaneously on the device
EXPLANATION
Sushant highlights that the prototype can operate entirely offline, running multiple AI models at once, which demonstrates the feasibility of delivering AI services without internet connectivity. This capability is crucial for reaching remote or underserved areas.
EVIDENCE
He notes that the device is offline and that four or five models are operational on the hardware, calling this a “no mean feat” [98-102].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The demonstration showed the device operating offline with four models running concurrently, emphasizing real-world impact [S4].
MAJOR DISCUSSION POINT
Offline multi‑model inference
AGREED WITH
Andrew Tergis, Shalindra Pal Singh
Ayah Bdeir
5 arguments · 163 words per minute · 1650 words · 606 seconds
Argument 1
Current AI operates as a public‑interest partnership to build open, collaborative AI
EXPLANATION
Ayah describes Current AI as a public‑private partnership created to develop AI for the public interest, bringing together philanthropy, government and the private sector. The model focuses on collaborative development of open‑source technology that is released as a public good.
EVIDENCE
She explains that Current AI was born out of the AI Action Summit as a public-private partnership with a mission to create AI for the public interest, working with partners to identify common gaps and release technology as a public good [135-141].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Current AI is described as a public-private partnership focused on public-interest AI development and open collaboration [S24][S5].
MAJOR DISCUSSION POINT
Public‑interest AI partnership
AGREED WITH
Sushant Kumar, Martin Tisne
Argument 2
Personal frustration with Arabic voice‑recognition drives the push for multilingual AI
EXPLANATION
Ayah shares her personal experience of poor Arabic voice‑recognition on popular messaging platforms, which required her to spend more time correcting errors than communicating. This frustration motivates her commitment to multilingual AI that serves Arabic speakers.
EVIDENCE
She recounts that in the Arab world, voice recognition in Arabic is never good enough, leading her family to converse in English on WhatsApp and spend extra time correcting the technology [145-151].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bdeir recounts poor Arabic voice-recognition forcing her family to use English, reflecting broader Arabic language challenges [S5][S28].
MAJOR DISCUSSION POINT
Language bias in voice tech
AGREED WITH
Amitabh Nag, Andrew Tergis
Argument 3
Proprietary embodied AI devices risk hidden data collection and lack of transparency
EXPLANATION
Ayah warns that emerging embodied AI devices (e.g., glasses, robots, smart speakers) often operate without user knowledge of how they collect data or are trained, potentially infringing on privacy and reinforcing Western language bias.
EVIDENCE
She cites examples of embodied AI devices entering personal spaces, mentions Meta’s glasses performing facial recognition on strangers, and points out that such devices continuously record data and are often trained on Western languages without user awareness [195-205].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about opaque data collection in wearable AI devices are highlighted, with broader warnings about transparency eroding trust [S5][S29].
MAJOR DISCUSSION POINT
Privacy risks of embodied AI
Argument 4
Open‑source hardware prevents lock‑in and gives communities control over the stack
EXPLANATION
Ayah argues that open‑source hardware, like Linux for software, empowers communities to innovate without dependence on proprietary ecosystems, preventing a single company from controlling the entire AI stack.
EVIDENCE
She reflects on 15 years in open-source hardware, noting its power to let people do what they want, comparing it to the impact of Linux on software development [210-213].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source ecosystems are advocated to avoid vendor lock-in and empower communities, with industry pledges to use open standards [S31][S32][S33].
MAJOR DISCUSSION POINT
Open‑source hardware as empowerment
AGREED WITH
Andrew Tergis, Abhishek Singh
DISAGREED WITH
Abhishek Singh
Argument 5
Call for higher prize funding and broader participant support
EXPLANATION
Ayah suggests increasing the prize amount for the innovation challenge and ensuring participants have adequate support, emphasizing that better funding will encourage more robust development and broader participation.
EVIDENCE
She mentions a possible $110,000 prize, asks participants to request higher amounts, and calls for support for hardware and software development during the challenge [425].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for increased prize money and support are echoed by the announced $110,000 funding and technical assistance for the innovation challenge [S5][S4].
MAJOR DISCUSSION POINT
Funding and support for the challenge
AGREED WITH
Abhishek Singh
DISAGREED WITH
Abhishek Singh
Amitabh Nag
3 arguments · 164 words per minute · 814 words · 297 seconds
Argument 1
Bhashini was created to empower mother‑tongue users and address linguistic bias
EXPLANATION
Amitabh explains that Bhashini originated from the need to support people’s mother tongues, which are often marginalized in formal education and technology, aiming to reduce linguistic bias and enable native‑language interaction with AI.
EVIDENCE
He describes growing up with Bengali as his mother tongue, the challenges of switching to Hindi and English in school, and the resulting linguistic nuance that Bhashini seeks to address [108-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhashini’s mission to support mother-tongue users and reduce linguistic bias is emphasized in discussions of multilingual AI initiatives [S4][S5].
MAJOR DISCUSSION POINT
Empowering mother‑tongue users
Argument 2
Bhashini now supports 22+ languages, adding tribal Bheeli and aims for 36 total
EXPLANATION
Amitabh outlines the current language coverage of Bhashini, noting 22 supported languages, an additional 14 text‑only languages for a total of 36, and the recent digitisation of the tribal Bheeli language, which lacks a script.
EVIDENCE
He states that the system currently covers 22 languages, has 14 more text languages for a total of 36, and recently added the tribal Bheeli language to the platform [170-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Current coverage of 22 languages, recent addition of the tribal Bheeli language, and a target of 36 languages are documented [S4][S5][S35].
MAJOR DISCUSSION POINT
Expanding language coverage
AGREED WITH
Ayah Bdeir, Andrew Tergis
Argument 3
Bhashini will provide quantization support and model enrichment for challengers
EXPLANATION
Amitabh commits Bhashini to continue supporting the innovation challenge by offering technical assistance for model quantization and ongoing model enrichment, ensuring participants can build on the prototype effectively.
EVIDENCE
He says Bhashini will continue to support quantization mechanisms and provide technical assistance for model enrichment to challenge participants [424].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhashini committed to offering quantization mechanisms and ongoing model enrichment for challenge participants [S5].
MAJOR DISCUSSION POINT
Technical support for the challenge
Andrew Tergis
3 arguments · 161 words per minute · 546 words · 202 seconds
Argument 1
Demonstration shows a handheld device delivering offline, multilingual AI to users
EXPLANATION
Andrew presents a prototype handheld AI inference device that can run locally without connectivity and supports multiple languages, allowing users to develop and run their own applications on the device.
EVIDENCE
He describes the prototype as a handheld device designed for any number of users and use cases, emphasizing offline operation and multilingual capability, and mentions a flagship application for vision-impaired users that works in 22 languages [52-55][58-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The handheld prototype operated offline, supporting multiple languages and real-world use cases [S4][S5].
MAJOR DISCUSSION POINT
Handheld offline multilingual AI
AGREED WITH
Ayah Bdeir, Amitabh Nag
Argument 2
The device runs a full pipeline (ASR → translation → LLM → TTS) locally on a handheld
EXPLANATION
Andrew details the end‑to‑end processing chain on the device: automatic speech recognition converts spoken input to text, neural machine translation translates to English, a large language model generates an answer using image data, then translation and text‑to‑speech return the response in the user’s native language.
EVIDENCE
He explains the sequence: ASR to text, MMT to English, LLM with image data, MMT back to native language, and TTS to audio, all executed on the device [58-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The device’s ability to run several models locally, enabling end-to-end speech-to-speech processing, is highlighted in the offline multi-model demonstration [S4].
MAJOR DISCUSSION POINT
Full AI pipeline on‑device
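The processing chain Andrew describes can be sketched as a simple orchestration loop. The sketch below is illustrative only: all four model functions are hypothetical placeholders (the session does not name the actual on-device models), standing in for locally loaded ASR, translation, LLM, and TTS components.

```python
# Minimal sketch of the on-device chain: ASR -> translation -> LLM -> translation -> TTS.
# Every function below is a hypothetical placeholder standing in for a locally
# loaded model; the real device would call quantized on-device models instead.

def asr(audio: bytes) -> str:
    """Speech recognition: native-language audio to native-language text."""
    return audio.decode("utf-8")  # placeholder: treat the audio as encoded text

def translate(text: str, target: str) -> str:
    """Machine translation placeholder; tags the text with the target language."""
    return f"[{target}] {text}"

def llm_answer(prompt: str, image=None) -> str:
    """LLM placeholder: answers the prompt, optionally grounded in image data."""
    grounding = " (with image context)" if image else ""
    return f"answer to: {prompt}{grounding}"

def tts(text: str) -> bytes:
    """Text-to-speech placeholder: returns synthetic audio bytes."""
    return text.encode("utf-8")

def run_pipeline(audio_in: bytes, native_lang: str, image=None) -> bytes:
    """Run the full offline chain and return audio in the user's language."""
    text = asr(audio_in)                        # spoken input -> text
    english = translate(text, "en")             # native language -> English
    answer = llm_answer(english, image)         # LLM reasons over text (+ image)
    localized = translate(answer, native_lang)  # English -> native language
    return tts(localized)                       # text -> audio for the user
```

Because each stage consumes only the previous stage's output, any individual model can be swapped without changing the orchestration, which is what makes the design hardware- and model-agnostic.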
Argument 3
Uses Jetson hardware but is platform‑agnostic, enabling any model deployment
EXPLANATION
Andrew notes that while the current prototype runs on an NVIDIA Jetson platform, the software architecture is not tied to that hardware, allowing future deployment on other platforms and supporting any model the user wishes to run.
EVIDENCE
He states the prototype is based on the NVIDIA Jetson processing platform but has been used to support other platforms because the processing does not depend on the hardware, and they are working on the ability to deploy any model [88-90].
MAJOR DISCUSSION POINT
Hardware‑agnostic design
AGREED WITH
Ayah Bdeir, Abhishek Singh
Shalindra Pal Singh
1 argument · 136 words per minute · 108 words · 47 seconds
Argument 1
Model quantization was achieved without sacrificing accuracy
EXPLANATION
Shalindra explains that the team applied quantization to fit large models onto the device, a process that usually reduces accuracy, but they managed to retain full accuracy, enabling high‑fidelity LLMs to run locally.
EVIDENCE
He describes the quantization process, noting that while it typically incurs an accuracy hit, their approach reached a point where there is no loss in accuracy [71-73].
MAJOR DISCUSSION POINT
Accurate model quantization
AGREED WITH
Sushant Kumar, Andrew Tergis, Device
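The idea Shalindra describes, fitting large models onto a device by quantizing them, can be shown in miniature. This is a toy sketch of symmetric 8-bit post-training quantization with a single per-tensor scale; the team's actual technique is not detailed in the session, so treat it purely as an illustration of why the reconstruction error is bounded and can be made negligible.

```python
# Toy symmetric int8 post-training quantization: map float weights to
# integers in [-127, 127] with one scale factor, then reconstruct and
# measure the round-trip error.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.001, 0.9, -0.63]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)

# Rounding to the nearest integer bounds the worst-case error by half
# the quantization step (scale / 2).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2
```

The worst-case reconstruction error is half the quantization step, so with well-scaled weights (and refinements such as per-channel scales or calibration) the downstream accuracy loss can be driven toward zero, which is consistent with the "no loss in accuracy" result described here.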
Device
1 argument · 113 words per minute · 11 words · 5 seconds
Argument 1
The prototype correctly identifies objects and responds in natural language
EXPLANATION
When asked about the items on the table, the device accurately recognized the candy wrappers and named the brands, demonstrating its ability to perform visual recognition and generate natural‑language responses.
EVIDENCE
The device responded, “The table has candy wrappers of Twix, Milky Way, and KitKat.” [79].
MAJOR DISCUSSION POINT
Object recognition and natural‑language output
AGREED WITH
Sushant Kumar, Andrew Tergis, Shalindra Pal Singh
Martin Tisne
2 arguments · 221 words per minute · 1206 words · 326 seconds
Argument 1
AI grounded in democratic values is essential for inclusive technology
EXPLANATION
Martin stresses that AI development must be guided by democratic principles to avoid cultural homogenisation and ensure that diverse cultures are represented, positioning democratic AI as a cornerstone for inclusive technology.
EVIDENCE
He remarks that ensuring AI does not squash cultures into a monoculture is one of the most important questions, and calls for a vision where AI respects cultural diversity, linking it to democratic values [241-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Democratic principles are framed as essential for inclusive AI, aligning with calls for public-interest and democratic AI imperatives [S24][S26][S35].
MAJOR DISCUSSION POINT
Democratic values in AI
AGREED WITH
Sushant Kumar, Ayah Bdeir
Argument 2
Collaborative AI grounded in democratic values is the shared vision for the future
EXPLANATION
Martin reiterates that the collective vision for AI is one built through collaboration and democratic values, emphasizing that such an approach will shape the future of AI globally.
EVIDENCE
He frames the future as one where collaborative AI, rooted in democratic values, guides development and implementation [241-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision of collaborative, democratically-guided AI is reiterated in discussions of democratic AI for the public good [S24][S35].
MAJOR DISCUSSION POINT
Shared democratic AI vision
AGREED WITH
Abhishek Singh, Anne Bouverot
Abhishek Singh
5 arguments · 182 words per minute · 2010 words · 659 seconds
Argument 1
Incorporating traditional knowledge prevents AI hallucinations and preserves culture
EXPLANATION
Abhishek illustrates how embedding indigenous and traditional knowledge into training data can reduce AI hallucinations and ensure cultural contexts are respected, using a real‑world example of tribal women annotating pest data.
EVIDENCE
He recounts a Netflix documentary where tribal women in Jharkhand use local knowledge to correctly label a pest, highlighting the importance of traditional knowledge for accurate AI models and cultural preservation [286-294][295-296].
MAJOR DISCUSSION POINT
Traditional knowledge in AI training
Argument 2
Community‑driven standards are needed to share data responsibly while respecting privacy
EXPLANATION
Abhishek argues that data sharing must be governed by standards that reflect community values and privacy protections, ensuring that data use serves public interest without violating individual rights.
EVIDENCE
He emphasizes the need for community-driven standards rooted in cultural belief systems to share data responsibly and protect privacy, noting that without such standards, commercial pressures could lead to misuse [299-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emphasis on data governance, consent, and responsible sharing aligns with concerns about data provenance and privacy in AI systems [S30][S31].
MAJOR DISCUSSION POINT
Responsible community data standards
AGREED WITH
Anne Bouverot
DISAGREED WITH
Anne Bouverot
Argument 3
Sovereignty means full control over the AI stack—from chips to applications—at national and community levels
EXPLANATION
Abhishek defines AI sovereignty as complete national control over every layer of the AI stack, from hardware to models and applications, and argues that India is progressing toward this goal while emphasizing the broader principle of self‑determination in AI.
EVIDENCE
He describes sovereignty as control over energy, data centers, chips, models, and applications, noting India’s current capabilities and future plans for chip fabrication, asserting that no other country fully controls the entire stack [368-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s drive to control the full AI stack, from hardware to models, reflects broader goals of AI sovereignty and open standards [S26][S31][S32].
MAJOR DISCUSSION POINT
AI sovereignty
DISAGREED WITH
Ayah Bdeir
Argument 4
Joint research, university, and business partnerships can set inclusive AI standards
EXPLANATION
Abhishek highlights the existing and future collaborations between India and France across research institutions, universities, and businesses, suggesting that such partnerships can help establish inclusive AI norms and standards.
EVIDENCE
He mentions ongoing collaborations with France Action Summit, upcoming joint activities at World Tech, and partnerships at university, research, business, and government levels to build inclusive AI solutions [401-403].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaborative initiatives, including India-France partnerships and open-source language projects, are highlighted as pathways to inclusive standards [S33][S34][S35].
MAJOR DISCUSSION POINT
India‑France collaborative standards
AGREED WITH
Anne Bouverot, Martin Tisne
Argument 5
Launch of the India AI Innovation Challenge to hack the open‑source prototype with cash prizes
EXPLANATION
Abhishek announces the India AI Innovation Challenge, inviting developers to build on the open‑source hardware prototype, with submissions opening on 25 February and cash prizes funded by Bhashini and Current AI.
EVIDENCE
He details the challenge, its open-source nature, the opening of submissions on 25 February, and the prize funding from both organisations, emphasizing the opportunity for innovators to create diverse solutions [419-424].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The challenge was announced with $110,000 prize funding and technical support for participants [S4][S5].
MAJOR DISCUSSION POINT
Innovation challenge launch
AGREED WITH
Ayah Bdeir
DISAGREED WITH
Ayah Bdeir
Anne Bouverot
3 arguments · 157 words per minute · 1107 words · 422 seconds
Argument 1
French policy quotas illustrate how legal mechanisms can protect cultural production
EXPLANATION
Anne explains that France has legislated quotas for French language content in radio, film, and music, which have helped preserve cultural heritage and ensure continued creation of French cultural works.
EVIDENCE
She references laws mandating percentages of music and movies in French, describing how these mechanisms fund French creators and protect cultural production without being hegemonic [262-274].
MAJOR DISCUSSION POINT
Legal quotas for cultural protection
Argument 2
Balancing open‑source tools with culturally sensitive data governance is a key tension
EXPLANATION
Anne points out the tension between using open‑source AI tools and respecting cultural data rights, noting that creators want control and compensation while societies need cultural data for AI training, and suggests mechanisms like opt‑out rights for artists.
EVIDENCE
She discusses the conflict between artists’ rights to compensation and the need for cultural data in AI, proposing opt-out options and distinguishing between historical data and living artists’ work [318-325].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tension between open-source AI development and cultural data rights is discussed, with proposals for opt-out mechanisms and data governance frameworks [S31][S30][S33].
MAJOR DISCUSSION POINT
Tension between open source and cultural data rights
AGREED WITH
Abhishek Singh
DISAGREED WITH
Abhishek Singh
Argument 3
France and India will co‑develop resilient, sustainable AI and alternative solutions
EXPLANATION
Anne describes a joint India‑France initiative to create resilient, sustainable AI systems and alternative solutions, emphasizing collaborative research, joint innovation, and the development of alternatives to dominant technologies.
EVIDENCE
She states that 2024 is the year of joint innovation between India and France, highlighting collaborative work on multilingual AI, resilient sustainable AI, and building alternative solutions together [391-400].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Joint India-France efforts to create resilient, sustainable AI systems are outlined in the collaborative AI session [S34].
MAJOR DISCUSSION POINT
India‑France joint AI development
AGREED WITH
Abhishek Singh, Martin Tisne
Announcer
1 argument · 152 words per minute · 40 words · 15 seconds
Argument 1
Announcer urges speakers to stay on stage to officially open the challenge
EXPLANATION
The announcer requests that Abhishek Singh and Ayah remain on stage so they can formally launch the Global Innovation Challenge, ensuring the ceremony proceeds as planned.
EVIDENCE
He says, “Abhishek Singh sir, I request you to stay on stage… And Aya, we’d love to have you on to launch the Global Innovation Challenge” [404-408].
MAJOR DISCUSSION POINT
Opening the innovation challenge
Agreements
Agreement Points
AI should be democratized and serve the public interest
Speakers: Sushant Kumar, Ayah Bdeir, Martin Tisne
AI must be democratized to serve every community
Current AI operates as a public‑interest partnership to build open, collaborative AI
AI grounded in democratic values is essential for inclusive technology
All three speakers stress that AI must be made accessible to all communities, built through public-interest partnerships and guided by democratic values, emphasizing real-world impact over pilots [1-4][12-15][135-141][241-244]
POLICY CONTEXT (KNOWLEDGE BASE)
The push for democratizing AI aligns with recent policy shifts that prioritize open-source AI as a national strategic priority, as highlighted by the US endorsement of open-source AI and the broader consensus on open-source versus proprietary approaches at IGF 2023 [S70][S57][S53].
Offline, multi‑model inference on a handheld device is crucial for reach
Speakers: Sushant Kumar, Andrew Tergis, Shalindra Pal Singh, Device
Offline inference runs four to five models simultaneously on the device
Demonstration shows a handheld device delivering offline, multilingual AI to users
Model quantization was achieved without sacrificing accuracy
The prototype correctly identifies objects and responds in natural language
Speakers highlight that the prototype can run several AI models locally without connectivity, thanks to quantization that preserves accuracy, and it successfully performs object recognition and language output [98-102][52-55][58-62][71-73][79]
POLICY CONTEXT (KNOWLEDGE BASE)
Offline, multi-model inference on handheld devices is advocated in recent demonstrations of open AI inference hardware that runs locally without network connectivity, underscoring the need for edge AI to reach underserved users [S64][S65].
Open‑source hardware prevents lock‑in and empowers communities
Speakers: Ayah Bdeir, Andrew Tergis, Abhishek Singh
Open‑source hardware prevents lock‑in and gives communities control over the stack
Uses Jetson hardware but is platform‑agnostic, enabling any model deployment
Launch of the India AI Innovation Challenge to hack the open‑source prototype with cash prizes
The panel agrees that open-source hardware, like Linux for software, avoids vendor lock-in, is hardware-agnostic, and enables a community-driven innovation challenge built on an open-source prototype [210-213][88-90][419-424]
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source hardware is framed as a safeguard against vendor lock-in, a view supported by discussions on the strategic importance of open-source AI for lower-resource settings and economic analyses of the open-source shift in the AI ecosystem [S53][S69][S70].
Multilingual support is essential for inclusive AI
Speakers: Ayah Bdeir, Amitabh Nag, Andrew Tergis
Personal frustration with Arabic voice‑recognition drives the push for multilingual AI
Bhashini now supports 22+ languages, adding tribal Bheeli and aims for 36 total
Demonstration shows a handheld device delivering offline, multilingual AI to users
Speakers underline the need for AI to work in many languages, citing personal experience with Arabic, Bhashini’s expanding language coverage, and a multilingual handheld demo [145-151][170-176][58-62]
POLICY CONTEXT (KNOWLEDGE BASE)
Multilingual AI support is emphasized in policy papers calling for language equity and decolonizing AI, noting that preserving linguistic diversity is essential for inclusive digital societies [S60][S62].
Data sharing must follow community‑driven standards and respect privacy
Speakers: Abhishek Singh, Anne Bouverot
Community‑driven standards are needed to share data responsibly while respecting privacy
Balancing open‑source tools with culturally sensitive data governance is a key tension
Both emphasize that data governance should reflect community values, protect privacy, and balance open-source development with cultural rights [299-300][318-325]
POLICY CONTEXT (KNOWLEDGE BASE)
Community-driven data sharing that respects privacy reflects GDPR-based consent requirements and emerging frameworks that combine property, liability, and inability rules to protect consumer data while enabling social good [S47][S48][S56].
India‑France collaboration can set inclusive AI standards and develop resilient solutions
Speakers: Abhishek Singh, Anne Bouverot, Martin Tisne
Joint research, university, and business partnerships can set inclusive AI standards
France and India will co‑develop resilient, sustainable AI and alternative solutions
Collaborative AI grounded in democratic values is the shared vision for the future
The speakers see joint India-France initiatives-research, university, business partnerships, and shared democratic values-as a pathway to build resilient, sustainable AI and set inclusive norms [391-403][391-400][241-244]
POLICY CONTEXT (KNOWLEDGE BASE)
The India-France partnership has been highlighted as a model for collaborative AI standards that enhance sovereignty and reduce dependence on dominant providers, aligning with bilateral initiatives discussed at recent summits [S67][S68].
Funding and prize incentives are vital to stimulate open‑source AI innovation
Speakers: Abhishek Singh, Ayah Bdeir
Launch of the India AI Innovation Challenge to hack the open‑source prototype with cash prizes
Call for higher prize funding and broader participant support
Both call for substantial financial incentives to encourage developers to build on the open-source prototype, suggesting higher prize amounts and robust support structures [419-424][425]
POLICY CONTEXT (KNOWLEDGE BASE)
Significant public funding commitments, such as India’s ₹10,300 Crore AI infrastructure program and prize-based incentives, illustrate the recognized need for financial mechanisms to accelerate open-source AI innovation [S49][S50][S51][S66].
Similar Viewpoints
Both stress that AI should be democratized and built through public‑interest collaborations to reach all communities [1-4][12-15][135-141]
Speakers: Sushant Kumar, Ayah Bdeir
AI must be democratized to serve every community
Current AI operates as a public‑interest partnership to build open, collaborative AI
Both highlight the technical feasibility of running high‑fidelity AI models offline via effective quantization [52-55][58-62][71-73]
Speakers: Andrew Tergis, Shalindra Pal Singh
Demonstration shows a handheld device delivering offline, multilingual AI to users
Model quantization was achieved without sacrificing accuracy
Both warn that AI systems must be transparent, respect privacy, and be guided by democratic principles to avoid hidden data collection [195-205][241-244]
Speakers: Ayah Bdeir, Martin Tisne
Proprietary embodied AI devices risk hidden data collection and lack of transparency
AI grounded in democratic values is essential for inclusive technology
Both emphasize the importance of expanding language coverage to address linguistic bias and improve user experience [145-151][170-176]
Speakers: Amitabh Nag, Ayah Bdeir
Bhashini now supports 22+ languages, adding tribal Bheeli and aims for 36 total
Personal frustration with Arabic voice‑recognition drives the push for multilingual AI
Both argue that data governance must balance open‑source innovation with cultural rights and privacy safeguards [299-300][318-325]
Speakers: Abhishek Singh, Anne Bouverot
Community‑driven standards are needed to share data responsibly while respecting privacy
Balancing open‑source tools with culturally sensitive data governance is a key tension
Unexpected Consensus
Alignment on democratic, privacy‑focused AI governance between hardware‑focused and policy‑focused speakers
Speakers: Ayah Bdeir, Martin Tisne
Proprietary embodied AI devices risk hidden data collection and lack of transparency
AI grounded in democratic values is essential for inclusive technology
It is unexpected that a hardware-centric advocate (Ayah) and a policy-centric facilitator (Martin) converge on the need for democratic, transparent AI that protects privacy, linking concerns about embodied devices with broader democratic values [195-205][241-244]
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus on democratic, privacy-focused AI governance is echoed in panels that advocate tiered access and contextual norms for open-source tools, balancing openness with security and aligning hardware and policy perspectives [S53][S54][S58][S56].
Overall Assessment

The panel shows strong consensus on democratizing AI through public‑interest, open‑source, and multilingual approaches; on the technical feasibility and importance of offline, multi‑model handheld devices; on the need for community‑driven data governance; and on fostering India‑France collaboration with adequate funding mechanisms.

High consensus across technical, policy, and cultural dimensions, indicating a unified vision that can drive coordinated actions for inclusive, offline, multilingual AI ecosystems.

Differences
Different Viewpoints
Amount and level of funding for the India AI Innovation Challenge
Speakers: Ayah Bdeir, Abhishek Singh
Call for higher prize funding and broader participant support
Launch of the India AI Innovation Challenge to hack the open‑source prototype with cash prizes
Ayah urges organizers to increase the prize amount beyond the announced $110,000 and to provide more support for participants [425], while Abhishek announces the challenge with a set prize pool but does not propose increasing it, implying the current funding is sufficient [419-424].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over the appropriate scale of funding reference the announced ₹10,300 Crore budget for India’s AI infrastructure and the broader goal of building a national foundational model within a tight timeline [S49][S50].
How to balance open‑source tools with culturally sensitive data governance
Speakers: Anne Bouverot, Abhishek Singh
Balancing open‑source tools with culturally sensitive data governance is a key tension
Community‑driven standards are needed to share data responsibly while respecting privacy
Anne highlights the tension between open-source AI development and protecting artists’ rights, proposing opt-out mechanisms and compensation for creators [318-325], whereas Abhishek stresses the need for community-driven standards that focus on public-interest use cases and privacy preservation, without specific compensation mechanisms [299-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Balancing open-source tools with culturally sensitive data governance draws on recommendations for tiered access systems and culturally contextual safety evaluations to ensure ethical deployment across diverse societies [S54][S55][S60].
Path to AI sovereignty and control over the technology stack
Speakers: Abhishek Singh, Ayah Bdeir
Sovereignty means full control over the AI stack—from chips to applications—at national and community levels
Open‑source hardware prevents lock‑in and gives communities control over the stack
Abhishek defines sovereignty as complete national control over every layer of the AI stack, from energy to chips and models [368-382], while Ayah argues that open-source hardware, akin to Linux, empowers communities by avoiding vendor lock-in, focusing on openness rather than national ownership [210-213].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on AI sovereignty reference calls for national-level coordination versus international standards, as well as bilateral efforts like the France-India partnership to diversify the AI supply chain and reduce reliance on dominant providers [S58][S67][S70].
Unexpected Differences
Whether formal legal quotas are needed to protect cultural production in AI
Speakers: Martin Tisne, Anne Bouverot
Collaborative AI grounded in democratic values is essential for inclusive technology
French policy quotas illustrate how legal mechanisms can protect cultural production
Martin asks if AI governance should include set norms similar to France’s media quotas [266-270], implying a need for formal regulation, whereas Anne responds that while mechanisms exist, she is uncertain about the necessity of a set norm and suggests existing incentives suffice [267-274]. This reveals an unexpected divergence on the role of statutory quotas in AI cultural policy.
Overall Assessment

The discussion reveals consensus on the importance of multilingual, inclusive AI, but diverges on implementation strategies: funding levels for innovation challenges, the balance between open‑source development and cultural data rights, and the route to AI sovereignty. These disagreements are centered on governance, financing, and control mechanisms rather than the core vision of AI for all.

Moderate – while participants align on the overarching goal of democratizing AI, they hold differing views on funding, data governance, and sovereignty. The implications suggest that achieving a unified approach will require negotiated frameworks that reconcile open‑source empowerment with adequate financing and culturally sensitive data policies.

Partial Agreements
All three speakers share the overarching goal of making AI accessible to all. Sushant frames it as democratization for entire populations [1][12-15], Ayah emphasizes a public‑interest partnership model to create open AI for the public good [135-141], and Andrew demonstrates a concrete offline, multilingual device as a means to that end [52-55][58-62]. Their disagreement lies in the primary pathway: policy‑driven public‑interest collaboration versus hardware‑centric, offline solutions.
Speakers: Sushant Kumar, Ayah Bdeir, Andrew Tergis
AI must be democratized to serve every community
Current AI operates as a public‑interest partnership to build open, collaborative AI
Demonstration shows a handheld device delivering offline, multilingual AI to users
Takeaways
Key takeaways
AI must be democratized to serve every community, with personal, local, multilingual capabilities at its core (Sushant Kumar, Amitabh Nag).
Current AI operates as a public‑interest partnership that builds open, collaborative AI infrastructure and releases it as a public good (Ayah Bdeir).
Bhashini’s mission is to empower mother‑tongue users, reduce linguistic bias, and preserve cultural knowledge through multilingual models (Amitabh Nag, Ayah Bdeir).
A working handheld prototype was demonstrated that runs a full offline pipeline (ASR → translation → LLM → TTS) on a Jetson platform, supporting 4‑5 models simultaneously with no loss of accuracy thanks to quantization (Andrew Tergis, Shalindra Pal Singh).
The device is platform‑agnostic, open‑source hardware, enabling any model deployment and encouraging community‑driven innovation (Andrew Tergis).
Language coverage is expanding: 22+ languages currently, aiming for 36, including tribal languages such as Bheeli (Amitabh Nag).
Concerns were raised about proprietary embodied AI, hidden data collection, and the need for transparent, privacy‑preserving, sovereign AI stacks (Ayah Bdeir, Abhishek Singh).
Balancing open‑source tools with culturally sensitive data governance requires community standards, consent mechanisms, and possibly third‑party trustees (Anne Bouverot, Abhishek Singh).
India–France collaboration is seen as a model for co‑creating resilient, sustainable, and culturally inclusive AI standards (Anne Bouverot, Abhishek Singh).
The India AI Innovation Challenge was announced to hack the open‑source prototype, with cash prizes and technical support from Bhashini and Current AI (Abhishek Singh).
Resolutions and action items
Launch of the India AI Innovation Challenge – open‑source prototype available, submissions open 25 Feb, prizes funded by Bhashini and Current AI.
Bhashini commits to provide quantization assistance, model enrichment, and technical support to challenge participants.
Current AI will continue to collaborate on hardware/software development, keep the platform open‑source, and explore mesh‑network or micro‑data‑center extensions.
Both organisations will work to expand language coverage (target 36 languages) and improve form‑factor (smaller, lower‑cost devices).
Commitment to pursue India‑France joint research on resilient, sustainable AI and to co‑develop inclusive AI norms.
Agreement to keep the device offline‑first, privacy‑preserving, and to make the stack publicly available for community innovation.
Unresolved issues
How to design and enforce community‑driven data‑sharing standards that respect privacy, cultural ownership, and compensation for creators.
Specific mechanisms for reciprocity – ensuring communities benefit from data use while protecting individual rights – remain undefined.
The appropriate level of regulatory or policy quotas for cultural AI (e.g., French‑style content quotas) was discussed but no concrete framework was agreed.
Funding level for the Innovation Challenge prize pool was questioned; no decision on increasing the amount was made.
Long‑term sustainability and scaling of the hardware (mass production, battery life, cost reduction) were identified as challenges without a concrete roadmap.
Definition and implementation of “AI sovereignty” at national and community levels remain conceptual, with no actionable plan presented.
Suggested compromises
Adopt privacy‑preserving, third‑party trusted platforms for data sharing, allowing community control while enabling research (Anne Bouverot, Abhishek Singh).
Introduce opt‑out rights for artists and creators, balancing cultural preservation with compensation concerns (Anne Bouverot).
Use open‑source hardware to avoid vendor lock‑in while permitting regulated data governance through community standards (Ayah Bdeir).
Combine open‑source toolkits with selective, consent‑based data contributions to satisfy both open‑source development and cultural data protection.
Thought Provoking Comments
I’m concerned about this new frontier of embodied AI… devices that continuously record our data, send it to the cloud, are trained on Western languages, and create a hardware lock‑up that makes us dependent on proprietary stacks.
She highlights a fundamental risk of current AI deployments—loss of privacy, cultural bias, and technological lock‑in—framing it as a sovereignty issue rather than just a technical problem.
This comment shifted the conversation from showcasing the prototype to a broader debate on privacy and control. It prompted participants to discuss open‑source hardware as a countermeasure and led to deeper analysis of data sovereignty, influencing later remarks by Amitabh Nag and Abhishek Singh about community rights and national AI sovereignty.
Speaker: Ayah Bdeir
You can improve the device itself—lower cost, better battery, smaller size—and also connect many of them in a mesh network or build a stationary micro‑data‑center with solar power, opening endless application possibilities.
She moves the discussion from current limitations to a forward‑looking vision, illustrating how open hardware can evolve into scalable, sustainable infrastructure.
Her optimism expanded the scope of the dialogue, inspiring Andrew and Shalindra to emphasize the prototype’s flexibility and prompting the audience to imagine diverse use‑cases (farmers, toys, tourism). It set the stage for the later announcement of the India AI Innovation Challenge.
Speaker: Ayah Bdeir
We recently digitized one of the tribal languages, Bheeli, which doesn’t have a script, adding it to our system.
This concrete example demonstrates the project’s commitment to linguistic inclusivity beyond major languages, tackling the challenge of undocumented languages.
The remark deepened the conversation on language breadth, leading Sushant to ask about future expansion and prompting further discussion on cultural preservation and the importance of covering underserved communities.
Speaker: Amitabh Nag
In France we have laws that mandate a certain percentage of music and film production in French, which helps preserve cultural patrimoine. Should AI have similar norms to ensure cultural representation?
She introduces a policy perspective, linking existing cultural quotas to potential AI governance mechanisms, thereby bridging technology and legislative frameworks.
This comment opened a new line of inquiry about regulatory approaches to AI, causing Martin and others to explore the balance between open‑source innovation and cultural protection, and influencing the later discussion on reciprocity and data governance.
Speaker: Anne Bouverot
A tribal woman annotating pest data refused to label a worm as a pest because, in her local knowledge, it helps the forest. This shows the value of indigenous knowledge in AI training data.
The anecdote powerfully illustrates how local, undocumented expertise can correct AI misconceptions, emphasizing the need for culturally grounded data.
The story reinforced the argument for incorporating community‑sourced data, prompting further dialogue on data sovereignty, reciprocity, and the ethical use of culturally specific information.
Speaker: Abhishek Singh
What is the world you would like us to live in when it comes to the intersection of AI and culture? If we get it right, what does it look like in five or ten years?
A broad, visionary question that invited participants to articulate long‑term aspirations, moving the discussion from technical details to societal impact.
This question acted as a turning point, eliciting forward‑looking responses from Ayah, Amitabh, and Anne, which shaped the latter part of the session toward vision, policy, and collaborative action, culminating in the announcement of the innovation challenge.
Speaker: Martin Tisne
Sovereignty in AI means having complete control over all five layers of the stack—energy, data centers, chips, models, applications—so no external entity decides for us.
He frames AI sovereignty in concrete technical terms, linking national autonomy to every layer of the technology stack, a nuanced expansion of the sovereignty concept.
This comment reframed earlier privacy concerns into a national strategy discussion, influencing subsequent remarks about India’s capabilities and the need for alternative solutions, and tying back to the earlier theme of open‑source hardware as a path to sovereignty.
Speaker: Abhishek Singh
Overall Assessment

The discussion began with a product demonstration but quickly pivoted to deeper themes of inclusivity, privacy, and sovereignty, driven by a handful of pivotal remarks. Ayah Bdeir’s warnings about embodied AI and her hopeful vision of open, mesh‑connected devices opened a dual narrative of risk and opportunity, steering the conversation toward systemic concerns. Amitabh Nag’s concrete example of digitizing a script‑less tribal language and Abhishek Singh’s tribal annotation story grounded the abstract debate in real‑world cultural stakes, reinforcing the need for diverse data. Anne Bouverot’s policy analogy introduced a regulatory dimension, while Martin Tisne’s visionary question broadened the dialogue to long‑term societal outcomes. Finally, Abhishek Singh’s articulation of AI sovereignty tied together privacy, control, and national strategy, setting the stage for the announced India AI Innovation Challenge. Collectively, these comments shifted the session from a showcase to a multifaceted exploration of how open, multilingual, and locally controlled AI can be built, governed, and scaled.

Follow-up Questions
How can the device support deployment of any AI model, ensuring compatibility and optimization across diverse architectures?
Andrew highlighted the goal of enabling any model to run on the device, indicating a need for research into model compatibility, quantization, and runtime optimization.
Speaker: Andrew Tergis
How to maintain model accuracy after quantization across all supported languages?
Shalindra mentioned quantization typically reduces accuracy but claimed no loss in their case, suggesting further validation and research are needed.
Speaker: Shalindra Pal Singh
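The accuracy question above can be made concrete. Below is a minimal, purely illustrative sketch (plain Python, not the project's actual pipeline) of symmetric post-training int8 quantization, showing the bounded per-weight rounding error that is the usual source of accuracy loss after quantization:

```python
import random

# Illustrative sketch only - not the Bhashini/device pipeline.
# Symmetric per-tensor int8 quantization of a stand-in weight vector.
random.seed(0)
weights = [random.gauss(0.0, 0.05) for _ in range(4096)]  # fake float weights

scale = max(abs(w) for w in weights) / 127.0        # map largest weight to +/-127
quantized = [max(-127, min(127, round(w / scale))) for w in weights]
dequantized = [q * scale for q in quantized]        # what inference actually uses

max_error = max(abs(w - d) for w, d in zip(weights, dequantized))
assert max_error <= scale / 2 + 1e-12               # error bounded by half a step
```

Whether this per-weight error translates into a measurable accuracy drop depends on the model and the evaluation set, which is why a claim of "no loss" would need validation per supported language.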
How can the language coverage be expanded to include additional languages, especially tribal languages like Bheeli that lack scripts?
Amitabh noted recent digitization of a tribal language and the goal to increase from 22 to 36 languages, indicating research into data collection, script creation, and model training for low‑resource languages.
Speaker: Amitabh Nag
What processes are needed to enrich models with comprehensive glossaries and contextual data such as the 1.6 million place names from the Survey of India?
Amitabh identified the need for extensive lexical and contextual enrichment, requiring research into data acquisition, curation, and integration into multilingual models.
Speaker: Amitabh Nag
How can the device’s form factor, cost, battery life, and aesthetics be improved for broader last‑mile adoption?
Ayah discussed opportunities to shrink size, lower price, extend battery, and make the device more attractive, pointing to hardware design and manufacturing research.
Speaker: Ayah Bdeir
How can multiple devices be networked in a mesh to enable distributed inference and larger model execution?
Ayah suggested mesh networking as a future capability, requiring research into communication protocols, load balancing, and distributed AI inference.
Speaker: Ayah Bdeir
What are the design considerations for creating stationary, solar‑powered micro‑data‑center versions of the device?
She mentioned a larger, solar‑powered version, indicating research into power management, cooling, and scaling of on‑device AI workloads.
Speaker: Ayah Bdeir
How should data sovereignty and community rights be protected when community‑generated data is used to train AI models?
Martin asked about community involvement and rights; Abhishek emphasized the need for standards rooted in local culture, highlighting a research gap in governance frameworks.
Speaker: Martin Tisne, Abhishek Singh
What reciprocity mechanisms can ensure that communities benefit from the use of their data in AI applications?
Both participants raised the issue of fair benefit sharing, indicating a need for policy and economic models that link data contribution to tangible community gains.
Speaker: Martin Tisne, Abhishek Singh, Anne Bouverot
How can a balance be struck between open‑source AI development and controlled governance of cultural data?
The discussion highlighted tension between openness and cultural protection, calling for research into hybrid governance models.
Speaker: Martin Tisne, Anne Bouverot
Should AI systems be subject to normative quotas for cultural content similar to French media regulations, and how might such quotas be designed?
Anne was asked whether cultural quotas are needed for AI, suggesting a research area in policy design for cultural representation in AI outputs.
Speaker: Martin Tisne, Anne Bouverot
What frameworks are needed to share health‑related data in a privacy‑preserving way that still enables public‑interest research?
The conversation about health data highlighted the need for technical and legal frameworks that balance privacy with societal benefit.
Speaker: Martin Tisne, Abhishek Singh
What does AI sovereignty entail across the five layers of the AI stack, and how can nations achieve full control over these layers?
Abhishek outlined a vision of sovereign AI control, indicating research into national strategies for chips, data centers, models, applications, and energy.
Speaker: Abhishek Singh
What concrete opportunities exist for France‑India collaboration on culturally inclusive AI norms and joint research?
Both speakers discussed bilateral cooperation, pointing to a need for joint policy, research agendas, and funding mechanisms.
Speaker: Martin Tisne, Anne Bouverot, Abhishek Singh
How can embodied AI devices be designed to avoid becoming surveillance tools and ensure user privacy?
Ayah expressed concern about devices that record and recognize faces without consent, indicating research into privacy‑by‑design hardware and software.
Speaker: Ayah Bdeir
How can open‑source hardware platforms be structured to prevent lock‑in by major tech companies and foster broad innovation?
She compared the impact of open platforms like Linux to current AI hardware, suggesting research into licensing, modularity, and community governance.
Speaker: Ayah Bdeir
What methods can effectively involve local communities in data annotation while safeguarding their rights and ensuring fair compensation?
Abhishek highlighted the need for community standards and compensation mechanisms, indicating research into participatory data collection models.
Speaker: Abhishek Singh

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Multistakeholder Partnerships for Thriving AI Ecosystems

Session at a glance
Summary, keypoints, and speakers overview

Summary

This discussion focused on the critical role of multi-stakeholder partnerships in developing and deploying responsible AI for sustainable development, featuring perspectives from government, private sector, and implementation organizations. The panel was moderated by Robert Opp from the UN Development Program and included representatives from Germany’s Federal Ministry for Economic Cooperation and Development, Salesforce, Wadhwani AI Global, and Tata Consultancy Services.


Dr. Bärbel Kofler emphasized that governments must create frameworks and governance structures to ensure AI benefits are accessible to all, noting significant gaps in venture capital distribution and data center resources between the Global North and South. She stressed the need for investment in skills training and open-source solutions to prevent AI from widening existing inequalities. Arundhati Bhattacharya from Salesforce highlighted how technology democratization enabled India’s successful financial inclusion program, demonstrating that adoption occurs naturally when technology improves people’s lives, but requires proper policy frameworks and infrastructure to ensure ethical implementation.


Nakul Jain from Wadhwani AI Global shared practical examples of successful AI deployments in education and healthcare, emphasizing that building technology is the easiest part while creating sustainable institutional mechanisms and partnerships is the real challenge. He described their role as conveners who facilitate collaboration between governments, technical partners, and field-level implementers. Dr. Sachin Lodha from TCS discussed the importance of sensing infrastructure and data ecosystems, highlighting collaborative initiatives like quantum valleys and responsible AI evaluation platforms.


The discussion concluded with promotion of the Hamburg Declaration on AI for Sustainable Development, which requires signatories to make concrete, measurable commitments rather than just endorsing principles. The panelists agreed that effective AI deployment for development requires coordinated action across all sectors of society, with each stakeholder bringing unique capabilities while working toward shared objectives.


Keypoints

Major Discussion Points:

Multi-stakeholder partnerships are essential for responsible AI development: The discussion emphasized that no single entity – whether government, private sector, or civil society – can address AI challenges alone. Each stakeholder brings unique capabilities, with governments providing frameworks and governance, private sector offering innovation and scaling, and organizations like Wadhwani AI serving as conveners and integrators.


Addressing the AI equity gap and power imbalances: Speakers highlighted significant disparities in AI access, with only 17% of venture capital reaching regions representing 90% of the global population, and only 0.1% of data center capacity located in the Global South. The focus was on democratizing AI technology rather than just innovating it.


Moving from principles to concrete, measurable actions: The Hamburg Declaration on AI for Sustainable Development was presented as a model for translating high-level commitments into specific, trackable outcomes. Examples included Germany’s commitment to train 160,000 people (achieving 190,000) and creating AI building blocks for climate action.


Real-world implementation challenges and successes: Panelists shared specific examples of successful AI deployments, such as India’s financial inclusion program using biometric identification and mobile networks, and educational AI tools in Gujarat that required collaboration between government, technical partners, and implementation organizations.


Infrastructure and capacity building as foundational requirements: Discussion covered the need for sensing infrastructure, data accessibility, compute resources, and skills training. Emphasis was placed on creating enabling environments where innovation can flourish, including open-source approaches and digital public infrastructure.


Overall Purpose:

The discussion aimed to explore how different stakeholders can collaborate effectively to ensure AI development serves sustainable development goals while being equitable and responsible. The session sought to move beyond theoretical frameworks toward practical strategies for implementing responsible AI at scale, using the Hamburg Declaration as a concrete example of collective action.


Overall Tone:

The tone was constructive and solution-oriented throughout, with speakers building on each other’s points rather than debating. It began with high-level policy perspectives and gradually became more technical and specific as panelists shared real-world experiences. The atmosphere remained collaborative and optimistic, with speakers demonstrating genuine enthusiasm for the potential of responsible AI partnerships while acknowledging significant challenges that need to be addressed.


Speakers

Speakers from the provided list:


Robert Opp – Representative from the UN Development Program, moderator of the panel discussion


Bärbel Kofler – Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, member of the Bundestag since 2004, champion of human rights and development


Arundhati Bhattacharya – Chairperson and CEO of Salesforce South Asia, former first woman chairperson of the State Bank of India, leader with over four decades of experience in digital transformation


Nakul Jain – CEO and Managing Director of Wadhwani AI Global, mission-driven technology leader working on AI solutions for underserved communities in the global south


Speaker 1 – Chief Scientist and Head of Research at Tata Consultancy Services (identified as Dr. Sachin Lodha based on context), leader in cybersecurity and privacy research, heads work on trustworthy AI and quantum resilience


Audience Member 1


Audience Member 2


Audience Member 3 – Undergraduate student of economics at the University of Delhi


Additional speakers:


None – all speakers mentioned in the transcript are accounted for in the provided speaker list.


Full session report
Comprehensive analysis and detailed insights

This comprehensive discussion on multi-stakeholder partnerships for responsible AI development brought together diverse perspectives from government, private sector, and implementation organisations to explore how artificial intelligence can serve sustainable development goals whilst addressing equity challenges. Moderated by Robert Opp from the UN Development Programme, the panel featured Dr. Bärbel Kofler from Germany’s Federal Ministry for Economic Cooperation and Development, Arundhati Bhattacharya from Salesforce South Asia, Nakul Jain from Wadhwani AI Global, and Dr. Sachin Lodha from Tata Consultancy Services.


The AI Equity Challenge: From Innovation Gap to Power Gap

Robert Opp opened the discussion by highlighting UNDP’s concern that without responsible deployment, AI could exacerbate existing inequalities rather than serve sustainable development goals. This framing set the stage for Dr. Kofler’s crucial reframing of the challenge: “It’s not an innovation gap, it’s a power gap.” She provided stark evidence of this disparity, noting that only 17% of global venture capital reaches regions representing over 90% of the world’s population, while merely 0.1% of data centre capacity exists in the Global South.


This perspective challenged common assumptions about developing countries lacking innovative capacity. Instead, Dr. Kofler argued that innovative people and ideas exist globally, but access to enabling environments—including funding, infrastructure, and markets—remains concentrated in the Global North.


Government’s Role: Creating Enabling Frameworks

Dr. Kofler outlined the government’s responsibility for creating frameworks and legal structures that make AI advantages accessible to all citizens. This includes ensuring smallholder farmers can utilise AI tools, citizens can interact with government services in their own languages, and doctors in remote areas can access advanced diagnostic capabilities.


Government investment extends to skills training, vocational education, and university research programmes that connect academic work with small and medium-sized enterprises. She emphasised ensuring research outcomes, datasets, and AI tools remain available through open-source approaches to prevent concentration of benefits among large players.


Private Sector Democratisation: The India Financial Inclusion Case Study

Arundhati Bhattacharya demonstrated how technology democratisation can transform populations when properly implemented. Drawing from her experience leading India’s largest bank during digital transformation, she articulated a fundamental principle: “Any improvement in technology, if it is not really democratised, then it doesn’t really have an impact.”


Her detailed case study illustrated this principle in action. India’s financial inclusion programme struggled for 15 years until technology enablers—biometric identification through Aadhaar and widespread mobile network coverage—made it possible to reach 600,000 villages directly. This allowed the government to transfer subsidies directly to beneficiaries, eliminating intermediaries who previously captured 87% of intended benefits.


The subsequent introduction of the Unified Payments Interface (UPI) enabled even small shopkeepers to accept digital payments, creating cash flow records that made them eligible for formal credit. This transformed loan terms from 10% per month to 7% per year, demonstrating how technological infrastructure can create transformative economic opportunities.
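The magnitude of that shift is worth spelling out, since monthly rates compound: 10% per month is far more than 120% per year. A quick back-of-the-envelope check (standard compound-interest arithmetic, added here only for illustration):

```python
# Effective annual cost of informal credit at 10% per month, compounded,
# versus the 7% per year formal-credit rate quoted in the session.
monthly_rate = 0.10
effective_annual = (1 + monthly_rate) ** 12 - 1   # compounding over 12 months

print(f"10% per month ~= {effective_annual:.1%} per year, vs 7.0% per year")
# roughly 213.8% per year - about thirty times the formal-credit rate
```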


Bhattacharya emphasised that adoption occurs naturally when technology genuinely improves people’s lives, noting that “adoption is not going to be a problem” when people see real benefits in their daily lives.


Corporate Responsibility and Self-Regulation

From the private sector perspective, Bhattacharya described Salesforce’s “1-1-1 model,” contributing 1% of equity, products, and employee time to community initiatives. The company has trained 3.9 million “Trailblazers” in India—people skilled in Salesforce technology—representing the second-largest such community globally after the United States.


She advocated for corporate self-regulation through dedicated offices for humane and ethical technology use, arguing that proactive self-governance is preferable to heavy-handed external regulation that might stifle innovation.


Implementation Realities: Beyond Technology Development

Nakul Jain provided crucial insight about AI deployment: “Building technology, at least for an application organisation like ours, is the most easiest part of the entire spectrum. It’s everything around it that becomes tedious for us to get done.” This observation highlighted that successful deployment depends more on institutional mechanisms and partnerships than technological sophistication.


Jain shared specific examples of successful collaborations. In education, Wadhwani AI Global developed an oral reading fluency assessment tool for students in partnership with the Gujarat government, which provided data access, policy ownership, and integration with existing educational systems. Rather than creating standalone applications, they embedded AI capabilities within workflows teachers already used.


Healthcare partnerships required collaboration with evaluation agencies like the Indian Council of Medical Research (ICMR) from project inception, ensuring success criteria and evaluation parameters were established from day one. This proved essential for building trust in AI systems impacting human health and safety.


Technical Infrastructure and Practical Considerations

Dr. Sachin Lodha emphasised that effective AI deployment requires comprehensive “sensing infrastructure” as part of digital public infrastructure. Using air pollution monitoring as an example, he explained that better AI outcomes depend on deploying sufficient sensors across regions, making them cheaper and more effective, then using AI to analyse resulting data streams.
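As a toy illustration of that sensing-plus-analysis idea (hypothetical data and thresholds, not any TCS system), even a simple rolling baseline over low-cost PM2.5 sensor readings can flag pollution spikes:

```python
from statistics import mean, stdev

# Hypothetical hourly PM2.5 readings (ug/m3) from one low-cost sensor.
readings = [42, 45, 41, 44, 43, 46, 40, 44, 118, 45, 43, 120, 44]

window = 6          # hours of history used as the baseline
alerts = []
for i in range(window, len(readings)):
    baseline = readings[i - window:i]
    mu, sigma = mean(baseline), stdev(baseline)
    if readings[i] > mu + 3 * sigma:   # crude spike threshold
        alerts.append(i)

# The first spike (index 8) is caught; once it enters the baseline it
# inflates sigma and masks the second spike - a known weakness of this
# naive scheme that real deployments would address with robust statistics.
```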


Both Jain and Loda challenged assumptions about Large Language Models versus traditional machine learning approaches. Jain noted that in resource-constrained environments—where mobile phones are basic and internet connectivity limited—traditional ML models often work better than LLMs and are more practically deployable.


Lodha reinforced this perspective, explaining that whilst LLMs represent significant advancement, industry-specific and context-specific solutions still require substantial development work, particularly given data fragmentation within enterprises.


Concrete Action Through the Hamburg Declaration

Dr. Kofler emphasised moving beyond performative commitments through the Hamburg Declaration on AI for Sustainable Development: “We are not signing something we don’t want to do and then we have another year of time and applauding ourselves for signing something. We want to come up really with concrete steps.”


The German ministry exceeded their specific commitments: training 190,000 people instead of 160,000, delivering 15 AI building blocks for climate action instead of 12, and creating 55 datasets instead of 30. These translated into concrete projects including satellite data analysis for Kenyan farmers, cervical cancer detection systems in Cambodia, and support for India’s Bhashini project to include multiple subcontinental languages in AI frameworks.


Future Challenges and Opportunities

The discussion identified critical gaps requiring future partnerships. Jain highlighted the need for a global repository and marketplace of AI solutions enabling startups in one country to deploy innovations in others. Currently, significant barriers prevent a startup in India from selling solutions in Ethiopia, despite having potentially relevant tools.


This extends to evaluation and quality assurance, where the absence of regional AI evaluation hubs creates uncertainty about how solutions assessed in one country might perform in different contexts. Jain proposed creating shared playbooks, governance frameworks, and talent pools that could be leveraged across borders.


Addressing Market Concentration Concerns

An audience member raised concerns about whether AI democratisation would genuinely benefit small enterprises or primarily advantage large technology companies. Bhattacharya addressed this by emphasising that sustainable value creation depends on user adoption and practical utility rather than market capitalisation. She advised focusing on work satisfaction and improving people’s living standards rather than being driven by valuation metrics.


Multi-Stakeholder Coordination Model

The panel revealed sophisticated understanding of how different stakeholders must collaborate whilst maintaining distinct roles. Governments create enabling frameworks and digital public infrastructure. Private sector organisations bring scaling capabilities and innovation capacity whilst embracing social responsibility. Implementation organisations serve crucial convening functions, facilitating collaboration and ensuring technological capabilities translate into practical impact.


Academic institutions provide evaluation capabilities and technical expertise that build trust in AI systems. International organisations contribute coordination mechanisms and platforms for collective action.


Conclusion

This discussion demonstrated that responsible AI for sustainable development requires moving beyond theoretical frameworks toward practical, measurable collaboration. The panel’s insights revealed that whilst AI technology development may be straightforward, creating institutional mechanisms and implementation ecosystems for sustainable impact represents the real challenge.


The reframing from innovation gaps to power gaps suggests solutions must address structural inequalities in resource access rather than simply promoting technology transfer. Evidence from successful implementations like India’s financial inclusion programme demonstrates that democratised technology can transform populations when supported by appropriate frameworks and partnerships.


The Hamburg Declaration model of concrete commitments over abstract principles provides a pathway for translating intentions into measurable outcomes. The path forward requires continued collaboration across stakeholders, with each maintaining distinct roles whilst working toward shared objectives of equitable, responsible AI deployment that genuinely serves sustainable development goals.


Session transcript
Complete transcript of the session
Robert Opp

From the perspective of the UN Development Program, certainly we see a concern with what is happening in the development and adoption of AI in that there’s no question and we are very convinced that AI can have such a powerful impact on sustainable development in a positive way. It can help us really close some of those gaps that we see in persistent development challenges. However, we know that the way that it is evolving now is not equitable. And if we do not have a kind of commitment to responsible use of AI, we fear that the AI equity gap could actually get worse or inequality could get worse. And more so, it also can be harmful in some cases if we’re not applying AI responsibly.

So we want to get into then the point about… How do we actually address some of those challenges? How do we get some of the… measures in place or the kinds of commitments, principles that we need to have as an overall community, especially in the international development community, but beyond in terms of private sector and others as well. In terms of what are our commitments to making sure that we are deploying and using AI in a responsible way and building AI ecosystems. So this does tie in with the process that has been a feature of previous AI summits as well as the Hamburg Sustainability Conference that is hosted every year in the city of Hamburg and sponsored by the government of Germany with other partners like UNDP in the city of Hamburg.

And as part of that process, we have introduced a declaration on AI for sustainable development, the Hamburg Declaration. And so we’ll talk about that a little bit in the context of this conversation after exploring some of the kind of opening thoughts around multistakeholder partnerships in general. So, I’m going to start off and introduce our panelists and then we’ll go through a couple of rounds of questions. Be thinking about your questions as well because I think that we’ll have time to go to you as well for a Q&A and so happy to get your thoughts and get some interaction with the panel as part of this session. So, with that, I have a very distinguished panel to introduce to you and it’s a real pleasure to have them here today.

I would like to introduce, sitting on my left, Dr. Bärbel Kofler, who is the Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development. She’s a long-standing champion of human rights and development, a member of the Bundestag since 2004, and she plays a key role in shaping Germany’s global development policy. Thank you. Yes, exactly. We are also honored to have Ms. Arundhati Bhattacharya, who’s the chairperson and CEO of Salesforce South Asia, a transformative leader with over four decades of experience. She made history as the first woman chairperson of the State Bank of India, where she led one of the country’s most significant digital transformation journeys. She oversees a huge part of Salesforce business in this region and beyond, shaping how technology partnerships drive innovation, skills and inclusive growth.

We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadhwani AI Global. Nakul is a mission-driven technology leader. He works to advance AI solutions for underserved communities across the global south. And his team has done 40 deployments of AI reaching more than 150 million people in use cases such as healthcare, agriculture, and education. And we’re also joined by Dr. Sachin Lodha, who’s the chief scientist and head of research at Tata Consultancy Services. He’s a leader in cybersecurity and privacy research. He heads Tata Consultancy Services’ work on trustworthy AI, quantum resilience, cloud de-risking, and privacy by design. And that is all about translating cutting-edge research into real-world, award-winning innovation.

Please join me in welcoming our panelists. Okay, so let us start. Dr. Kofler, I’d like to start with you, please. You know, you represent the government perspective here on this panel. What do you think are the difficulties with the distinct roles of government, but also some of the other players like… private sector, civil society and international organizations, the roles that they play in building AI ecosystems that are both innovative and inclusive.

Bärbel Kofler

I used this one in the meantime. Thank you for the question and thank you for having me as a representative of the government. But I'm very happy to be on a panel with scientists, with academia, and with the private sector, because at the end of the day, artificial intelligence, and how we really shape the future with it, depends on multistakeholder engagement. There has to be engagement from all parts of society, including civil society, because it's a broad outreach into our societies we have to undertake. So that's why I think it's very important to discuss it. You were pointing out challenges and advantages of artificial intelligence, and that is where the role of governments comes in.

Because, yes, there are tremendous advantages of artificial intelligence. In my sphere of politics, we discuss how we can use AI to better detect diseases such as cancer. We discuss climate change and how we can make predictions applicable for everybody on the ground. We discuss administration and the advantages AI could offer there. But at the end of the day, all those hopes will only become reality if we shape this within a framework, a legal framework, a framework we coordinate with partners around the globe and within society, which makes all those advantages accessible for everybody, for all parts of society: for any citizen to address their government through an AI conversation in their own language, for example; for smallholder farmers, if they get the chance to make use of it; for doctors in remote areas who can then really detect diseases with the new technology.

So we have to close the gap. And I would say it's not an innovation gap, it's a power gap, because innovative people exist around the globe. Ideas are created around the globe, in all spheres of society. I was here on a panel, at a session, with these brilliant young Indian startup founders. Brilliant people who are creating startups. And that is possible all over the globe, but the environment has to be there. And if you look at how venture capital, for example, is distributed, we know that only about 17% of venture capital goes to those parts of the world that represent more than 90% of the people.

So we feel a power gap at the end of the day. If you look at where data center resources are, it's even worse: almost all the available data center capacity is in the Global North, and only 0.1% is somewhere in the Global South. So there is a big gap, and we have to overcome that gap, we have to close it. And I think that's something governments have to do, where their role is to create an enabling environment: when it comes to energy consumption, when it comes to an enabling environment for research, when it comes to skills training for everybody. I learned in one of the sessions that we should change our mindset and everybody should get a job.

And I think that's something that we should do. But to do so, we have to invest in the mindset and in the skills of all users. We have to work on vocational training and university training to make research, education, and the needs of even small and medium-sized enterprises heard and linked together, to put small and medium-sized enterprises in a position to also benefit from this new technology.

Not only the big players. So all those things need a framework and need governance. And we have to make sure that the outcomes, the research, the results, the data sets are available for everybody. That's why we are also investing in open source. It's something where we are very much aligned with the Indian ideas, because if we don't do so, the first thing, the advantages are

Robert Opp

Okay, now it's working. So framework and governance: important factors to have in multi-stakeholder partnerships. Arundhati, maybe let me turn to you with a very similar question, but taking this from the private sector perspective. How do you think the responsibility should be distributed between the different stakeholders? What do industry or private sector companies, tech companies in this case, uniquely bring to the table? And where is partnership essential?

Arundhati Bhattacharya

Thank you, and good morning, every one of you. So I have been very fortunate in that I have worked with two organizations. The one that I worked with initially was the State Bank of India, which is the largest bank in India and also the bank that really and truly spearheaded the financial inclusion program of the current government, the PM Jan Dhan Yojana. And while doing that, I realized that any improvement in technology, if it is not really democratized, doesn't really have an impact. If you want it to have an impact, you need to democratize technology. That's the first thing. The second thing is that currently I work at Salesforce. And Salesforce, again, was set up with the intention that business is a platform for change.

We have what we call a one-by-one-by-one mission, which is that we contribute 1% of our profits or equity, 1% of our products, and 1% of our time to the community, to the non-profit community. Of course, in India, it is 2% of profits, because that's what the law demands. And while doing that, we ensure that the non-profit sector not only has access to our products but also knows how to use them, and uses them to the best of their abilities. Along with that, we also do a lot of work in skilling various people. We call people who are trained in the Salesforce technology trailblazers, and India has 3.9 million of them, the second largest number after the US.

And this is a community that has been literally nurtured by us. So it's not only a question of ensuring that we are getting our products out to the community, to the non-profit sector, and to the various enterprises, where we obviously market our products to them and then help them implement them in their organizations. We are doing a lot of work on the skilling front as well, because we feel that is something that's very, very essential. Without that, India will not really be able to take advantage of all that technology brings to it. But one thing is very clear in our heads, and that is that a populous nation like India can never really have the standard of living that it deserves unless technology is a part of the play.

We tried this financial inclusion initiative for 15 long years before we actually succeeded in 2014. And the reason for that was technology. Because by that time, the unique identification authority had given us the biometric marker, the unique identification number, which helped us do the KYC. And second, the spread of the mobile networks enabled us to actually reach the 600,000 villages which make up India; before this we never had that connectivity. It was because of this that these people then got accounts. 97% of these accounts were without any balances, and a lot of people asked us: was this merely a ploy, was this merely some kind of action without any meaning? But very soon we found that those accounts started getting money, because the government could then target the subsidies they were giving to below-poverty-line people directly to the citizens, without having to go through any middlemen. In fact, some politician at some point had said that only 13 paise of every one rupee of subsidy actually reaches the person who needs it. Here, when the money started flowing directly into the account, the whole of the rupee was going to the person who needed the subsidy.

Now, over and above that, we then had UPI, the Unified Payments Interface. And what did that do? It enabled even small customers and small shopkeepers to start actually taking money in digital form. And because they were taking money in digital form, these accounts of theirs, which had been opened, started showing a cash flow. And the moment an account has cash flow, bankers become interested in lending you money, because they know that there is a cash flow to back it up, to enable you to repay it. So here is a person who was taking a loan of, say, 2,000 rupees, paying 10% a month, who suddenly becomes eligible for that 2,000 rupees at 7% a year.
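As a back-of-the-envelope sketch of the rate gap in that anecdote (assuming simple interest, which the speaker does not specify; the variable names are ours, purely for illustration):

```python
# Hypothetical illustration of the interest-rate gap mentioned above:
# an informal lender charging 10% per month vs. a bank loan at 7% per year,
# on the same 2,000-rupee principal, using simple (non-compounding) interest.
principal = 2000.0
informal_monthly_rate = 0.10   # 10% per month
bank_annual_rate = 0.07        # 7% per year

# Simple interest paid over one year in each case.
informal_interest_per_year = principal * informal_monthly_rate * 12
bank_interest_per_year = principal * bank_annual_rate

print(informal_interest_per_year)  # 2400.0 -> more than the principal itself
print(bank_interest_per_year)      # 140.0
```

Even without compounding, the informal rate costs over seventeen times as much per year, which is the difference formal credit access makes.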

Imagine the difference it can make in the life of that person. And this is all on account of technology. AI is not going to be anything different. AI is going to solve a lot of problems which otherwise, in a populous nation like ours, cannot be solved. And in order to solve them, we need to understand that people are eager to take it up. They will take the technology because it improves their lives. But in order for them to take it, it's up to the policy makers to make those interventions to ensure that they are not being taken for a ride; that whatever is being offered to them is being offered in an ethical manner; that we are creating the right kind of infrastructure to enable them to actually access it.

That the right to privacy of their data is properly maintained. So I think the policy makers, the people who are conducting all of this, have a far bigger responsibility. Because adoption is not a problem in India. Adoption will happen as soon as people realize that it's helping them in their regular lives; there's not going to be any pessimism on adoption. And if you go to the floor where the expo is, you'll actually see how interested people are in finding out how it's going to improve their lives. But it is the people who make the policies, who make the infrastructure available, who actually make the initiatives available, who need to take the responsibility. And companies like ours, we need to take the responsibility for how we skill people: enable them to understand what is good for them and what is not, make them understand that there is both good and bad and they need to choose the better, so that they are not taken for a ride. So all of us have a role to play, and I think we need to be aware of those roles and we need to play them well. Thank you.

Robert Opp

So actually, those two answers complement each other so nicely, because, as you were saying, Dr. Kofler, there's the governance and the framework, and this is key to the Indian experience as well. Government sets down the framework, the rails, but then the private sector has such an important role in scaling, in innovating, in helping people be successful and interact with that. So I'm going to turn now to Nakul, and a question, because Wadwani AI Global has done a lot of work in the space of multi-stakeholder partnerships and the rollout of AI for specific use cases. From your experience, where have multi-stakeholder partnerships already demonstrated tangible impact in terms of advancing responsible AI for development?

And what are the conditions that helped ensure these collaborations translated into sustained impact rather than just remaining siloed?

Nakul Jain

Thank you for that question, Rob. Also, thank you for having me here. I hope this is working. All right, okay. Thank you, everyone, for being here. So my answer will be from my experience of deploying some of these solutions in the field: what has worked for us and what has not. Now, what we have realized is that building the technology, at least for an applied organization like ours, is the easiest part of the entire spectrum. It's everything around it that becomes tedious for us to get done, which essentially means that there is a need for a multi-stakeholder ecosystem that can bring in its expertise and take care of everything that sits around the technology.

The ecosystem that works is the one that ensures from day one that we are able to do everything that we need to do. What will be the institutional mechanism for getting something done? How will the solution, the use case, be institutionalized in the ecosystem? How will it be embedded within the framework we are speaking about, within the department or ministry we are talking about, within the problem area, and among the people who will be using it? That cannot be an afterthought; that has to come in from day one. And I'll give you a few examples of solutions that we were able to deploy at scale and what helped them work.

So we built a solution in education around oral reading fluency, essentially to assess students on their reading abilities. The technology part, like I said, was not very difficult to do. But this is not something that Wadwani AI Global could have done alone. This required collaboration with government, ownership from government, and, as Arundhati also mentioned, it required policy makers to own this problem. We as an organization came in as a facilitator, but the government of Gujarat was the one who led this entire initiative, who made sure that data was available, who helped us find the right partners who could then be leveraged to annotate some of this data. The other thing: rather than trying to create this solution in silos, we also started thinking about where it would get embedded.

Tomorrow, if this has to be used, and there are tons and tons of applications already being pitched to government for education, pitched to teachers, how do you ensure that they actually get adopted? One way to do that was to understand how the government is currently running its programs. So we started working with the government and their technical partner on this together. Rather than thinking about this as a solution that we have created, where we give you an app, we started figuring out how whatever we develop can plug into the existing system in classrooms, wherever they were, and then how to ensure that there is a monitoring mechanism established, so that the government is fully capable of not just following up on adoption but also figuring out if there's a genuine impact of the technology.

All of this was only possible because we had this collaboration between the government, the technical partner the government was working with, us as the AI partner, and the partners who were working with the government at the school level to help them programmatically: to help teachers build capacity on this, to help teachers understand it, and also to counsel teachers, because there's a constant fear about AI trying to replace jobs. So there's also that handholding required at the field level, and as a tech organization we could not have done that. So you need that partner as well. Moving to a slightly different example, around healthcare: we have done a lot of work in tuberculosis, and some of the things remain common, your government partnership, data collection through them, and your programmatic partnership. But the additional support we got was from ICMR: how can we start thinking about evaluation from day one? Health being a much more sensitive area, directly impacting lives when our models drive those decisions, how do you ensure that evaluation and the success criteria are not an afterthought? So you don't just work with the government; you also work with the agency that will eventually evaluate it, so that from the first day itself you are following which parameters and results you should be optimizing for. These are some of the examples, Robert, that I can think of, where technology, government, and ecosystem partners played a role, came together, and delivered something.

Robert Opp

Thank you, Nakul. And just a quick follow-up on that: how would you describe the space that Wadwani AI Global occupies? You're not government, you're not exactly private sector; you're kind of an integrator, right?

Nakul Jain

Yeah, absolutely. I think what we would like to call ourselves is a convener of technology, especially for social good. Our role essentially is to ensure there is impact leveraging artificial intelligence, and working out who all need to participate is something we do along with partners. We work with governments, whether it requires capacity building, advisory to the government, or actual product development; we help the government with everything. Because what we have also realized is that while government is well-intentioned, there is a need for a lot of hand-holding. They have priorities, and artificial intelligence, while it has now become a buzzword, still needs a lot of hand-holding to make sure that some of these solutions don't just end up living in labs and rooms like this, but actually go to the field.

So our role becomes even more important: to make sure that everyone who needs to participate comes along, participates, and ensures that there is actual impact in the field. Thank you.

Robert Opp

And the reason I asked that is because, to go over to Sachin: there are the roles of private sector innovation, and there's the government role. International organizations like UNDP also bring a kind of multilateral and global perspective on some of these things. But companies like Tata Consultancy Services also serve kind of in the middle there as well. And I'd love your perspective on that same question, which is: where are you seeing multi-stakeholder partnerships resulting in tangible results?


Speaker 1

to contextualize. It's not just true for health; it's true for any domain you look at. And yes, data is fragmented and siloed; that's true even within an enterprise, and of course true at a bigger scale. I personally believe that bringing this data together is important, to create an open data ecosystem, but I believe the problem lies somewhere else. The problem lies in the sensing infrastructure that you need to put together to be able to have high-quality, high-volume, high-velocity data. Take the example of air pollution monitoring. Just think of that: you could probably do a much better job if you had a sufficient number of sensors deployed across the region of interest, and they are probably going to be large in number.

You will have to think of how you make them cheaper, faster, better. And having done that, it will also depend on how you can analyze, process, and derive insights; that's where AI will come in. So I think government can play a very big role in catalyzing this: putting together a great sensing infrastructure could be thought of as part of the digital public infrastructure. On top of that, private and public entities could innovate; academicians and the inventor community could innovate. So what are we doing here? The Indian government has actually been very proactive on the overall responsible AI mission.

I was part of the task force on responsible AI that was set up by the Principal Scientific Adviser to the Indian government, Professor K. Vijay Raghavan. This happened about eight years back, and we tried to study the implications of AI, and the responsible facet of AI, in the Indian context then. Following that, a lot of mission projects were launched, and as part of that, I have seen that there are now different AI CoEs supported by the government. We are part of the AI CoE for sustainability, in collaboration with IIT Kanpur, and it's going live. In fact, some of you might have attended a session by the IIT Kanpur team yesterday on this very topic, where we are looking at a lot of interesting problems in the overall sustainability domain.

So that's just data.

Robert Opp

If you could just mention your other two points super quickly.

Speaker 1

Yeah, yeah. So I'll just mention compute, which is, again, a very important facet. There is a big gap between what is available in the Global North versus the Global South. I have just two ideas to share. One, we should think about how we could repurpose legacy hardware, and you would see that a lot of high-tech companies have innovated along those lines; it's not necessary to have the current latest hardware, you can do a lot of innovation around it. Two, of course, you should explore the new hardware that is becoming available. So TCS, IBM, and the Andhra government have come together to create a quantum valley in India, which could open ways for that.

So I think these are some quick remarks I thought I’d make.

Robert Opp

Great. I mean, the depth is incredible, right? You're talking about these very concrete collaborations that are making this a possibility. Okay, I think we'll go straight into a quicker second round, and then we're going to go for some comments from you. And I want to bridge this now: we've been talking about multi-stakeholder partnerships and how different stakeholders have different roles to play, but what's clear is that we also need collective action. So I wanted to think about how we move toward collective action in these spaces for the responsible use of AI, something that gives that global framing around it.

And as I mentioned at the beginning, one of the things that we have been doing, as part of the Hamburg Sustainability Conference and the Hamburg Declaration on responsible AI for sustainable development, is really looking at this space of collective action. A couple of years ago, we did the first Hamburg Declaration. We have about 50 organizations that have endorsed the declaration, and it has a number of principles around the responsible use of AI. And Dr. Kofler, I'm going to turn back to you to ask a question about that process: since the adoption of the Hamburg AI Declaration, what tangible progress have we seen, and what concrete actions do you think are now required to move from the high-level principles to sustained and scaled impact?

Bärbel Kofler

Well, thank you for mentioning concrete action, because that's actually what it is really all about. We came up with that idea at the Hamburg Sustainability Conference, and with that paper, not because we wanted to create another paper or to invite people to another conference. We really wanted to come up with commitments that every stakeholder has to undertake when they sign that memorandum. My ministry also signed it, and we had concrete, tangible, measurable duties, commitments we have to fulfill. I would love to give you a few examples. For the sake of time, I'm trying to be very short and very brief.

For example, one of the commitments was training people. We committed to training 160,000 people within one year. We have fulfilled that already, and it makes me very proud, because we have already trained 190,000. So that is the first step we committed to. We committed to opening up 12 AI building blocks, especially for applying AI to climate action; we are now at 15. And we also committed to 30 AI diagnostics, where we are now at 15, and we achieved 55 data sets. So that's what it's all about. We are not signing something we don't want to do and then applauding ourselves for a year for having signed something. We want to come up with really concrete steps, and that's what my ministry has done until now.

There are examples all over the world where we are working with partners. We are working, for example, in Kenya on analyzing satellite data and making it available for farmers. We are working with Cambodia, where it's about medicine, detecting cervical cancer, for example. And we are working with the Indian government. Your government is doing a fantastic project including the languages of the subcontinent, analyzing, as we were just talking about, the data sets for the many languages which have to be included in the overall AI framework. So we are supporting Bhashini with nine data sets; we supported the collection. So that's what it's all about at the end of the day, and that will be my last sentence.

I invite everybody in the room, and maybe you can spread the thought, to join the Hamburg Declaration, because we started it, and it's continuing. Government should be part of it, the private sector should be part, civil society should be part, academia should be part, and all should come up with concrete, measurable, tangible commitments which can then be fulfilled.

Robert Opp

I'm glad you mentioned that, because the QR code that you see on the screen here gives you more information. If you would like to endorse the Hamburg Declaration, just go to the website there. We're getting short on time for audience interaction here, but Sachin, really quickly: TCS endorsed the Hamburg Declaration last year. What progress have you seen since then?

Speaker 1

Yeah, so I'll just quickly point out some activities we have been doing since then. As I said, we are part of responsible AI deliberations in India and elsewhere, and we have launched a big program around it, where Carnegie Mellon is now our academic partner; we are also talking to some Indian academics. As you rightly said, these are very hard problems, and they require significant collaboration across sectors. We are also building our own technology to be able to evaluate, calibrate, and help AI engineers build more responsible AI. It's called the Trusty Platform; more details we can share offline. We are also very keen on the greening of AI, and that requires a lot of resource-aware AI work. So this is again very significant, and that's something we are on.

Robert Opp

Fantastic. And you just mentioned greening AI; that's one of the key principles of the declaration. Arundhati and Nakul, I'll turn back to you now for some quick thoughts before we go for some Q&A. As we look ahead, we've talked about the Hamburg Declaration as one sort of platform, but where do you see the strongest opportunities for new types of partnerships going forward? Things that maybe don't exist yet but should? What do you think is the next wave of this?

Arundhati Bhattacharya

Well, you know, as an organization, Salesforce has, since 2014, had an office of humane and ethical use of technology. So this is something that I think most large corporates have to have: a self-regulatory function. This is important because unless you self-regulate, you will have regulators coming down with a very heavy hand at some point or other, and that is not something you want if you want to keep innovating. So that's something I think all of us should look at. Having said that, the other way in which we can take part with the stakeholders is to continue to co-innovate with many of our customers, those who are interested, and to take forward our participation in many of these forums.

For instance, the National Skilling Mission, the skilling mission undertaken by NASSCOM, which is the IT industry body. We also do internships in partnership with AICTE, the All India Council for Technical Education. So there are very many apex bodies willing to give us space for innovation, for the private sector to come in and be part of the initiative. And the advantage of going through them is that they have far better reach into the colleges and communities, which enables us to bring our products directly to them. So I think if we look at all of these, taking them together, that really makes the story much better and much stronger.

Robert Opp

Thank you very much. Nakul, same question to you.

Nakul Jain

So in the interest of time, I'll try to be quick. Two things that we feel are missing in the ecosystem. One is a global repository of solutions, which we are trying to work on, and this is not just a DPG platform or a DPI platform; it can include the private sector as well. If I am a startup in India and I have built a good tool, how do I sell it and deploy it in Ethiopia? It is very difficult right now. So is there an opportunity for us to create a marketplace of sorts that has shared solutions, shared playbooks, shared governance and ecosystem frameworks, and a talent pool that could be leveraged, the people who worked on these solutions?

So I think that's one thing that is currently missing. The other is around AI assurance. Is there an opportunity for us to create regional evaluation hubs for a given region? I see a lot of organizations struggle within a country to get solutions evaluated. Now imagine a situation where we are trying to take a solution from one country to another, but there are no clear guidelines on how the evaluation yardsticks would differ from one country to the next. So that could be another area of collaboration between various organizations who could set this up.

Robert Opp

Great ideas from you both on very concrete things. Okay, so we have time for a couple of questions from the audience. All right, we've got a lot of hands. We're going to take a couple of questions at the same time, and then we'll go back for one more round of answers from the panelists. So I think we'll start here, and then I saw another hand here. We'll try to do three quick ones. Okay, so one, two, three.

Audience Member 1

My question is to Mr. Robert and Nakul. What I want to ask is: in comparison to LLMs, your ChatGPTs and Geminis, what is the value being added by LLMs versus traditional ML? I mean, your classical improvements over functions, et cetera.

Robert Opp

Okay. Then there was a question over here. There are two questions there. Yes, the gentleman in blue, and then beside him.

Audience Member 2

My question is to Mr. Bhadwani. You talked about bringing people together. There are fragmented initiatives, there are fragmented enabling services, but they are competing. So how do we bring them together for a shared purpose, and to create a bigger impact?

Audience Member 3

Okay. Hi, sir. Hi, ma'am. I'm a current undergraduate student of economics at the University of Delhi. My question, Ma'am Bhattacharya: first of all, I would like to understand from your perspective, having served in the private and the public sector, how do you see all this debate around SaaS being dead as a result of all this, while the context, the context window you have, will actually be the most beneficial? One might say that the valuation of all these companies in this space will be doubled. So in that context, how do you create value for small enterprises?

For TCS, for example, the valuation is around $100 billion. Cursor, which is a very new company, has a valuation of $30 billion. So an employee count of 300 is creating that much value, while an employee count of 600,000 is creating the other. So in that context, will the democratization of software and tech actually happen, or is it just big tech that benefits from it? There will obviously be benefits for people here and there, but what about the larger context?

Robert Opp

Okay, so some very technical questions here. Arundhati, do you want to start with that one first about how to create value for small companies?

Arundhati Bhattacharya

Yeah, so you know, over here, I don’t think we are talking merely about the value something creates on a particular day. And by the way, those values keep sliding up and down. Nobody really knows who’s the winner in the race till the race is over, and the race is far from over. So to that extent, I would ask you not to be too pressured or too influenced by things like “SaaS is dead” and “the entire valuation will come to only these few companies.” At the end of the day, if there are no users, what valuation will this company have? You need users. Now who creates users?

Users are created by many, many processes and methods, and just having an LLM is not going to give you all of that. When a company actually works, there are workflows, there are governance rules, there are auditability requirements, there is observability; everything is in there. Therefore, to take one particular development by some particular company and make that kind of a statement is, I think, at this point in time a little too ahead of its time. Things will shake themselves out. Having said that, it’s also true that whether it be TCS, Salesforce, State Bank of India, or any other company, the ways in which we do our work are going to change.

The ways in which people take in technology, and how that influences their lives, will change. Companies that are amenable to the change, companies that take advantage of the change, are the ones that are going to sustain and grow in value. But my sincere request to you, because you’re a youngster: don’t look at market cap while trying to create your company. Do the right things and the market cap will follow. It’s not market cap that makes you want to work; it’s your work and your satisfaction, and being able to create something that really and truly helps people improve their standard of living and improve the way they live in this world. That is what should drive you, not the market cap numbers.

Robert Opp

And you wanted to add quickly to that?

Speaker 1

Yeah, I just want to add to that. Think of an LLM as something that is worldwide, okay? And that’s a great advancement of AI. But you need something that is industry-wise, company-wise, context-wise, task-wise, right? We are far from that. And there are lots of challenges. I mean, just before this, Arundhati and I were discussing the kind of data silos that exist even within an enterprise, right? So we are looking at AI, and agentic AI in particular, as something that is going to bring more connectedness within the enterprise. There is a significant amount of work to be done. But now imagine this combined intelligence, where different agents coordinating and working together will probably help us get more.

So the sum of the parts being more than the whole is a very likely scenario. There are challenges to getting it done, okay? But the nature of work may change, and there is plenty of work coming at us. LLMs have just solved part of the problem.

Robert Opp

Nakul, there were a couple of questions: how do we bring people together for a shared purpose, and the specific question about LLMs versus ML, is that right? Do you want to take these? Sorry, we’ve got just a few minutes before we have to close the session.

Nakul Jain

So, LLMs versus traditional ML. In our experience, what we have realized is that in a lot of places, specifically because we work in the impact sector, a lot of the work we do is for ground-level users, where resources are a big challenge, right? You’re talking about situations where mobile phones are very basic, internet is a challenge, adoption is an issue. In those scenarios, we have often found that traditional ML models have worked much better to serve the purpose, and large language models have not, and in a lot of cases are not even deployable, right? So there are genuine technical issues, because of infrastructure, because of low-resource settings, which have not allowed LLMs to reach where we would want them to reach, right?

Second, in such cases, small language models, or small AI, is what we’ve been moving towards, to serve very specific purposes rather than trying to give general intelligence to, you know, some of these users in the field, right? So that’s a very quick answer; I could have gone on. To answer your question, sir, I don’t think we are trying to eliminate competition. The way we should be thinking about this is that there are different ways organizations can collaborate, right? Some of them could collaborate based on geographical synergies. Some can collaborate based on expertise synergies. Some can collaborate based on what part of the life cycle of this entire AI deployment they focus on.

And absolutely, there will be consortia that are created. They will still continue to compete with each other, and that’s a good thing to ensure that there’s also quality in the… but the idea is to foster collaboration where those synergies are possible; at least we start with that.

Robert Opp

Thank you. Sorry, sir, we’re going to have to close the panel because we have literally 10 seconds left on the counter. I just want to say thank you very much to our panelists. We have gone from the highest-level government frameworks to questions about the competitive landscape and how small companies can exist and prosper, but the through line to all of this is that it doesn’t happen in silos. It cannot happen with only one stakeholder of society. This has to be multi-stakeholder, and we have to commit to collective action. So I again refer you to the QR code if you’re interested in the Hamburg Declaration, which has more information on how to endorse it. Please join me in thanking our panelists for their excellent insights.

And thanks to all of you, and enjoy the rest of the summit. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
R
Robert Opp
1 argument · 138 words per minute · 1956 words · 845 seconds
Argument 1
AI can have powerful positive impact on sustainable development but current evolution is not equitable and could worsen inequality without responsible use
EXPLANATION
Robert Opp argues that while AI has tremendous potential to address persistent development challenges and close gaps in sustainable development, its current trajectory is inequitable. Without responsible deployment and use, AI could exacerbate existing inequalities rather than reducing them.
EVIDENCE
He mentions the need for commitments to responsible use of AI and references the Hamburg Declaration on AI for sustainable development as part of addressing these challenges.
MAJOR DISCUSSION POINT
AI for Sustainable Development and Equity Challenges
AGREED WITH
Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
B
Bärbel Kofler
4 arguments · 160 words per minute · 1225 words · 458 seconds
Argument 1
Innovation exists globally but there’s a power gap in resource distribution – only 17% of venture capital goes to Global South representing 90% of people, and only 0.1% of data center capacity is in Global South
EXPLANATION
Bärbel Kofler argues that the problem is not lack of innovation or ideas globally, but rather a power gap in resource distribution. She emphasizes that innovative people and ideas exist worldwide, but the enabling environment and resources are concentrated in the Global North.
EVIDENCE
She provides specific statistics: 17% of venture capital goes to regions representing more than 90% of the world’s population, and only 0.1% of global data center capacity is in the Global South.
MAJOR DISCUSSION POINT
AI for Sustainable Development and Equity Challenges
AGREED WITH
Arundhati Bhattacharya
Argument 2
Artificial intelligence’s future depends on multi-stakeholder engagement including government, private sector, civil society, and academia
EXPLANATION
Kofler emphasizes that shaping the future with AI requires engagement from all parts of society. She argues that no single stakeholder can address AI challenges alone, and broad societal participation is essential for realizing AI’s benefits.
EVIDENCE
She mentions being on a panel with scientists, academia, and private sector as an example of necessary collaboration, and references the need for broad outreach in societies.
MAJOR DISCUSSION POINT
Multi-stakeholder Partnerships and Roles
AGREED WITH
Robert Opp, Arundhati Bhattacharya, Nakul Jain
Argument 3
Government role is creating frameworks, legal structures, and enabling environments for research, skills training, and ensuring outcomes are available to everyone
EXPLANATION
Kofler outlines the specific role of governments in AI development as providing the structural foundation through legal frameworks, coordination with global partners, and ensuring equitable access. She emphasizes that governments must create conditions where AI advantages are accessible to all citizens and sectors of society.
EVIDENCE
She provides examples including AI for disease detection, climate change predictions, improved administration, and enabling smallholder farmers and remote doctors to access AI technologies.
MAJOR DISCUSSION POINT
Multi-stakeholder Partnerships and Roles
AGREED WITH
Speaker 1
Argument 4
Hamburg Declaration focuses on concrete, measurable commitments rather than just principles – German ministry exceeded training targets (190,000 vs 160,000) and AI building blocks (15 vs 12)
EXPLANATION
Kofler emphasizes that the Hamburg Declaration is about tangible, measurable commitments that stakeholders must fulfill, not just signing another document. She demonstrates this with specific achievements by her ministry that exceeded their original commitments.
EVIDENCE
Specific examples include training 190,000 people versus the committed 160,000, creating 15 AI building blocks versus the committed 12, and achieving 55 datasets. She also mentions concrete projects in Kenya (satellite data for farmers), Cambodia (cervix cancer detection), and India (language datasets for Bhashini).
MAJOR DISCUSSION POINT
Hamburg Declaration and Collective Action
AGREED WITH
Nakul Jain
A
Arundhati Bhattacharya
6 arguments · 158 words per minute · 1779 words · 673 seconds
Argument 1
Technology democratization is essential for impact, as demonstrated by India’s financial inclusion success through biometric identification and mobile networks
EXPLANATION
Bhattacharya argues that technology must be democratized to have meaningful impact, drawing from her experience leading India’s financial inclusion program. She explains how the combination of biometric identification and mobile network expansion enabled successful financial inclusion after 15 years of failed attempts.
EVIDENCE
She provides detailed evidence from India’s PM Jan Dhan Yojana, including the role of unique identification authority for biometric KYC, mobile network spread to 600,000 villages, direct subsidy transfers eliminating middlemen (from 13 paise per rupee reaching beneficiaries to the full rupee), and UPI enabling digital payments that created cash flow records for lending.
MAJOR DISCUSSION POINT
AI for Sustainable Development and Equity Challenges
AGREED WITH
Bärbel Kofler
Argument 2
Populous nations like India cannot achieve deserved standard of living without technology integration
EXPLANATION
Bhattacharya asserts that countries with large populations like India fundamentally require technology integration to achieve adequate living standards for their citizens. She positions technology as not optional but essential for development at scale.
EVIDENCE
She references the 15-year struggle with financial inclusion that only succeeded when technology (biometric ID and mobile networks) was integrated, and mentions how AI will solve problems that otherwise cannot be solved in populous nations.
MAJOR DISCUSSION POINT
AI for Sustainable Development and Equity Challenges
Argument 3
Private sector brings scaling, innovation, and user success through democratizing technology and extensive skilling programs
EXPLANATION
Bhattacharya outlines the private sector’s role in multi-stakeholder partnerships, emphasizing democratization of technology and comprehensive skilling initiatives. She describes how companies like Salesforce contribute through their business model and community engagement.
EVIDENCE
She provides examples from Salesforce’s 1-by-1-by-1 mission (1% profits, products, and time to community), India’s 3.9 million Salesforce-trained ‘trailblazers’ (second largest after US), and extensive work with non-profit sector and skilling programs.
MAJOR DISCUSSION POINT
Multi-stakeholder Partnerships and Roles
AGREED WITH
Robert Opp, Bärbel Kofler, Nakul Jain
Argument 4
Self-regulation through offices of humane and ethical technology use is essential for large corporates to avoid heavy-handed regulation
EXPLANATION
Bhattacharya argues that large corporations must establish self-regulatory mechanisms for ethical technology use to maintain innovation freedom. She suggests that proactive self-regulation prevents restrictive external regulation that could stifle innovation.
EVIDENCE
She mentions Salesforce’s office of humane and ethical use of technology established in 2014 as an example of self-regulatory features that large corporates should implement.
MAJOR DISCUSSION POINT
Hamburg Declaration and Collective Action
Argument 5
Continued co-innovation with customers and participation in national initiatives like skilling missions and technical education partnerships
EXPLANATION
Bhattacharya identifies future partnership opportunities through co-innovation with customers and engagement with national-level initiatives. She emphasizes leveraging existing apex bodies and institutions that have established reach and credibility.
EVIDENCE
She provides specific examples including participation in National Skilling Mission, NASCOM (IT industry body) initiatives, internships with AICT (All India Council of Technical Education), and how these partnerships provide better reach to colleges and communities.
MAJOR DISCUSSION POINT
Future Partnership Opportunities
Argument 6
Market valuations should not drive company creation; focus should be on work satisfaction and improving people’s living standards
EXPLANATION
Bhattacharya advises against being driven by market capitalizations when creating companies, emphasizing that sustainable value comes from meaningful work and positive impact on people’s lives. She argues that market cap follows good work, not the reverse.
EVIDENCE
She references the volatility of valuations and uses examples of how market caps fluctuate, emphasizing that without users, companies have no real value, and users are created through comprehensive processes beyond just having technology.
MAJOR DISCUSSION POINT
Technical Considerations and Value Creation
N
Nakul Jain
9 arguments · 170 words per minute · 1535 words · 539 seconds
Argument 1
Building technology is the easiest part; everything around it requires multi-stakeholder ecosystem with expertise in institutional mechanisms and embedding solutions within existing frameworks
EXPLANATION
Nakul Jain argues that for organizations focused on AI applications, the actual technology development is the simplest component. The real challenges lie in the surrounding ecosystem requirements, including institutional mechanisms, embedding solutions within existing frameworks, and ensuring proper integration from day one.
EVIDENCE
He draws from Wadwani AI Global’s experience of deploying solutions and emphasizes that institutionalization cannot be an afterthought but must be considered from the beginning of any AI project.
MAJOR DISCUSSION POINT
Multi-stakeholder Partnerships and Roles
AGREED WITH
Robert Opp, Bärbel Kofler, Arundhati Bhattacharya
Argument 2
Successful AI deployment requires government ownership, data availability, embedding solutions in existing systems, and monitoring mechanisms from day one
EXPLANATION
Jain outlines the critical success factors for AI deployment based on practical experience. He emphasizes that government ownership and integration with existing systems are essential, along with establishing monitoring mechanisms from the project’s inception rather than as an afterthought.
EVIDENCE
He provides a detailed example of an oral reading fluency assessment solution in education, developed in collaboration with Gujarat government, where government led the initiative, provided data access, helped find annotation partners, and worked with technical partners to embed the solution in existing systems rather than creating a standalone app.
MAJOR DISCUSSION POINT
Concrete Implementation and Success Factors
AGREED WITH
Bärbel Kofler
Argument 3
Partnerships must include programmatic support, capacity building for teachers, and addressing job replacement concerns through field-level handholding
EXPLANATION
Jain emphasizes that successful AI partnerships require comprehensive support beyond technology, including programmatic assistance, capacity building, and addressing human concerns about AI replacing jobs. He argues that technology organizations cannot provide this level of support alone.
EVIDENCE
He describes the need for partners who work with government at school levels to help teachers build capacity, understand the technology, and receive counseling about AI’s role, noting that as a tech organization, Wadwani AI Global could not have provided this comprehensive support independently.
MAJOR DISCUSSION POINT
Concrete Implementation and Success Factors
Argument 4
Healthcare AI requires collaboration with evaluation agencies like ICMR to establish success criteria and optimization parameters from the beginning
EXPLANATION
Jain argues that healthcare AI applications require specialized evaluation partnerships due to the sensitive nature of health decisions. He emphasizes the importance of establishing evaluation criteria and success parameters from day one rather than treating evaluation as an afterthought.
EVIDENCE
He mentions Wadwani AI Global’s work in tuberculosis and collaboration with ICMR (Indian Council of Medical Research) to ensure proper evaluation frameworks are established from the beginning, given that AI model decisions directly impact lives.
MAJOR DISCUSSION POINT
Concrete Implementation and Success Factors
Argument 5
Technology organizations serve as conveners and facilitators, requiring extensive hand-holding with governments to ensure solutions move from labs to field implementation
EXPLANATION
Jain describes the role of organizations like Wadwani AI Global as conveners of technology for social good, emphasizing that their primary function is ensuring actual impact rather than just technology development. He notes that governments, while well-intentioned, need significant support to move AI solutions from research to practical implementation.
EVIDENCE
He explains that their role includes advisory services, capacity building, product development, and comprehensive hand-holding because governments have multiple priorities and AI, despite being a buzzword, still requires extensive support to move from labs to field deployment.
MAJOR DISCUSSION POINT
Multi-stakeholder Partnerships and Roles
Argument 6
Need for global repository and marketplace of AI solutions with shared playbooks, governance frameworks, and talent pools to enable cross-border deployment
EXPLANATION
Jain identifies a critical gap in the current ecosystem: the lack of a comprehensive global platform that enables AI solutions to be deployed across borders. He envisions a marketplace that includes not just solutions but also the supporting infrastructure needed for successful implementation.
EVIDENCE
He provides a specific example of the challenge: ‘If I am a startup in India, I have built a good tool, how do I sell it, deploy it in Ethiopia? It becomes very difficult right now.’ He suggests this marketplace should include shared solutions, playbooks, governance frameworks, and talent pools.
MAJOR DISCUSSION POINT
Future Partnership Opportunities
Argument 7
Regional AI evaluation hubs required to establish clear guidelines for solution assessment across different countries
EXPLANATION
Jain identifies the need for regional evaluation infrastructure to address the challenge of AI solution assessment across different countries. He argues that organizations struggle with evaluation even within countries, making cross-border deployment even more challenging without standardized evaluation frameworks.
EVIDENCE
He notes that many organizations struggle to get solutions evaluated within their own countries and emphasizes the need for clear guidelines on how evaluation standards differ between countries to facilitate cross-border AI solution deployment.
MAJOR DISCUSSION POINT
Future Partnership Opportunities
Argument 8
Traditional ML models often work better than LLMs in resource-constrained, low-infrastructure settings typical of impact sector work
EXPLANATION
Jain argues that in practical deployment scenarios, especially in impact sectors serving ground-level users, traditional machine learning models frequently outperform large language models due to infrastructure and resource constraints. He emphasizes that technical superiority doesn’t always translate to practical utility.
EVIDENCE
He describes working in situations with basic mobile phones, poor internet connectivity, and adoption challenges, where traditional ML models have proven more effective and deployable than LLMs, which often cannot even be deployed in such resource-constrained environments.
MAJOR DISCUSSION POINT
Technical Considerations and Value Creation
AGREED WITH
Speaker 1
DISAGREED WITH
Speaker 1
Argument 9
Small language models and task-specific AI are more suitable for ground-level users than general intelligence solutions
EXPLANATION
Jain advocates for a shift toward small language models and specialized AI solutions rather than pursuing general artificial intelligence for ground-level applications. He argues that specific-purpose AI better serves the needs of users in resource-constrained environments.
EVIDENCE
He mentions that Wadwani AI Global has been moving toward small language models or ‘small AI’ to serve very specific purposes rather than trying to provide general intelligence to field users, based on their practical deployment experience.
MAJOR DISCUSSION POINT
Technical Considerations and Value Creation
S
Speaker 1
3 arguments · 139 words per minute · 853 words · 367 seconds
Argument 1
Infrastructure challenges require sensing infrastructure as part of digital public infrastructure, with government catalyzing deployment and private/public entities innovating on top
EXPLANATION
Speaker 1 argues that the fundamental challenge is not just data fragmentation but the lack of adequate sensing infrastructure to generate high-quality, high-volume, high-velocity data. They propose that governments should catalyze the deployment of sensing infrastructure as part of digital public infrastructure, with private and public entities innovating on top of this foundation.
EVIDENCE
They provide the example of air pollution monitoring, explaining that better outcomes require sufficient numbers of sensors deployed across regions of interest, which need to be large in number, cheaper, faster, and better, with AI helping to analyze and derive insights from the collected data.
MAJOR DISCUSSION POINT
Concrete Implementation and Success Factors
AGREED WITH
Bärbel Kofler
Argument 2
TCS has launched responsible AI programs with Carnegie Mellon partnership and developed Trusty Platform for evaluating and calibrating AI systems
EXPLANATION
Speaker 1 describes TCS’s concrete actions following their endorsement of the Hamburg Declaration, including establishing academic partnerships and developing proprietary technology for responsible AI evaluation. They emphasize the collaborative approach needed for addressing complex responsible AI challenges.
EVIDENCE
Specific initiatives include launching a program with Carnegie Mellon as academic partner, discussions with Indian academics, development of the ‘Trusty Platform’ for evaluating and calibrating AI systems to help engineers build more responsible AI, and focus on ‘greening of AI’ through resource-aware AI work.
MAJOR DISCUSSION POINT
Hamburg Declaration and Collective Action
Argument 3
LLMs solve only part of the problem; industry-specific, company-specific, and context-specific solutions still require significant development work
EXPLANATION
Speaker 1 argues that while Large Language Models represent a significant advancement, they are just one component of a much larger puzzle. They emphasize that substantial work remains in developing industry-specific, company-specific, and context-specific AI solutions.
EVIDENCE
They mention data silos existing even within enterprises, the need for agentic AI to bring more connectedness within enterprises, and the potential for combined intelligence where different agents coordinate and work together, suggesting that ‘the sum of parts could be more than one.’
MAJOR DISCUSSION POINT
Technical Considerations and Value Creation
AGREED WITH
Nakul Jain
DISAGREED WITH
Nakul Jain
A
Audience Member 1
1 argument · 128 words per minute · 43 words · 20 seconds
Argument 1
Inquiry about value comparison between LLMs and traditional ML approaches
EXPLANATION
Audience Member 1 seeks clarification on the comparative value and advantages of Large Language Models (like ChatGPT and Gemini) versus traditional machine learning approaches and classical function improvements.
MAJOR DISCUSSION POINT
Questions and Clarifications
A
Audience Member 2
1 argument · 139 words per minute · 43 words · 18 seconds
Argument 1
Question about bringing together competing fragmented initiatives for shared purpose and bigger impact
EXPLANATION
Audience Member 2 raises the challenge of how to unite fragmented and competing initiatives and enabling services to work toward a shared purpose and create greater collective impact, acknowledging the competitive nature of these initiatives.
MAJOR DISCUSSION POINT
Questions and Clarifications
A
Audience Member 3
1 argument · 183 words per minute · 211 words · 69 seconds
Argument 1
Concern about whether democratization of software benefits small enterprises or primarily big tech companies
EXPLANATION
Audience Member 3, an economics undergraduate, questions whether the democratization of software and technology truly benefits small enterprises or primarily advantages large technology companies. They express concern about value creation disparities between large established companies and smaller, newer entities.
EVIDENCE
They provide specific examples comparing TCS’s $100 billion valuation with 600,000 employees to Cursor’s $30 billion valuation with only 300 employees, questioning the sustainability and equity of such value distributions.
MAJOR DISCUSSION POINT
Questions and Clarifications
Agreements
Agreement Points
Multi-stakeholder collaboration is essential for AI development and deployment
Speakers: Robert Opp, Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
AI can have powerful positive impact on sustainable development but current evolution is not equitable and could worsen inequality without responsible use
Artificial intelligence’s future depends on multi-stakeholder engagement including government, private sector, civil society, and academia
Private sector brings scaling, innovation, and user success through democratizing technology and extensive skilling programs
Building technology is the easiest part; everything around it requires multi-stakeholder ecosystem with expertise in institutional mechanisms and embedding solutions within existing frameworks
All speakers agree that successful AI development and deployment requires collaboration across government, private sector, civil society, and academia, with each stakeholder bringing unique capabilities and expertise
Technology democratization is crucial for equitable impact and development
Speakers: Bärbel Kofler, Arundhati Bhattacharya
Innovation exists globally but there’s a power gap in resource distribution – only 17% of venture capital goes to Global South representing 90% of people, and only 0.1% of data center capacity is in Global South
Technology democratization is essential for impact, as demonstrated by India’s financial inclusion success through biometric identification and mobile networks
Both speakers emphasize that technology must be made accessible and available to all, not concentrated in the hands of a few, to achieve meaningful development impact
Government role is creating enabling frameworks and infrastructure
Speakers: Bärbel Kofler, Speaker 1
Government role is creating frameworks, legal structures, and enabling environments for research, skills training, and ensuring outcomes are available to everyone
Infrastructure challenges require sensing infrastructure as part of digital public infrastructure, with government catalyzing deployment and private/public entities innovating on top
Both speakers agree that governments should focus on creating the foundational infrastructure, legal frameworks, and enabling environments that allow other stakeholders to innovate and deploy solutions
Concrete, measurable commitments are more important than high-level principles
Speakers: Bärbel Kofler, Nakul Jain
Hamburg Declaration focuses on concrete, measurable commitments rather than just principles – German ministry exceeded training targets (190,000 vs 160,000) and AI building blocks (15 vs 12)
Successful AI deployment requires government ownership, data availability, embedding solutions in existing systems, and monitoring mechanisms from day one
Both speakers emphasize the importance of tangible, measurable outcomes and practical implementation over abstract principles or declarations
Traditional ML often more practical than LLMs in resource-constrained environments
Speakers: Nakul Jain, Speaker 1
Traditional ML models often work better than LLMs in resource-constrained, low-infrastructure settings typical of impact sector work
LLMs solve only part of the problem; industry-specific, company-specific, and context-specific solutions still require significant development work
Both speakers agree that while LLMs represent advancement, traditional ML approaches are often more suitable for practical deployment in resource-constrained settings and specific use cases
Similar Viewpoints
Both speakers emphasize that technology is essential for development at scale, particularly in populous nations, and requires careful implementation support to move from concept to practical impact
Speakers: Arundhati Bhattacharya, Nakul Jain
Populous nations like India cannot achieve deserved standard of living without technology integration
Technology organizations serve as conveners and facilitators, requiring extensive hand-holding with governments to ensure solutions move from labs to field implementation
All three speakers agree on the importance of responsible AI development, whether through government frameworks, corporate self-regulation, or academic partnerships, emphasizing the need for ethical considerations in AI deployment
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Speaker 1
Government role is creating frameworks, legal structures, and enabling environments for research, skills training, and ensuring outcomes are available to everyone
Self-regulation through offices of humane and ethical technology use is essential for large corporates to avoid heavy-handed regulation
TCS has launched responsible AI programs with Carnegie Mellon partnership and developed Trusty Platform for evaluating and calibrating AI systems
Both speakers see the need for better coordination and knowledge sharing mechanisms, whether through global platforms or national initiatives, to scale AI solutions and build capacity
Speakers: Nakul Jain, Arundhati Bhattacharya
Need for global repository and marketplace of AI solutions with shared playbooks, governance frameworks, and talent pools to enable cross-border deployment
Continued co-innovation with customers and participation in national initiatives like skilling missions and technical education partnerships
Unexpected Consensus
Traditional ML superiority over LLMs in practical deployment
Speakers: Nakul Jain, Speaker 1
Traditional ML models often work better than LLMs in resource-constrained, low-infrastructure settings typical of impact sector work
LLMs solve only part of the problem; industry-specific, company-specific, and context-specific solutions still require significant development work
Despite the current hype around Large Language Models, both technical experts agree that traditional machine learning approaches are often more practical and effective for real-world deployment, especially in resource-constrained environments. This consensus challenges the prevailing narrative about LLM superiority
Self-regulation as preferable to external regulation
Speakers: Arundhati Bhattacharya, Speaker 1
Self-regulation through offices of humane and ethical technology use is essential for large corporates to avoid heavy-handed regulation
TCS has launched responsible AI programs with Carnegie Mellon partnership and developed Trusty Platform for evaluating and calibrating AI systems
Both private sector representatives agree that proactive self-regulation is preferable to external regulation, which is somewhat unexpected given that self-regulation is often viewed skeptically. Their consensus suggests industry recognition of the need for responsible practices
Overall Assessment

The speakers demonstrated remarkable consensus across multiple key areas: the necessity of multi-stakeholder collaboration, the importance of technology democratization, the government’s role in creating enabling frameworks, the preference for concrete commitments over abstract principles, and the practical advantages of traditional ML in many deployment scenarios. There was also strong agreement on the need for responsible AI development and the value of self-regulation in the private sector.

High level of consensus with significant implications for AI governance and development. The agreement across government, private sector, and implementation organizations suggests a mature understanding of AI deployment challenges and a shared vision for responsible, inclusive AI development. This consensus provides a strong foundation for collaborative action and suggests that multi-stakeholder initiatives like the Hamburg Declaration have potential for meaningful impact.

Differences
Different Viewpoints
Technology approach for resource-constrained environments
Speakers: Nakul Jain, Speaker 1
Traditional ML models often work better than LLMs in resource-constrained, low-infrastructure settings typical of impact sector work
LLMs solve only part of the problem; industry-specific, company-specific, and context-specific solutions still require significant development work
Nakul emphasizes that traditional ML models are more practical and deployable in low-resource settings with basic infrastructure, while Speaker 1 focuses on LLMs as foundational but requiring additional layers of specialized solutions. They approach the technical solution from different practical perspectives.
Unexpected Differences
Self-regulation versus government frameworks
Speakers: Arundhati Bhattacharya, Bärbel Kofler
Self-regulation through offices of humane and ethical technology use is essential for large corporates to avoid heavy-handed regulation
Government role is creating frameworks, legal structures, and enabling environments for research, skills training, and ensuring outcomes are available to everyone
This disagreement is unexpected because both speakers generally support multi-stakeholder approaches, yet they differ on the primary mechanism for ensuring responsible AI – Bhattacharya advocates for corporate self-regulation to maintain innovation freedom, while Kofler emphasizes the need for government frameworks and governance structures
Overall Assessment

The discussion showed remarkably high consensus on goals (responsible AI, equity, multi-stakeholder partnerships) with disagreements primarily on implementation approaches and emphasis rather than fundamental objectives

Low to moderate disagreement level with high strategic alignment. The disagreements are constructive and complementary rather than conflicting, suggesting different stakeholders can pursue parallel approaches toward shared goals. This indicates a mature, collaborative ecosystem where different actors can contribute their unique strengths without undermining overall objectives.

Partial Agreements
Both agree on the critical importance of skills training and capacity building, but disagree on primary responsibility – Kofler emphasizes government’s role in creating enabling environments and frameworks, while Bhattacharya focuses on private sector’s role in democratization and direct skilling implementation
Speakers: Bärbel Kofler, Arundhati Bhattacharya
Government role is creating frameworks, legal structures, and enabling environments for research, skills training, and ensuring outcomes are available to everyone
Private sector brings scaling, innovation, and user success through democratizing technology and extensive skilling programs
Both identify global inequities in AI access and resources, but propose different solutions – Kofler focuses on addressing fundamental resource distribution gaps through government frameworks, while Jain proposes creating new marketplace infrastructure to facilitate cross-border solution sharing
Speakers: Bärbel Kofler, Nakul Jain
Innovation exists globally but there’s a power gap in resource distribution – only 17% of venture capital goes to Global South representing 90% of people, and only 0.1% of data center capacity is in Global South
Need for global repository and marketplace of AI solutions with shared playbooks, governance frameworks, and talent pools to enable cross-border deployment
Takeaways
Key takeaways
AI has tremendous potential for sustainable development but current evolution is inequitable, with massive resource gaps between Global North and South (only 17% of venture capital and 0.1% of data center capacity in Global South)
Multi-stakeholder partnerships are essential – no single actor can successfully deploy AI for development alone; government provides framework and governance, private sector scales and innovates, tech organizations serve as conveners
Technology democratization is crucial for impact, as demonstrated by India’s financial inclusion success that reached 600,000 villages through biometric identification and mobile networks
Building AI technology is the easiest part; the challenge lies in institutional mechanisms, embedding solutions in existing systems, capacity building, and ensuring sustained adoption
Concrete commitments with measurable outcomes are more valuable than high-level principles – Hamburg Declaration signatories must deliver specific, quantifiable results
Traditional ML often outperforms LLMs in resource-constrained environments typical of Global South applications
Self-regulation by large corporations through ethical technology offices is preferable to heavy-handed external regulation
Resolutions and action items
Invitation extended for organizations to endorse the Hamburg Declaration with concrete, measurable commitments rather than just signing principles
German ministry committed to and exceeded specific targets: trained 190,000 people (vs 160,000 target), delivered 15 AI building blocks (vs 12 target)
Ongoing concrete projects identified: satellite data analysis for Kenyan farmers, cervical cancer detection in Cambodia, supporting Indian language datasets for Bhashini
TCS launched responsible AI programs with Carnegie Mellon partnership and developed Trusty Platform for AI evaluation
Continued participation in national initiatives like skilling missions and technical education partnerships
Unresolved issues
How to effectively bridge the massive resource gap between Global North and South in AI infrastructure and funding
Lack of global repository and marketplace for AI solutions to enable cross-border deployment from startups in one country to implementation in another
Absence of regional AI evaluation hubs with clear guidelines for solution assessment across different countries
How to bring together competing fragmented initiatives for shared purpose while maintaining healthy competition
Whether democratization of AI technology will truly benefit small enterprises or primarily advantage big tech companies
How to address data fragmentation and silos that exist even within individual enterprises
Scaling challenges for moving AI solutions from labs to field implementation across different contexts
Suggested compromises
Organizations can collaborate based on different synergies (geographical, expertise-based, or lifecycle stage-based) while still maintaining competitive relationships
Focus on repurposing legacy hardware and exploring new hardware innovations to address compute gaps rather than requiring latest technology
Use small language models and task-specific AI instead of general intelligence LLMs for resource-constrained environments
Embed AI solutions within existing government systems and frameworks rather than creating standalone applications
Combine self-regulation by private sector with government framework-setting to balance innovation with responsibility
Create consortia that collaborate in some areas while competing in others to ensure both cooperation and quality through competition
Thought Provoking Comments
I would say it’s not an innovation gap, it’s a power gap. Because innovative people are existing around the globe. Ideas are created around the globe in all spheres of society… If you look how venture capital, for example, is distributed, and we know that, I think, it’s 17% of venture capital only is in those parts of the world who are presenting more than 90% of the people.
This comment reframes the entire AI equity discussion by challenging the common assumption that developing countries lack innovation capacity. Instead, it identifies the real issue as unequal access to resources and power structures. The specific statistics about venture capital distribution provide concrete evidence of systemic inequality.
This comment shifted the conversation from technical solutions to structural inequalities, setting the tone for subsequent discussions about infrastructure, governance frameworks, and the need for deliberate policy interventions to level the playing field.
Speaker: Bärbel Kofler
Any improvement in technology, if it is not really democratized, then it doesn’t really have an impact. If you want it to have an impact, you need to democratize technology… A populous nation like India can never really have the standard of living that it deserves unless technology is a part of the play.
This insight connects technology democratization directly to societal outcomes and quality of life, moving beyond abstract discussions to concrete human impact. It establishes democratization as a prerequisite for meaningful technological progress rather than an optional add-on.
This comment provided the philosophical foundation for the subsequent detailed case study of India’s financial inclusion success, demonstrating how democratized technology can transform entire populations’ economic prospects.
Speaker: Arundhati Bhattacharya
Building technology, at least for an application organization like ours, is the most easiest part of the entire spectrum. It’s everything around it that becomes, you know, tedious for us to sort of get done, which essentially means that there is a need for a multi-stakeholder ecosystem.
This comment challenges the tech-centric view that dominates many AI discussions by revealing that technology development is actually the simplest component. It highlights the critical importance of implementation ecosystems, partnerships, and institutional mechanisms.
This insight fundamentally reoriented the discussion toward the practical challenges of deployment and scaling, leading to detailed examples of successful multi-stakeholder collaborations and the specific roles each partner must play.
Speaker: Nakul Jain
The problem lies in the sensing infrastructure that you need to put together to be able to have high quality high volume high velocity data… government can play a very big role in sort of catalyzing this and enabling sort of, you know, putting together a great sensing infrastructure could be thought of, you know, part of the digital public infrastructure.
This comment introduces a sophisticated understanding of AI infrastructure needs, moving beyond simple data collection to the systematic sensing infrastructure required for effective AI systems. It positions this as a public good that governments should provide.
This technical insight elevated the discussion to consider AI infrastructure as a fundamental public utility, similar to roads or electricity, which influenced subsequent conversations about government roles and public-private partnerships.
Speaker: Sachin Loda
We want to come up really with concrete steps… We are not signing something we don’t want to do and then we have another year of time and applauding ourselves for signing something. We want to come up really with concrete steps and that’s what my ministry did until now.
This comment addresses a critical weakness in international cooperation – the tendency toward performative commitments without accountability. It demonstrates how to move from aspirational declarations to measurable outcomes with specific targets and timelines.
This emphasis on concrete accountability transformed the discussion of the Hamburg Declaration from a theoretical framework to a practical tool for driving real change, encouraging other participants to think about measurable commitments rather than abstract principles.
Speaker: Bärbel Kofler
If I am a startup in India, I have built a good tool, how do I sell it, deploy it in Ethiopia? It becomes very difficult right now. So is there an opportunity for us to create a marketplace of sorts that has shared solutions, shared playbooks, shared governance ecosystem frameworks has a talent pool that could be leveraged?
This comment identifies a critical gap in the global AI ecosystem – the lack of mechanisms for cross-border knowledge and solution transfer. It proposes a concrete solution that could dramatically accelerate AI adoption in developing countries by leveraging existing innovations.
This insight opened up a new dimension of the discussion about creating global infrastructure for AI solution sharing, moving beyond individual country initiatives to think about systematic knowledge transfer mechanisms across the Global South.
Speaker: Nakul Jain
Overall Assessment

These key comments fundamentally shaped the discussion by challenging conventional assumptions and introducing new frameworks for understanding AI development challenges. Kofler’s ‘power gap’ insight reframed the entire equity discussion, while Bhattacharya’s democratization principle and Jain’s observation about implementation challenges shifted focus from technology creation to deployment ecosystems. Loda’s infrastructure perspective elevated the conversation to consider AI as public utility, and the emphasis on concrete accountability transformed abstract principles into actionable commitments. Together, these insights created a comprehensive framework that moved the discussion from theoretical concerns to practical solutions, emphasizing the critical importance of multi-stakeholder collaboration, systematic infrastructure development, and measurable outcomes in achieving responsible AI for sustainable development.

Follow-up Questions
How do we actually address some of those challenges? How do we get some of the measures in place or the kinds of commitments, principles that we need to have as an overall community, especially in the international development community, but beyond in terms of private sector and others as well?
This foundational question about implementing responsible AI governance frameworks was posed at the beginning but requires ongoing exploration as the field evolves
Speaker: Robert Opp
How can we create a global repository of solutions that includes both public and private sector innovations, with shared playbooks and governance frameworks?
This addresses the challenge of scaling AI solutions across different countries and contexts, which currently lacks a systematic approach
Speaker: Nakul Jain
How can we establish regional evaluation hubs for AI solutions, given that evaluation guidelines differ significantly between countries?
This is critical for ensuring AI solutions can be safely and effectively deployed across different regulatory and cultural contexts
Speaker: Nakul Jain
How do we repurpose legacy hardware for AI applications in resource-constrained environments?
This technical challenge is important for bridging the compute gap between Global North and Global South
Speaker: Sachin Loda
How can we build comprehensive sensing infrastructure as part of digital public infrastructure to enable better AI applications?
This infrastructure question is fundamental to generating the high-quality data needed for effective AI solutions in areas like environmental monitoring
Speaker: Sachin Loda
What is the comparative value being added by LLMs versus traditional ML approaches in development contexts?
This technical question is important for understanding which AI approaches are most effective for development challenges
Speaker: Audience Member 1
How do we bring together fragmented and competing initiatives for shared purpose to create bigger impact?
This addresses the coordination challenge in the AI for development ecosystem where multiple organizations work on similar problems
Speaker: Audience Member 2
Will democratization of software and technology actually happen, or will only big tech companies benefit from AI advancement?
This question about market concentration and accessibility is crucial for understanding whether AI will truly serve inclusive development goals
Speaker: Audience Member 3
How can small and medium-sized enterprises be brought into position to participate and benefit from new AI technology, not just big players?
This addresses the equity gap in AI adoption and the need for inclusive economic participation in the AI economy
Speaker: Bärbel Kofler
How do we ensure that research results and datasets are available for everybody through open source approaches?
This relates to the fundamental question of knowledge sharing and preventing AI benefits from being concentrated among a few organizations
Speaker: Bärbel Kofler

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How the Global South Is Accelerating AI Adoption: Finance Sector Insights

How the Global South Is Accelerating AI Adoption: Finance Sector Insights

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel framed finance’s shift from frontier AI to “institutional AI,” where legitimacy and trust now dominate adoption decisions [4-10][17-19]. John stressed that trust is the financial business model, so only reliable, auditable, resilient AI will be accepted [5-8][10]. Moderator Bharat asked RBI’s Suvendu Pati how India regulates AI in its tightly regulated sector [26-27]. Suvendu said the RBI enables responsible AI with a tech-neutral, principle-based framework already embedded in consumer-protection and outsourcing rules [29-33][38-41][44-46]. He placed accountability on AI deployers (banks and fintechs), requiring a “glass-box” that informs users and audits for bias, drift, and degradation [48-52][184-188][190-196]. The RBI runs fintech forums, a regulatory sandbox, and is building an “AI sandbox” to give smaller firms data and compute [280-286][291-295]. JPMorgan’s Terah Lyons cited AI in fraud detection, payments, markets and compliance, noting India’s principle-based, tech-neutral rules enable proportional risk experimentation [77-80][82-84][86-88]. Ashutosh Sharma said AI cuts operating costs in India’s $2 trillion credit market, improves thin-file underwriting with unstructured data, and expands reach via conversational interfaces for underserved users [108-112][115-118][119-126]. He added emerging best practices such as human-in-the-loop oversight and strict data-privacy safeguards [127-131]. Razorpay’s Harshil Mathur argued AI’s data-processing power enables “agentic commerce,” turning 300-400 million UPI users into online shoppers through voice-first, multilingual experiences [143-166][167-168]. He warned that India’s data-residency rules and LLM hallucination risks create regulatory challenges and liability for financial firms [300-307][308-319][320-327].
The panel concluded that trustworthy, language-aware AI can expand financial inclusion via voice-based banking, and that transparent, regulator-aligned infrastructure is essential for scaling AI in the Global South [338-347][368-371][373-375][18][19][376-379].


Keypoints

Major discussion points


Trust and legitimacy are the scarce assets for AI in finance. John Tass-Parker frames the shift from “frontier AI” to “institutional AI” and stresses that “trust is not a feature, it’s the business model” and that institutions will reward systems that demonstrate reliability, auditability and resilience rather than raw performance [4-10][17-19].


The RBI’s policy stance is to enable responsible AI while staying technology-neutral. Suvendu K. Pati explains that the Reserve Bank of India focuses on innovation enablement, risk mitigation, and a set of seven “principles or sutras” that have been adopted government-wide; the regulator emphasizes a “tech-agnostic” approach, consumer-protection objectives, and a willingness to nudge the sector rather than impose heavy-handed rules [28-40][56-63][65-68].


Financial institutions are positioned as the primary custodians of AI trust. The regulator clarifies that only the “regulated entities” (banks, NBFCs, fintechs) are subject to oversight, and they must ensure “glass-box” transparency, human-in-the-loop controls, and robust audit frameworks to capture incremental AI risks such as bias, model drift, and degradation [171-188][189-196].


Concrete AI use-cases are already delivering value and shaping future strategies. JPMorgan Chase cites fraud-remediation, payments, markets and compliance as high-impact areas [78-81]; Razorpay’s Harshil Mathur describes “agentic commerce” – voice-first, multilingual conversational interfaces that can unlock the 300-400 million Indian UPI users who currently do not shop online [143-166]; other speakers highlight productivity gains, biometric payments, AI-led collection agents, and the potential to bring financial services to underserved populations [260-268][270-277].


Key challenges revolve around data residency, compute infrastructure, and model reliability, prompting new regulatory tools. Razorpay points to Indian data-residency rules that block many foreign LLMs and the scarcity of “cutting-edge” compute in Indian data centres [300-311]; the RBI counters with regular fintech-engagement forums, a “FinQuery/Finteract” series, and plans for an AI sandbox that provides affordable data and compute to smaller players [280-294][306-311].


Overall purpose / goal of the discussion


The panel was convened to explore how the financial sector, particularly in the Global South, can transition to “institutional AI” by building trustworthy, auditable, and regulator-aligned systems. Participants shared regulatory frameworks, industry best practices, and investment perspectives with the aim of shaping a scalable, responsible AI adoption roadmap that enhances productivity, financial inclusion, and innovation while safeguarding consumers.


Overall tone and its evolution


The conversation began with a formal, forward-looking introduction that highlighted the strategic importance of trust. As regulators and industry leaders spoke, the tone remained professional yet increasingly optimistic, emphasizing enablement, “nudges,” and collaborative frameworks. When challenges such as data residency and model hallucinations were raised, the tone shifted to constructively critical, acknowledging real obstacles but quickly moving to proposed solutions (e.g., AI sandbox, human-in-the-loop). The closing remarks returned to a hopeful, visionary tone, envisioning AI-driven financial inclusion and personalized services for all citizens.


Speakers

Ashutosh Sharma – Investor in India’s fintech ecosystem; focuses on AI strategy and adoption in finance. [S1]


Terah Lyons – JPMorgan Chase representative discussing trusted AI and its impact on financial services. [S2]


Harshil Mathur – Executive at Razorpay working on AI-based payment solutions and agentic commerce. [S4]


Suvendu K. Pati – Chief General Manager & Head of FinTech, Reserve Bank of India; regulator on AI governance in the financial sector. [S5]


Bharat – Moderator of the panel discussion. (no external title provided)


John Tass-Parker – Leads Policy Partnerships at JPMorgan Chase; focuses on AI governance and responsible adoption in financial services. [S11]


Additional speakers:


Swendu Pati – RBI regulator (same individual as Suvendu K. Pati) mentioned in the transcript. [S5]


Full session report: Comprehensive analysis and detailed insights

John Tass-Parker opened the session by framing the financial sector’s AI journey as a shift from “frontier AI” to what he termed institutional AI – a stage where success is judged not by raw model capability but by the technology’s legitimacy. He emphasized that “trust is not a feature, it’s actually the business model” of finance, arguing that only systems that can demonstrably deliver reliability, auditability and resilience will win approval from boards, regulators and customers [5-7][4-10][17-19].


Suvendu K. Pati of the Reserve Bank of India (RBI) then outlined the regulator’s enable-by-design, technology-neutral, principle-based framework. He explained that the RBI seeks to nurture responsible AI adoption rather than impose heavy-handed rules, nudging innovation while managing risk. The framework is built around seven principles (referred to as “sutras” in the RBI report) that have been accepted by the Indian government [56-63][38-41]. Pati highlighted the RBI’s “tolerant and differentiated” stance, noting that AI is a probabilistic technology and the regulator should adopt a flexible approach when embedding it in finance [65-68].


Following this overview, Pati detailed the accountability expectations for AI deployers – banks, NBFCs and fintechs. Responsibility, he argued, rests with the entity that puts the model into service, not with the model developer. He called for a shift from “black-box” to “glass-box” AI, requiring that customers be informed when they are interacting with an AI system and that a non-AI alternative be offered. Institutions must embed board-level AI policies, internal audit mechanisms and continuous monitoring for bias, model drift and degradation as part of a comprehensive lifecycle-management approach [171-188][189-196].


To operationalise these principles, the RBI has instituted regular industry engagement through the FinQuery/Finteract series, reaching over 2,000 entities [280-294]. It is also developing an AI sandbox that will complement the existing regulatory sandbox (in place since 2019) by providing smaller fintechs with affordable data and compute resources [291-295]. The RBI’s own AI model, MuleHunter.ai, is already deployed across 26 banks, illustrating a hands-on approach to building trustworthy AI tools [291-295].


From the private-sector side, Terah Lyons of JPMorgan Chase highlighted high-impact AI use cases already delivering value: fraud and scam remediation, payments optimisation, market-trading analytics and compliance monitoring. She stressed that the sector’s principle-based, tech-neutral regulatory environment has allowed banks to experiment proportionately with risk, and that the governance practices developed in finance can serve as a template for other industries [78-84][85-87].


Ashutosh Sharma, representing India’s fintech investment community, argued that AI is strategically vital for the country’s $2 trillion credit market. By automating 3-5% of operating expenses, AI can dramatically improve productivity; more importantly, it can enrich “thin-file” borrowers with unstructured data, enabling underwriting for large segments of the informal economy. He cited voice-first, conversational interfaces as a way to reach users who find current app-based onboarding complex, and advocated human-in-the-loop oversight to safeguard outcomes. Sharma also noted the shift from OTP to biometric UPI payments and highlighted AI-driven collection agents that handle 60-70% of first-30-day collections [106-112][115-126][127-132].


Harshil Mathur of Razorpay expanded on the agentic commerce vision, describing how AI’s ability to process massive data volumes can power voice-first, multilingual interactions that could convert the 300-400 million UPI users who currently only pay bills into active online shoppers. He warned that India’s data-residency requirements block many foreign large-language models, and that even low-rate hallucinations pose unacceptable liability for financial decisions. Consequently, he called for India-hosted model infrastructure and robust guard-rails (monitoring hallucinations, ensuring data-residency compliance) to guarantee accurate, trustworthy outputs [134-141][143-166][320-327]. Beyond payments, Mathur gave concrete examples of AI helping farmers choose crops and small retailers compete with large supermarkets, illustrating broader societal impact [300-319].


Across the panel there was strong consensus on the centrality of trust: John’s framing of trust as the business model, Tara’s emphasis on trusted AI for fraud and compliance, Suvendu’s glass-box transparency mandate, and Harshil’s insistence on reliable, auditable outcomes all converge on the view that AI must be trustworthy to be adopted in finance. The speakers also agreed that a principle-based, technology-agnostic regulatory approach is sufficient, with existing consumer-protection and IT-outsourcing rules already covering many safety aspects [78-84][38-41].


Points of tension centred on practical constraints rather than philosophical disagreement. Both Suvendu and Harshil acknowledged that India’s data-residency rules and the lack of domestic LLM infrastructure create regulatory gaps that hinder adoption [38-41][303-308]. While Suvendu advocated mandatory glass-box disclosure, Harshil focused on internal safeguards (guard-rails, hallucination monitoring, data-residency compliance) without opposing user-facing transparency [180-188][134-141]. Regarding automation, both Ashutosh and Harshil highlighted the importance of human oversight: Ashutosh explicitly recommended a human-in-the-loop approach, and Harshil described AI-driven agents as augmenting, not replacing, human decision-making [127-132][134-141].


In future bets, the panel identified three “big bets” for AI in finance: (1) financial inclusion via AI-enhanced underwriting for thin-file borrowers; (2) voice-first, multilingual banking that lowers friction for hundreds of millions of unserved Indians; and (3) ultra-personalized, low-cost servicing that could drive a “Viksit Bharat” – a digitally empowered India. Realising this vision will require robust governance (board-level AI policies, glass-box transparency, human-in-the-loop controls), continued regulator-industry collaboration (ongoing FinQuery forums, AI sandbox implementation), and resolution of technical bottlenecks (domestic compute, data-residency compliance, mitigation of LLM hallucinations). The panel’s optimism – from Suvendu’s language-driven services, Mathur’s cost-down and personalization pathways, Sharma’s “Viksit Bharat” bet, to Terah’s vision of an AI advisor in every pocket – underscores a shared belief that coordinated policy and industry action can make AI a catalyst for inclusive, resilient financial services [338-347][368-371][373-375][376-379].


Session transcript: Complete transcript of the session
John Tass-Parker

Hello everyone, my name is, oh sorry, we’ve got a photographer here now, so we’re going to take our photo. False start, sorry, bear with us. Well, now that we’ve got the most important thing out of the way, we’ll get started. Hello everyone, my name is John Tass-Parker. I lead policy partnerships at JPMorgan Chase, and I just wanted to firstly thank everyone for being here for this very important conversation. When people talk about AI, the conversation tends to focus on model breakthroughs, speed, capability. But in finance, which our wonderful panellists here represent, that’s never been the real question. We’re really moving from this era of frontier AI, in our world certainly, to an era of institutional AI. And in this phase the hard problem is not actually the capability itself; it’s legitimacy and trust. Financial services is one of the most regulated sectors in the global economy, and yet it’s consistently been one of the first adopters of AI and all new

technologies. Why? Because in finance, trust is not a feature. It’s actually the business model. Institutions only absorb systems they trust. The C-suite can only scale what their boards can govern. Regulators can only enable what they can supervise. And increasingly, those that can demonstrate reliability, auditability, resilience, not just model performance, will be the ones that are rewarded. The more important story is coming into focus in rooms like this. It’s the infrastructure enabling institutional AI: model risk management, oversight, explainability, cybersecurity, regulatory engagement. Finance has had to learn how to deploy these incredibly powerful systems inside real-world guardrails. And that’s why conversations like this matter beyond this room. And that’s why this conversation, frankly, matters not only for our financial and banking sectors, but beyond that as well.

If we want AI to drive productivity for small business, for farmers, for teachers, for local government, for state government, and internationally, across the global south, then trusted deployment is what unlocks it. Capability is increasingly being commoditized. It’s the legitimacy that is the scarce attribute here. Today’s discussion is about how we build systems that institutions will actually absorb and how finance can help shape a framework for responsible, scalable adoption. With that, I’m delighted to hand it over to Bharat to set the broader context for how we think about safe and trusted AI globally.

Bharat

Thank you, John. It is my honor to moderate this discussion with a truly distinguished panel. So without further ado, let me just jump straight into it: capitalizing the artificial intelligence moment for finance. The financial sector, as we all know, is one of the most regulated sectors in our country, in India, and in most parts of the globe. So I think it’s appropriate to turn to the regulator from India, Mr. Suvendu Pati from RBI, who’s to my right. Suvendu ji, the financial sector has been one of the earliest adopters of AI, despite being one of the most regulated sectors, as I mentioned. Given this dichotomy, how is India approaching AI regulation in finance?

Suvendu K. Pati

Yeah, thank you, Bharat, and thank you, everyone, for having me here. I would begin by saying that “regulating AI” is not exactly phrasing I would entirely agree with; rather, I would say that we are here to enable responsible adoption of AI in the financial sector. That would be the overall approach to this technology, I would say, as the Reserve Bank of India understands it. And why would I say that? Clearly, we recognize the potential of this new technology. Although it’s not very new in that sense, it has really come into the limelight over the past five years. And that’s because, you know, data is one of the key ingredients it thrives on.

And we had constituted an external expert committee, of which I was a member, to look at this sector and look at this technology, how it can be embedded into the financial services segment. So in our approach, we wanted to be slightly more nudging towards enabling innovation in some sense, because unless we play around with this technology and experiment enough, we would never utilize its full potential. So basically it is concentrated towards, you know, innovation enablement as well as risk mitigation. The risks that have been talked about, bias, accountability, auditability, explainability, these are pretty well known. And these need to be managed in a way so that ultimately we come out with the principles of enhancing trust, which is also a fundamental attribute of the financial sector.

And in terms of regulation, the Reserve Bank’s approach has been largely tech-neutral. It’s tech-agnostic in some sense, because most of the time, you know, new technologies, new things would keep evolving. But, for example, safety or consumer protection, not doing consumer harm, is a good stated objective to pursue irrespective of what technology you adopt. Similarly, on IT services outsourcing guidelines, on, you know, managing concentration risk, there are already existing guidelines which do provide guidance to the regulated entities like banks and NBFCs on how they manage their affairs. So in some sense, the consumer protection guidelines also do cover some of the safety aspects that we would generally talk about. So in some sense, there is regulation which is in place.

There is guidance which is already in place. It’s only that, because of this transformational technology, there is a need to look at it from a new-technology lens and provide any incremental guidance that is required. And that’s the precise point we have come out with in this report. One of the things we expect institutions to take forward is that the entire lifecycle management of AI should be a thought process. The institutions need to look at the liability and accountability framework in a much different way. Our expectation is that customers need to be protected in all cases. So it’s not a question of the model developers; it’s about the model deployed by the entities.

The responsibility should rest with the model deployers, which are the regulated entities in this case. And therefore, there are three or four additional dimensions which need to be looked at in terms of supervision, in terms of the internal audit assurance framework. How do you audit, how do you validate, or how do you improve your product approval process to capture the additional incremental risks on account of AI? So these are some of the additional things that we are looking at to provide some nudge. And one principle that we had come out with: within the report, there are seven principles, or sutras, that the report talks about, and these have been adopted. I’m happy to report that these have been adopted by the government of India for implementation across sectors.

So these are generic principles and they have found acceptance. One of the principles that we have talked about there is innovation versus restraint: everything else remaining constant, entities should prioritize innovation rather than restraint. So that is a nudge, an innovation enablement, that we are trying to give to the sector. They should feel comfortable with this. So our whole approach is optimistic. We want people to experiment, adopt it responsibly, but think creatively in terms of the liability framework, revisit the accountability framework, have a board governance policy in place, and improve their internal systems and processes to give comfort not only to their own set of employees, but to other stakeholders,

about this new technology. All said and done, this is a probabilistic technology. There are bound to be some mistakes here and there. So we need to have a very tolerant and differentiated approach when we embed this into the financial services where people’s money is involved. I will stop here, but we’ll talk something more later.

Bharat

Thank you, Suvendu ji, for that insight. If I could now turn towards the global view: our employer, J.P. Morgan Chase, is one of the world’s largest deployers of artificial intelligence. Terah, in terms of trust, what are some of the most impactful use cases trusted AI is being leveraged for in finance in your purview?

Terah Lyons

We joke that we shouldn’t worry about AI until we figure out A/V. So I guess this is a perfect example of that. Thanks for the question, Bharat. I think maybe the first thing to say about this, and this probably isn’t news to this room especially, is that AI has been used in finance in deployed settings for over a decade. And at JPMorgan Chase, we’ve been using it across use cases spanning our bank, starting first with the era of analytic tools, moving into machine learning capabilities, now in the direction of large language model deployment, and looking directionally towards the era of agentic capabilities and beyond. And spanning all of those, I think the most impactful use cases that we have seen are certainly in fraud and scams remediation, which is just a huge priority for the entire sector.

In payments, there are some really exciting applications, and in markets as well. And honestly, in compliance use cases for us too, just given the focus that we have on ensuring that we’re being compliant with our regulatory requirements. I also just want to pick up on a couple of things that were previously mentioned that I think are worth underscoring. And one of those points was the point that you made, Mr. Pati, about one of the strengths of the financial sector regulatory approach being the principles-based, technology-neutral approach that our regulators have taken. And I think it has allowed banks to experiment to a wide degree with the types of techniques that I just talked about.

Well, thinking about the proportionate risk of each one of those use cases as we are deploying. So I think that’s been really key. And I think the second point to underscore, which you had mentioned previously and which I think is a really good one for us to address as well, is that, because of the strength of the financial sector’s approach to AI governance, there are really useful lessons that can be exported from this sector in considering questions of oversight and regulatory control. That speaks to the sutras that you mentioned being adopted more widely across the economy in the RBI report, which I think are really well aligned to wider consideration just beyond, you know, the banking sector.

Bharat

Thank you, Terah. And I think now we move to the more important issue of putting money into this particular industry. Ashutosh is one of the leading deployers of finance in India’s fintech ecosystem. What makes AI so strategic in your view for the sector? And what are some of the best practices you see being adopted by fintechs, particularly to build trust in AI?

Ashutosh Sharma

Super. Thank you so much for having me here. I think over the last two, three, four days, folks in the room have probably attended 5, 10, 15 such sessions, maybe more. And I think if there’s one takeaway that you have taken with you, it is that AI is going to change almost everything. And so it will the financial services sector. I think this is equally, in fact more importantly, applicable to India, in a bigger measure than anywhere in the world. And the reason, I would say, is threefold. The first is unit economics. Let’s take an example. The Indian credit market is $2 trillion in value. We spend anywhere from 3% to 5% on OPEX. Just on OPEX, we invest $60 to $100 billion a year.

And what AI can do strategically to improve productivity, and therefore make these businesses much healthier, is only the beginning of the journey we are taking. I think the second point of strategic importance is risk. A large section of our economy in India is unformalized. What I mean is that, in credit parlance, it’s called a thin-file issue: for a large section of society, we don’t have enough data points, enough metrics, to make the file thick enough for you to underwrite them. Now with AI, because of the technology’s ability to use unstructured data, you can actually very quickly and in a very cost-effective way turn that thin file into a thick file.

So again, I think underwriting risk for a large section of society in India will now be possible with this. The last, well, not the last, but one of the more important other points is reach. Buying a financial product is not like buying a shirt on Myntra or ordering food on Swiggy or ordering a saree on Meesho. This is a complex product. It needs engagement. The app or whatever platform you’re using asks you a bunch of questions before you even decide. Today, again for a large section of Indian society, it’s very hard to engage with that app. It’s complex. Now imagine a world tomorrow where you can speak to that app. That enables reach of financial products and financial services to, again, a very large section of society.

So I think it’s extremely, extremely strategic from that standpoint. As for best practices, look, we are too early. We can only talk about practices; whether they are best or not, only time will tell. And because we are early, and because of what sir said, a financial services transaction is a high-impact transaction for anyone, having a bot run a bank, I think, is not advisable. So one of the practices that good fintech companies are using is keeping a human in the loop: the technology can prepare a file, but in the end it’s a human who decides. The second thing, again, is data. While data is of primary importance in the AI world, a lot of it is sensitive data that you, as a fintech or financial services product provider, have. So ensuring at all times that you are following the DPDP guardrails is again important. This is just a start, and it will evolve, but I think it’s a good thing that we’re following these.

Bharat

Thank you, Ashutosh. Turning now to the person who’s actually deploying the money, which is Harshil. That’s the pointy end. Do you really believe that this is AI’s big moment in finance? I gather at Razorpay you are rolling out AI-based payment solution models. How do you think this will transform the payments landscape?

Harshil Mathur

First of all, just from a back-end usage, like my colleague spoke about, I think finance typically deals with large volumes of data. Large volumes of data are generally harder for humans to really skim through. We always have to use machines and software to run through it. AI makes that job much, much easier. Anywhere large volumes of data have to be interpreted and inference has to be drawn, I think you need systems to do that. AI is a system that allows you to do it at far more data points than was possible in older systems. You can only do so much analysis on Excel sheets or older software, but with AI you can do 1000x more.

So I think just this advantage, and things like underwriting and risk management and identifying fraud and the multiple things that the finance ecosystem has to do, becomes increasingly important. So I think that’s why finance has been one of the earliest adopters: it’s just natural, because the system is so much better than the previous systems. Coming to payments, one of the things that we’ve done is we’ve taken a very early bet on agentic commerce, and the reason is fairly simple: there are 300 to 400 million Indian consumers who are on UPI today, on digital payments today. Less than 200 million of those actually shop online. But if you peel it even further, and this is based on data that we see at Razorpay, less than 10 million of those users do 70% of all commerce in India.

Just 10 million, in a country of a billion and a half, do 70% of all commerce online. And that’s because, like he said, the commerce systems we have built so far are not natural to most people in India. We’ve built apps, we’ve built all the access available, but while the access is there, the accessibility is missing. Because Indians don’t buy stuff the way Americans do. The way we have built our apps is the way Americans shop. It’s like a supermarket: everything is available, you pick and choose yourself. Indians shop at retailers, where you go and talk. You say, hey, I want to buy this. He tells you, hey, why don’t you buy this instead, and so on.

We are conversational in commerce. And that’s why the app ecosystem we have so far has only penetrated 10 million or maybe 15 million. The rest of India needs conversations. Take the example of travel. There are OTAs available everywhere, yet $50 billion of travel is purchased through agents on the ground, because people want to talk before they make a booking. 95% of insurance in India is sold through offline brokers. There’s Policy Bazaar, and there are so many online brokers available which will give you far cheaper insurance and will not mis-sell you insurance, but people still trust their local insurance broker. Because Indians want to converse before they buy. They want to ask 20 questions about what they’re buying, and that’s hard to do in the apps that we have so far.

And I think agentic commerce is that next wave which will unlock the next form of commerce for the next billion people, who, in spite of all the apps being available, are not really shopping online, are not really consuming online. They may be paying their bills online, but that’s it, just because they don’t want to stand in line. Everything else they’re still doing through offline channels. And if we can bridge that gap through agentic commerce, which is voice-first, which is multilingual, which is conversational, I think we can unlock commerce for a large volume of Indians who have not come online properly.

Bharat

Thank you, Harshil. I think the next angle which I’d like to touch upon is elevating deployers as key custodians of trust. So Suvendu ji, the RBI has traditionally been ahead of the curve in comparison to some other sectors, due to key initiatives which you’ve promulgated such as the FREE-AI Committee and its very progressive policy recommendations. If I may ask, is there a distinction in your approach between regulating AI developers and deployers?

Suvendu K. Pati

See, under the mandate given to the Reserve Bank of India, under the Reserve Bank of India Act or the Banking Regulation Act, we can regulate only the regulated entities, like the banks, non-banking financial companies, fintechs and so on and so forth. Model developers would strictly fit into the category of IT or technology companies. So within our remit, or the official mandate that we have, we really cannot regulate or prescribe rules for them. What we are looking at is the deployment point of view. And so our guidance, and I would refrain from using the word regulation in this context, would be towards the deployers, which are the regulated entities.

And some of these, I would say, are already in place through various guidelines; I have talked about IT outsourcing, third-party dependencies, and also customer engagement and things like that. So what we are looking at is how the regulated entity can be, you know, accountable. Once the regulated entity is providing a service to a customer, it is the complete responsibility of the entity to ensure transparency and accountability in the way the customer engages with an AI system or service. If I may loosely put it, a black box is typically something which is associated with AI systems: you really do not know what happens inside, and the result is produced.

But as far as the regulated entity dealing with the customers is concerned, we would like this to be not a black box but a glass box. The customer should completely know what they are getting. When they are engaging, they should be clearly told upfront that they are engaging with an AI system. If they so choose, they should have the freedom to opt for a non-AI-based engagement. That is the transparency part. Similarly, for accountability, the institutions should devise their audit systems to capture incremental risks arising out of AI. How does the bias get removed? Is there model drift? Is there model degradation? Does it get addressed periodically?

So those kinds of checks and balances regulated entities need to put in place as part of their board policy, and set up the implementation, along with things like understandability by design, where the process itself should ensure implementation. These are some of the things we have talked about. And over a period of time, we would like this to get addressed, refined, embedded and implemented across their processes.

Bharat

Thank you, Suvendu ji. Deployers are also fast emerging as key custodians of trust in the AI ecosystem. And, you know, frankly, there is a responsibility to the global economy to get AI integration right for large financial services firms such as J.P. Morgan. So how is J.P. Morgan positioning itself in this debate?

Terah Lyons

Well, I think AI is not made useful unless it’s deployed, and it can’t be deployed at scale without trust and transparency. And so the way that we’re thinking about these questions really rests, again, on the strengths of the culture of risk management and oversight that we have grown into in financial services, deploying technology of all sorts, not just AI, but certainly AI more recently, as I mentioned, and having there really be a use-case focus on the risks entailed in every single one of our deployments. I think a lot of the lessons that can be learned in financial services risk management, again, are applicable widely to other sectors, as we’ve talked a little bit about this afternoon, including in AI lifecycle oversight and management, in model risk management guidelines and principles, in the principles and practices of real transparency and auditability that we’ve spoken to up here, and many, many others.

And so I think what that allows is, as we’ve spoken to, that banks and financial service organizations are uniquely positioned in many ways, given the nature of the data estates that we sit on top of, given the necessity of the business model, given customer demand and market demand, and a host of other issues that I would say surround the innovation envelope here. But I think the risk management practices that we have are a huge strength there, too. So, yeah, I would say that that’s all really key to engendering trust with customers and making sure that we’re doing right by the products and services that we’re delivering to them.

Suvendu K. Pati

Providing what information they need to fill in while opening an account, those kinds of summarization effects may not be subjected to a very elaborate degree of scrutiny or risk testing or templated processes. This is what I would feel personally, but yes. And just to make this more of a conversation, I’ll add one additional point, which is that I think it’s important to understand that it’s not just about the data; it’s about the information that we’re getting from the data.

Harshil Mathur

If a large company is competing with a small retailer, let’s say a new supermarket has opened opposite them, it’s hard for the small retailer to compete, because they don’t have the intelligence available to the large supermarket in terms of what products to put in, what things to deploy, what marketing ideas to deploy, and so on. But now you can really open the ChatGPT app, ask it to prepare a business plan for you, tell it how do I fight this, and it can really help you compete. So the advantage of having intelligence on demand really balances the scale compared with what was available before, because it reduces the cost of intelligence.

A large company could always afford that intelligence, but now it’s available to everyone. A similar example: let’s say it’s a farmer on the ground who is unable to figure out which crop he should plant this season, right? And I met companies recently who are essentially doing that: deploying AI models to be available to farmers on the ground, so that, hey, you can ask it, and it can tell you information that is generally not available to you. So I think that’s on the general side of things. Now, if you come to the finance side of things, one of the biggest problems in finance is mis-selling, right? Or fraud. Like, for example, my dad is 70 years old, and I’ve told him, hey, if you’re making any expensive purchase decision, just give me a call.

Because I don’t know if he’s getting defrauded, if he’s facing a digital arrest scam, if he’s being sold insurance he doesn’t need, or a financial instrument he doesn’t need. But, like, AI allows me to put something smarter than me in his own pocket, right? So he doesn’t need to call me now. I taught my dad how to use ChatGPT. Now he opens it up and asks it in his voice, like, I’m going to buy this. Should I buy this or should I not buy this? And I can imagine that a year or two years from now, all of us will have an AI agent who is essentially our assistant.

So, when you’re shopping for something, it’s searching for the best prices online. When you’re buying something, it’s searching for the best features: is this the best product, or is something else the best product? It’s doing research on Reddit, it’s doing research on Twitter, telling you, hey, don’t buy this, buy that. Or: you’re on this website which is clearly mis-selling, which is fraudulent, the price looks too good to be true, so don’t buy it from here. I think having that intelligence available to every person on demand is a massive advantage. And I think the impact of it on society will be fairly positive. I know people are worried about frauds happening because of AI.

And I think that’s true in the short term, while the ecosystem is getting prepared. But in the longer term, frauds and mis-selling and all of that will go down significantly, because everyone will have an intelligent agent who is extremely smart and who can tell things far better than a human can. So I think that can really bring a massive change.

Suvendu K. Pati

Just in case, Harshil, do ask your dad to be aware of hallucinations…

Bharat

Thank you, Harshil. So innovation and commitment are key in any new technology, as we all know. So, Ashutosh, what are some of the promising business models you are excited about in FinTech within the AI space? Which ones do you see gaining more traction in the global south? And in your view as an investor, do some key gaps still exist which are currently unaddressed and could benefit in some way from an AI solution?

Ashutosh Sharma

I’m always excited about interesting ideas, Bharat. Now, with that said, I think the adoption is happening all across the subsectors of financial services. In subsectors where India has naturally been at the forefront of innovation, payments come to mind, right? UPI is a very good example. I think India is leading the innovation wave even with the advent of AI. Right about the time when the Indian e-commerce platforms were getting embedded or connected with the large foundational models, about the same evening, Indian payments companies were launching products, as Harshil said, that could enable you to buy from within the model, or even within the chat experience that you are having in the e-commerce app, Swiggy or Flipkart or whatever you call it.

So within payments, we are at the forefront. Talking generally, I think the most use of AI I see today is in two areas. One is productivity; this is related to the unit economics point that I made previously, and I think that’s happening. But more importantly, also in customer experience. And I’ll give you two examples. One is the use of AI in UPI: we are now moving from this kind of OTP world to a biometric world, wherein, just using your biometrics, you can make a payment, right? In part, that is enabled by AI. And imagine how nice the customer experience will now be with this, rather than waiting for something to come to you.

In lending, almost 60 to 70% of collection for the first 30 days has now moved to an AI-led agent. Us as humans, we get irritated calling 20 people all day. And by the end of the day, the human agent is upset and the customer is upset, and the collection is not happening. Whereas with an AI agent, the agent can be empathetic. The agent will call you and can remember: this is the time when Ashutosh is free, let me call him. So I think we are seeing a lot of movement there in the customer experience domain as well. As for gaps, I think there is one thing where I feel India is slightly behind: the West has probably 50, 60 years of customer data, whereas in India, UPI, credit cards, all that is a new phenomenon. So for us to get to levels of underwriting closer to what the West may enable with AI, that availability of multi-cyclical, deep data may be something we lack. We have a lot of data, hundreds of millions of customers.

But the depth of that data is something that I think we need to consider as well.

Bharat

Well, as they say, data is the new gold. So you need to keep it with you as much as possible. And I think that’s going to be something which is going to be challenging for a country of a billion and 400 million. So, Suvendu ji, are there any engagement pathways which RBI is using to engage and partner with the industry to promote AI adoption in finance? And, you know, in the Indian startup ecosystem, are there any specific initiatives you’ve seen to promote AI adoption that the banking sector can support for this diffusion?

Suvendu K. Pati

Yeah, good point. First of all, during the last couple of years we have had multiple engagements; in fact, we have scheduled monthly engagements with fintechs, titled FinQuery and Finteract. These events take place at very regular intervals, across cities and through a hybrid channel as well, and roughly 2,000-plus entities have engaged with us in the last one and a half years. Specifically on AI, we did a dipstick survey across close to 600 entities, including banks and NBFCs, and a deep engagement of about one hour each with more than 75 entities, to understand their adoption, which areas they see as having potential for implementation, and what challenges they are witnessing.

So there is constant engagement. After the FREE-AI committee report was released on our website in August, we have had around three rounds of consultations with various stakeholders, including fintechs, to take their inputs on board. So it's a continuous process, a constant engagement. I would also like to draw attention to the regulatory sandbox framework, which has been in place since 2019; entities are welcome to partner with us and experiment under the regulatory sandbox whenever they require any regulatory dispensation or relaxation. And as articulated in our recommendations, one of the key constraints we see, especially for the smaller fintechs, is the lack of access to affordable compute infrastructure, as well as the lack of access to data on which they can innovate and build models. So this is top of mind, and we are committed to designing and operationalizing what we would call an AI sandbox. It is not exactly a regulatory sandbox, but it will provide access to data and compute, with the overall aim of democratizing AI across smaller institutions. A bank like JP Morgan or State Bank or HDFC may have enough data, bandwidth, and resources to build their models, but what about the smaller fintechs and other entities?

So with that vision, we will be operationalizing the AI sandbox, which would give these entities access to those resources to innovate. On top of that, we ourselves are building models like MuleHunter.ai, which is already implemented across 26 banks and is being implemented across other entities as well. This engagement is a continuous process, and we would like them to partner with us, submit proposals, and work with us. We also expect the industry bodies, like the self-regulatory organizations (one has already been recognized), to come up with toolkits or benchmarking services against which AI models can test themselves, to verify that they are bias-free and meet transparency expectations and standards.

So the expectation is that the fintech industry itself comes up with those kinds of standards, benchmarks, and toolkits to support innovation.

Bharat

Thank you. As we all know, regulatory engagement is critical to promoting innovation. So, Harshil, for a company such as yours, what are the key regulatory challenges you are facing in the deployment of AI in finance? How does your engagement with government and regulatory bodies actually address these? And do you see any public-private partnership model that could help take the industry to the next level?

Harshil Mathur

See, I think the core aspects of regulation, as sir said, generally don't go into which technology to use. There are general principles of regulation, and you can apply any technology to the same principles. In most cases we have been fairly successful in deploying AI models while meeting the requirements of regulators. The few areas where it sometimes becomes a challenge: we have a very strong data residency requirement in India, rightly so, and a lot of AI models coming from the West don't meet India's data residency requirements today. And the open-source models are largely coming from China, which makes them harder to deploy.

So we don't have enough deployment of the cutting-edge models in India data centers today, and that sometimes delays deployments, because as a regulated company we can't really use them. The good part is that three language models from India were announced at the AI summit today. So for financial companies in India who want to deploy models within India data centers and within Indian boundaries, at least those models are available and can be a starting point, and we are hoping the global companies will bring the cutting-edge models to India data centers as well, so they can be deployed. That's one challenge on the infrastructure itself: the cutting-edge model infrastructure is not available here. We can use it for coding and for multiple internal purposes, but we can't use it for anything that touches customer data or PII until those models are deployed in India data centers, and hopefully that is going to change.

The second aspect, related to it, is that as a financial company, as she said, the biggest challenge is controlling where the data goes and where it flows out. AI models, as somebody said earlier, are a black box.

Once the data enters, you don't know where it comes out and when, and drawing clear boundaries on that is hard. That is one big challenge with LLMs specifically; there are other, more targeted forms of AI models where those guardrails are available and that work fine. LLMs just don't have guardrails on where data goes in and where it comes out. The third aspect is hallucinations. With anything to do with financial data, trust is very, very critical. I'm okay if the system fails 10% of the time, but it should not be wrong 10% of the time. It's okay if the system says, hey, I can't do this analysis.

But if it gives the wrong analysis and you use it as a source of truth and act on it, and then you deliver that information to the customer and say a commitment is successful when it actually isn't, even if that happens 1% of the time, it creates a massive issue for you. So that's the third piece, and it's less to do with regulation; it's what is expected of financial players: you cannot say something that is not true. And LLM models by default can say things that are not true, and even if that happens in 1% or 2% of cases, it can become a massive liability risk for financial companies.

So those are the three big aspects, and solutions are available for some of them. The first is fairly easily solvable; global companies will probably solve it, or Indian sovereign models will get there. The second is partly solvable, because you can put guardrails around it and use the right kinds of AI models where that is possible. The third is a fundamental problem of how LLM models work, so that part is going to be harder to solve. Yes, there are newer models which hallucinate less, but as I said, even if it hallucinates in less than 0.1% of cases, I still can't deploy an LLM model till I'm certain about it.

And I think that part will require us to either use alternate means or wait for LLM models that can solve that problem.

Bharat

Ashutosh, because you're looking at investing in companies across the spectrum, not only in finance but in other areas using artificial intelligence: in your view, what are some of the key regulatory gaps highlighted by your portfolio companies in the fintech sector? And going forward, what progressive regulatory measures can the government consider to promote this more smoothly?

Ashutosh Sharma

I think the RBI has in general been very progressive, not as a regulator but as a guide in this situation. The seven sutras have been really helpful for people to understand at least what the direction of travel is. And one acceptance we need to make is that, just as we are all learning about AI and its use cases, the regulators are also learning, and things are changing fast; therefore the end state of what the regulation looks like may be very different. I have a slightly different policy ask, adding to what Harshil was saying: compute is a bigger problem for my companies than regulation, and so are researchers. I don't think we can solve these through regulation or policy, but you were asking what companies struggle with, and those two things are the bigger problems at this time.

Bharat

Thanks. I'm conscious of the time, so I will use my moderator's prerogative to ask one final round of questions to all our distinguished panelists. What is one big bet you would like to take on how AI will transform finance in the next five years? We start with Suvenduji.

Suvendu K. Pati

Okay. I know time is really up, but yes, it's not a bet, it's a wish list rather, and I'm glad Ashutosh has already covered some of it in a very elaborate way. One thing I would like to see is how AI can bring about substantive improvement in financial inclusion: bringing people to formal institutional credit through alternate-data analytics and new underwriting models. That would be a big unlock for a country like India. The second aspect I would like to emphasize, which Harshil has also touched upon, is that all our fintech apps are now designed for very, very digitally savvy people.

How do we use AI to bring language, voice-based banking, conversational banking, and payments? I don't have to fill a form; I just instruct, and that translates. So people who are not formally educated but are logical-minded, who already use WhatsApp voice to transmit messages, should be able to come on board and, using AI, come into the financial fold. We should focus more research on assistive technologies: for example, for a disabled person, a person who can't see or can't hear, how do we use AI to provide information and make financial services accessible to them in a more efficient manner? These are the areas where this technology is going to play a role, and we would like to see it get to the point where it really bridges the so-called digital divide, which is otherwise at risk of widening.

AI can prove its point there, and I am very, very optimistic about this, but a lot of work needs to be done in these areas.

Bharat

Thank you. Harshil?

Harshil Mathur

I completely agree. I think the ability to bring the cost of servicing down significantly, so that you can deliver personalization at an N-of-1, individual level, can have varied impacts. Like I said, typically in India, for example, when HNIs open a bank account, they don't fill a form; a person comes to them, fills the form for them, just asks for the five documents and a signature, and it's done. But the one who needs this most is the villager, because he really can't fill a form, yet he is asked to stand in line and fill one. AI can allow us to deliver that experience to the villager on the ground. And I think that is going to be the one biggest change in finance: allowing the cost of servicing to come down drastically, personalization to happen at an individual level, and voice-based interactions to drive it.

And as somebody said earlier, that's what's natural to us, natural to Indians, if we can make it all voice-based.

Bharat

Ashutosh?

Ashutosh Sharma

I think AI-led financial services leading us to Viksit Bharat would be my bet.

Bharat

I think that's an aspiration for all of us. And Terah, the one lady on the panel, the last word is yours.

Terah Lyons

I would underscore all the answers already provided. The financial inclusion potential and the accessibility potential here are massive. Imagine a world in which we can not just expand the credit envelope, but put a financial advisor in every single person's pocket that normally only the wealthiest in society today are able to afford. So I look forward to that world coming into being.

Bharat

Thank you.

Suvendu K. Pati

And the last word, if I may slip it in: language. India is a country with diverse languages. We can leverage our languages and have AI play to that strength.

Bharat

Well, I'd like to thank our distinguished panel for a truly enlightening discussion. The topic was supercharging AI adoption in the Global South, and I think many of the thoughts from this panel will go a very long way toward achieving that goal. Thank you very much once again.

Related Resources: Knowledge base sources related to the discussion topics (10)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“John Tass‑Parker framed the financial sector’s AI journey as a shift from ‘frontier AI’ to ‘institutional AI’, stating that trust is not a feature but the business model of finance.”

The knowledge base entry S1 repeats this framing, confirming the shift to institutional AI and the emphasis on trust as a business model.

Additional Context (medium)

“The RBI’s AI governance framework is built around seven principles referred to as “sutras”.”

S67 describes guiding principles called “sutras” in AI governance, showing that the terminology is used, though it does not specify the number of principles.

Additional Context (medium)

“The RBI adopts an enable‑by‑design, technology‑neutral, principle‑based approach to AI regulation.”

Both S63 and S15 discuss principle‑based, technology‑neutral regulatory designs in the financial sector, providing supporting context for the RBI’s stated approach.

Additional Context (medium)

“The RBI’s own AI model, MuleHunter.ai, is already deployed across 26 banks.”

S13 mentions the MuleHunter AI initiative within the financial sector, confirming the model’s existence and its use in governance, though it does not give the exact deployment count.

External Sources (70)
S1
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -Ashutosh Sharma- Investor in India’s fintech ecosystem, described as one of the leading deployers of finance in fintech
S2
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — – John Tass-Parker- Terah Lyons – Terah Lyons- Harshil Mathur
S3
The Power of Satellites in Emergency Alerting and Protecting Lives — Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for this introductory remark. I will…
S4
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — – Ashutosh Sharma- Harshil Mathur – Suvendu K. Pati- Ashutosh Sharma- Harshil Mathur – Terah Lyons- Harshil Mathur
S5
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — -Suvendu Pati- Chief General Manager and Head of FinTech at the Reserve Bank of India
S6
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — – Suvendu K. Pati- Bharat – Suvendu K. Pati- Harshil Mathur
S7
Announcement of New Delhi Frontier AI Commitments — -Bharat: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S8
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S9
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — -Amish Devagon: Role/Title not explicitly mentioned, appears to be an interviewer or journalist conducting the discussio…
S10
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — 407 words | 122 words per minute | Duration: 199 secondss Hello everyone, my name is, oh sorry we’ve got a photographer…
S11
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -John Tass-Parker- Leads policy partnerships at JPMorgan Chase
S12
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S13
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Thank you. First of all, a very good evening to all of you and it’s indeed a great privilege to be here and I thank CBT …
S14
Who Watches the Watchers Building Trust in AI Governance — So there is no end to the story of how regulators should design the regulations. That is the main question. All countrie…
S15
Secure Finance Risk-Based AI Policy for the Banking Sector — The moderator emphasizes that AI governance should not be viewed through a completely different lens but should be integ…
S16
Scammers use fake celebrities to steal millions in crypto fraud — Fraudsters increasingly pretend to be celebrities to deceive people intofake cryptocurrency schemes. Richard Lyons lost …
S17
https://app.faicon.ai/ai-impact-summit-2026/how-the-global-south-is-accelerating-ai-adoption_-finance-sector-insights — And I think that’s a really good point. …
S18
AI tools influence modern personal finance practices — Personal finance assistants powered byAI toolsare increasingly helping users manage budgets, analyse spending, and organ…
S19
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI -enabled system bu…
S20
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — I am very pleased. I believe that our summit will play an important role in the creation of a human -centric, sensitive,…
S21
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances commu…
S22
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S23
Global Perspectives on Openness and Trust in AI — So really competition is at the heart of it, and I don’t see any way where we can forget about market. Thank you. market…
S24
Shaping the Future AI Strategies for Jobs and Economic Development — Discussion point:Economic Growth and Development Applications
S25
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Context-specific deployment focusing on appropriate use cases can unlock both productivity and trust simultaneously
S26
AI That Empowers Safety Growth and Social Inclusion in Action — “investors should ask whether there is clear board level responsibility on AI risk whether executive incentives are alig…
S27
Closing remarks – Charting the path forward — Al Mesmar highlights that as AI systems become more powerful, governing access to computational infrastructure and large…
S28
Advancing Scientific AI with Safety Ethics and Responsibility — The moderator emphasizes the need to design AI safety measures that maintain high standards of rigor while being practic…
S29
AI adoption vs governance: A contradiction in Australian businesses — A study conducted by Datacom and engaged 318 business decision-makers working in Australian organisationshas unveiled a …
S30
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S31
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Overall, the sentiment towards implementing principles and regulation in AI is positive. Although the analysis does not …
S32
Panel Discussion Inclusion Innovation & the Future of AI — “We should presume that existing law is sufficient and that there is some sort of good solution.”[2]. “It should be, the…
S33
WS #187 Bridging Internet AI Governance From Theory to Practice — ### Transparency vs. Practicality Hadia Elminiawi: Thank you. Thank you so much. And I’m happy to be part of this very …
S34
Parliamentary Session 5 Parliamentary Exchange Enhancing Digital Policy Practices — Emphasis on not over-regulating or under-regulating, but having right mechanisms to encourage disclosure and transparenc…
S35
Building Indias Digital and Industrial Future with AI — Speaker 1 highlights a key regulatory challenge where AI systems need to be explainable and accountable, but in security…
S36
How Trust and Safety Drive Innovation and Sustainable Growth — Alexandra Reeve Givens This insight identifies a critical gap in current regulatory approaches – that AI creates an ‘en…
S37
Main Session 2: The governance of artificial intelligence — Claybaugh contends that there are already legal frameworks in place that pre-date ChatGPT covering issues like copyright…
S38
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S39
Do we really need specialised AI regulation? — History demonstrates the resilience of legal principles in adapting to new technologies. For example, when the internet …
S40
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Data residency requirements and lack of cutting-edge model infrastructure in India create deployment barriers Sharma id…
S41
Comprehensive Report: European Approaches to AI Regulation and Governance — This consensus was unexpected because it came from both the regulator’s perspective and practical implementation experie…
S42
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — The level of disagreement was moderate and constructive. Speakers shared common goals of protecting submarine cable infr…
S43
Secure Finance Risk-Based AI Policy for the Banking Sector — Good afternoon to everyone. Distinguished policy makers, regulators, industry leaders, members of the FinTech community,…
S44
Secure Finance Risk-Based AI Policy for the Banking Sector — And these systems are fueled by vast data sets drawn from public and proprietary sources. On this foundation operate lar…
S45
AI driving transformation in financial services — At YourStory’s Tech Leaders’ Conclave, Ankur Pal, Chief Data Scientist at Aplazo,discussedhow AI is transforming the fin…
S46
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Trust and Legitimacy as Core Challenges: The discussion emphasized that while AI capability is advancing rapidly, the re…
S47
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S48
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — The UN High Commissioner for Human Rights argues that AI systems should advance human rights by design, requiring alloca…
S49
The Global Power Shift India’s Rise in AI & Semiconductors — Consensus level:High level of consensus with complementary perspectives rather than conflicting views. The speakers come…
S50
Indias AI Leap Policy to Practice with AIP2 — Summary:The main areas of disagreement center around governance approaches (regulatory vs. flexible frameworks), investm…
S51
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Consensus level:Very high level of consensus with no significant disagreements identified. The alignment spans governmen…
S52
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Hello everyone, my name is, oh sorry we’ve got a photographer here now, so we’re going to take our photo. False start, s…
S53
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — “It’s the legitimacy that is the scarce attribute here.”[5]”It’s actually the business model.”[1]”Because in finance, tr…
S54
Driving Indias AI Future Growth Innovation and Impact — Dr. Mohindra argues that countries need to strike a careful balance between fostering innovation and ensuring responsibl…
S55
AI for Social Empowerment_ Driving Change and Inclusion — He asks how governments and institutions can govern AI responsibly to minimise labour market disruption and ensure a smo…
S56
Who Watches the Watchers Building Trust in AI Governance — Actually, could I? I can ask you to elaborate on that. So where might these financial incentives come from? You mentione…
S57
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S58
Who Watches the Watchers Building Trust in AI Governance — Sure. My panelists have set me up very well to say this. So I think as the International AI Safety Report shows, the cap…
S59
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Respected Chairman, Mr. Lahoti, Mr. Mittal, Mr. Tanglura fellow panelists, industry leaders, policy makers experts, ladi…
S60
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — Emergence of “agentic commerce” He describes a move from generic personalized feeds to truly personal, AI‑generated pro…
S61
Secure Finance Risk-Based AI Policy for the Banking Sector — Kamat explains that Gift City has interoperable sandbox mechanisms with RBI, SEBI, and IRDAI that allow solutions spanni…
S62
Demystifying AI: How to prepare international organisations for AI transformation? — Jovan Kurbalija, Director of Diplo, opened theconversationby framing AI as both a challenge and an opportunity. It’s not…
S63
Open Forum #18 Digital Cooperation for Development Ungis in Action — The WSIS framework was deliberately crafted with a technology-neutral and principle-based design that transcends specifi…
S64
About the Authors — – For instance, if the regulatory objective is to increase network coverage in a rural area and the policy is providing…
S65
Driving Indias AI Future Growth Innovation and Impact — Less regulation preferred to avoid curtailing innovation Rajgopal advocates for minimal regulation to avoid stifling in…
S66
Towards a Safer South Launching the Global South AI Safety Research Network — Mr. Singh emphasizes that while the primary objective is to ensure AI diffusion and benefit more users, this must be don…
S67
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg also referenced observations about the contrast between current AI systems requiring gigawatts of power and hum…
S68
WS #31 Cybersecurity in AI: balancing innovation and risks — Charbel Shbir: Hello. Yes, it is. Hello, my name is Charbel Shbir. I’m president of Lebanese ISOC. Regarding your q…
S69
State of Play: AI Governance / DAVOS 2025 — Arvind Krishna: Look, a simple perspective on this. If you lead to too much heavy-handed regulation, you will lead to…
S70
Importance of Professional standards for AI development and testing — Don Gotterbarn: Thank you, Stephen. The previous assertion to Stephen’s that says essentially, because there’s differenc…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
J
John Tass-Parker
1 argument · 122 words per minute · 407 words · 199 seconds
Argument 1
Trust as the core business model of finance, not just a feature
EXPLANATION
John emphasizes that in the financial sector, trust is not an optional attribute but the very foundation of the business model. Institutions will only adopt AI systems they can rely on, and senior leadership can only scale solutions that their boards can govern.
EVIDENCE
He states that “trust is not a feature” but “actually the business model” and notes that “Institutions only absorb systems they trust” while the C-suite, boards, and regulators can only act on systems that demonstrate reliability, auditability, and resilience [5-10].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of public trust for AI in finance is highlighted in discussions of consumer-centric safeguards and transparent disclosure as essential for maintaining trust [S12], and broader work on building trust in AI governance reinforces the view that trust is foundational rather than optional [S14].
MAJOR DISCUSSION POINT
Transition to Institutional AI and the Central Role of Trust
AGREED WITH
Terah Lyons, Suvendu K. Pati, Harshil Mathur, Ashutosh Sharma
T
Terah Lyons
4 arguments · 171 words per minute · 796 words · 277 seconds
Argument 1
Trusted AI enables fraud detection, compliance, and market operations
EXPLANATION
Terah outlines the most impactful AI use cases at JPMorgan Chase, highlighting how trusted AI is applied to combat fraud, ensure regulatory compliance, and improve market‑related processes. These applications demonstrate the tangible value of trustworthy AI in finance.
EVIDENCE
She lists “fraud and scams remediation”, “payments”, “markets” and “compliance” as the key areas where AI has been deployed at scale within the bank [78-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
JPMorgan Chase’s long-standing AI deployments across fraud detection, payments, markets and compliance are cited as concrete examples of trusted AI delivering operational value in finance [S10].
MAJOR DISCUSSION POINT
Transition to Institutional AI and the Central Role of Trust
AGREED WITH
John Tass-Parker, Suvendu K. Pati, Harshil Mathur, Ashutosh Sharma
Argument 2
Existing financial‑sector regulations (consumer protection, IT outsourcing) already cover many AI safety aspects
EXPLANATION
Terah points out that the principle‑based, technology‑neutral regulatory framework already in place for consumer protection and IT outsourcing provides a safety net for AI deployments, reducing the need for new AI‑specific rules.
EVIDENCE
She remarks that the “principles-based technology-neutral approach” of regulators has “allowed banks to experiment” and that existing consumer protection and IT-outsourcing guidelines already address many safety concerns [82-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulators emphasize that AI governance can be embedded within existing consumer-protection and technology-neutral frameworks, reducing the need for separate AI-specific rules [S15]; similar points about consumer-centric safeguards underline that many safety concerns are already addressed by current regulations [S12].
MAJOR DISCUSSION POINT
Regulatory Framework and Guidance for AI in Finance
AGREED WITH
Suvendu K. Pati, John Tass-Parker
DISAGREED WITH
Suvendu K. Pati, Harshil Mathur
Argument 3
AI drives fraud and scam remediation, payments optimization, and compliance monitoring
EXPLANATION
Terah reiterates that AI’s most impactful deployments in finance revolve around reducing fraud, streamlining payments, and strengthening compliance monitoring, underscoring AI’s operational importance across the sector.
EVIDENCE
She again cites the same set of use cases (fraud and scams remediation, payments, markets, and compliance) as the primary areas where AI adds value at JPMorgan Chase [78-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence of AI use for fraud remediation, payments, markets and compliance at JPMorgan Chase supports the claim that these are the most impactful AI use cases in finance [S10].
MAJOR DISCUSSION POINT
Practical Use Cases and Business Impact of AI in Finance
Argument 4
Personalized, “N=1” financial advice delivered via AI assistants to every individual
EXPLANATION
Terah envisions a future where AI acts as a personal financial advisor for every person, delivering highly tailored advice that was previously affordable only to the wealthiest, thereby democratizing financial expertise.
EVIDENCE
She says, “Imagine a world in which we can not just expand the credit envelope, but put a financial advisor in every single person’s pocket that normally only the wealthiest in society today are able to afford” [371-372].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent analyses of AI-powered personal finance assistants (e.g., ChatGPT, Gemini, Copilot) illustrate how AI can provide individualized budgeting, spending analysis and document organization for everyday users [S18].
MAJOR DISCUSSION POINT
Future Vision and Strategic Bets for AI in Finance
Suvendu K. Pati
6 arguments · 149 words per minute · 2325 words · 933 seconds
Argument 1
Deployers must move from “black‑box” to “glass‑box” AI, ensuring transparency for customers
EXPLANATION
Suvendu argues that regulated entities must make AI systems transparent to customers, turning opaque “black‑box” models into “glass‑box” ones where users are informed they are interacting with AI and can opt for non‑AI alternatives.
EVIDENCE
He explains that customers should be told upfront they are engaging with an AI system, offered a non-AI option, and that institutions must implement audit systems to monitor bias, drift, and degradation, turning the black box into a glass box [180-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for converting opaque AI systems into transparent “glass-box” solutions are echoed in discussions on trusted AI at scale, emphasizing visibility into data sources and model training [S19]; algorithmic transparency requirements are also highlighted in broader AI governance forums [S22].
MAJOR DISCUSSION POINT
Transition to Institutional AI and the Central Role of Trust
AGREED WITH
John Tass-Parker, Terah Lyons, Harshil Mathur, Ashutosh Sharma
DISAGREED WITH
Harshil Mathur, Ashutosh Sharma
Argument 2
RBI’s tech‑neutral, principle‑based approach that nudges innovation while managing risk
EXPLANATION
Suvendu describes the Reserve Bank of India’s (RBI) strategy of remaining technology‑agnostic while using principle‑based guidelines to encourage AI innovation and simultaneously mitigate risks such as bias and accountability.
EVIDENCE
He notes that the RBI’s approach is “tech neutral” and “tech agnostic”, focusing on safety and consumer protection irrespective of technology, thereby nudging innovation while managing risk [38-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Reserve Bank of India’s progressive, technology-neutral, principles-based AI framework is documented as a key regulatory approach that balances innovation with risk mitigation [S1][S10].
MAJOR DISCUSSION POINT
Regulatory Framework and Guidance for AI in Finance
AGREED WITH
Terah Lyons, John Tass-Parker
DISAGREED WITH
Terah Lyons, Harshil Mathur
Argument 3
Adoption of the RBI’s seven “sutras” as a national AI governance standard
EXPLANATION
Suvendu highlights that the RBI’s seven AI governance principles, termed “sutras”, have been formally adopted by the Indian government and serve as a cross‑sectoral standard for responsible AI use.
EVIDENCE
He states that the seven principles have been adopted by the government of India for implementation across sectors, providing a generic yet accepted AI governance framework [56-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RBI’s seven AI governance “sutras” have been formally adopted by the Indian government as a cross-sectoral standard, as reported in multiple sources describing the sutras and their adoption [S1][S5].
MAJOR DISCUSSION POINT
Regulatory Framework and Guidance for AI in Finance
Argument 4
Ongoing industry engagement through FinQuery/Finteract and the creation of an AI sandbox to democratize data and compute
EXPLANATION
Suvendu outlines RBI’s continuous outreach activities, including monthly FinQuery/Finteract events and a proposed AI sandbox that will give smaller fintechs access to data and compute resources, fostering inclusive AI innovation.
EVIDENCE
He describes monthly FinQuery/Finteract engagements with over 2,000 entities, a dipstick survey of 600+ firms, multiple consultations, and plans for an AI sandbox that provides data and compute to democratize AI for smaller institutions [281-294].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
RBI’s regular FinQuery/Finteract engagements with thousands of fintechs and its planned AI sandbox to provide data and compute resources are detailed in the discussion of RBI’s industry outreach activities [S10].
MAJOR DISCUSSION POINT
Regulatory Framework and Guidance for AI in Finance
AGREED WITH
Harshil Mathur, Ashutosh Sharma
Argument 5
Board‑level AI governance policies, audit frameworks, and “glass‑box” transparency requirements
EXPLANATION
Suvendu stresses that financial institutions need board‑level AI policies, robust internal audit mechanisms, and transparency measures to capture incremental AI risks and ensure accountability throughout the AI lifecycle.
EVIDENCE
He mentions the need for a liability and accountability framework, board-level AI governance, internal audit assurance, and checks for bias, drift, and degradation as part of a comprehensive governance structure [52-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations for board-level AI policies, internal audit assurance and transparency (glass-box) align with guidance on embedding AI governance within existing risk frameworks and ensuring accountability [S12][S15][S19].
MAJOR DISCUSSION POINT
Best Practices and Governance for Responsible AI Deployment
AGREED WITH
Ashutosh Sharma, Harshil Mathur
Argument 6
AI as a catalyst for deep financial inclusion through language‑ and voice‑driven services
EXPLANATION
Suvendu envisions AI unlocking financial inclusion by providing language‑ and voice‑based banking services, enabling uneducated or linguistically diverse populations to access formal credit and other financial products.
EVIDENCE
He calls for research on assistive technologies, language- and voice-driven banking, and notes that AI can bring uneducated but literate users into the financial fold via conversational interfaces [340-349].
MAJOR DISCUSSION POINT
Future Vision and Strategic Bets for AI in Finance
AGREED WITH
Ashutosh Sharma, Harshil Mathur, John Tass-Parker
DISAGREED WITH
Ashutosh Sharma, Harshil Mathur
Harshil Mathur
5 arguments · 216 words per minute · 2189 words · 606 seconds
Argument 1
AI’s ability to process massive data volumes must be paired with reliable, auditable outcomes
EXPLANATION
Harshil points out that finance deals with huge data sets, and while AI can handle them at scale, the outputs must be reliable, auditable, and subject to rigorous risk controls to be acceptable in regulated environments.
EVIDENCE
He explains that “large volumes of data is generally harder for humans to skim through” and that AI enables analysis at “1000x more” than traditional tools, emphasizing the need for reliable, auditable outcomes [134-141].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for reliable, auditable AI outputs in high-volume financial data processing is reinforced by calls for consumer-centric safeguards and integrated AI governance within existing regulatory structures [S12][S15].
MAJOR DISCUSSION POINT
Transition to Institutional AI and the Central Role of Trust
AGREED WITH
Suvendu K. Pati, Ashutosh Sharma
Argument 2
Agentic, voice‑first commerce unlocks mass‑market adoption for billions of Indian consumers
EXPLANATION
Harshil describes how current Indian e‑commerce experiences are not natural for many users, and proposes agentic, voice‑first commerce as a way to bring the remaining 90% of consumers online, especially for payments and insurance.
EVIDENCE
He cites that only 10 million of 300-400 million UPI users conduct 70% of online commerce, explains the need for conversational commerce, and details how voice-first, multilingual agents can bridge the gap for travel, insurance, and other services [144-166].
MAJOR DISCUSSION POINT
Practical Use Cases and Business Impact of AI in Finance
AGREED WITH
Suvendu K. Pati, Ashutosh Sharma, John Tass-Parker
DISAGREED WITH
Suvendu K. Pati, Ashutosh Sharma
Argument 3
Data residency rules limit use of foreign LLMs; need for India‑based model infrastructure
EXPLANATION
Harshil highlights that Indian data‑residency regulations prevent the use of many foreign large language models, creating a need for domestically hosted AI models and compute infrastructure.
EVIDENCE
He notes that strong data-residency requirements block foreign LLMs, mentions the lack of cutting-edge models in Indian data centres, and points to recent Indian language models as a possible solution [303-308].
MAJOR DISCUSSION POINT
Challenges, Gaps, and Technical Constraints
AGREED WITH
Suvendu K. Pati, Ashutosh Sharma
DISAGREED WITH
Suvendu K. Pati, Terah Lyons
Argument 4
Democratizing intelligence to empower small retailers, farmers, and villagers, reducing service costs
EXPLANATION
Harshil argues that AI can level the playing field for small retailers and farmers by providing on‑demand intelligence, reducing costs, and enabling personalized services that were previously affordable only to large firms.
EVIDENCE
He gives examples of a small retailer competing with a supermarket using AI-generated business plans, and a farmer receiving AI-driven crop advice, illustrating how AI can democratize intelligence and lower service costs [214-218][219-224].
MAJOR DISCUSSION POINT
Future Vision and Strategic Bets for AI in Finance
Argument 5
Risk of model hallucinations and inaccurate outputs; even low error rates can create large liability
EXPLANATION
Harshil warns that large language models can produce hallucinated or incorrect answers, and even a 0.1% error rate can translate into significant financial liability, making such models risky for regulated financial services.
EVIDENCE
He explains that while newer models hallucinate less, any false output, however rare, can lead to massive liability, especially when the model is used as a source of truth for customers [317-322].
MAJOR DISCUSSION POINT
Challenges, Gaps, and Technical Constraints
DISAGREED WITH
Suvendu K. Pati, Ashutosh Sharma
Ashutosh Sharma
6 arguments · 138 words per minute · 1228 words · 530 seconds
Argument 1
AI enables “thin‑file” underwriting for the un‑formalised Indian credit market, expanding financial inclusion
EXPLANATION
Ashutosh explains that AI can process unstructured data to enrich thin credit files, allowing lenders to underwrite a large informal segment of the Indian economy that previously lacked sufficient data.
EVIDENCE
He describes the “thin file” issue in Indian credit, and how AI’s ability to use unstructured data can quickly and cost-effectively create a “thick file” for better underwriting [115-118].
MAJOR DISCUSSION POINT
Practical Use Cases and Business Impact of AI in Finance
AGREED WITH
Suvendu K. Pati, Harshil Mathur, John Tass-Parker
DISAGREED WITH
Suvendu K. Pati, Harshil Mathur
Argument 2
Productivity gains from AI‑augmented analytics lower operating costs across banking and fintech
EXPLANATION
Ashutosh notes that AI improves unit economics by reducing operating expenses, thereby making financial businesses healthier and more productive.
EVIDENCE
He links AI to productivity improvements that affect operating costs, referencing the Indian credit market’s OPEX of $60-$100 billion and stating that AI’s impact on productivity is just the beginning of the journey [261-263].
MAJOR DISCUSSION POINT
Practical Use Cases and Business Impact of AI in Finance
Argument 3
Human‑in‑the‑loop model to retain oversight over AI‑generated decisions
EXPLANATION
Ashutosh advocates keeping a human reviewer in the decision loop, ensuring that AI‑generated outputs are validated before final action, especially in high‑impact financial transactions.
EVIDENCE
He states that good fintech practice is to keep a human in the loop, where technology prepares a file but a human makes the final decision, and also mentions the need to follow DPDP guardrails for sensitive data [127-132].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Best-practice guidance stresses the importance of human intervention mechanisms and appeal processes to maintain trust and oversight in AI-driven decisions [S12].
MAJOR DISCUSSION POINT
Best Practices and Governance for Responsible AI Deployment
AGREED WITH
Suvendu K. Pati, Harshil Mathur
DISAGREED WITH
Harshil Mathur
Argument 4
Strict adherence to data‑privacy regulations (e.g., DPDP) when handling sensitive financial data
EXPLANATION
Ashutosh stresses that fintechs must continuously comply with India’s data‑privacy and protection framework (DPDP) when processing sensitive financial information, as part of responsible AI deployment.
EVIDENCE
He emphasizes that fintechs must follow DPDP guardrails at all times while handling sensitive data, noting this as a core best practice [127-132].
MAJOR DISCUSSION POINT
Best Practices and Governance for Responsible AI Deployment
Argument 5
Limited access to affordable compute and high‑quality data hampers fintech innovation
EXPLANATION
Ashutosh points out that many fintechs struggle more with the lack of affordable compute resources and high‑quality data than with regulatory hurdles, limiting their ability to innovate with AI.
EVIDENCE
He remarks that “compute for my companies is a bigger problem than regulation” and that data availability is also a challenge for fintech innovation [333-334].
MAJOR DISCUSSION POINT
Challenges, Gaps, and Technical Constraints
AGREED WITH
Suvendu K. Pati, Harshil Mathur
Argument 6
AI‑led financial services (Viksit Bharat) as the next transformative business model
EXPLANATION
Ashutosh proposes that AI‑driven financial services, which he terms “Viksit Bharat”, will become the next major business model, reshaping how financial products are delivered in India.
EVIDENCE
He succinctly states, “AI-led financial services (Viksit Bharat) as the next transformative business model” [364].
MAJOR DISCUSSION POINT
Future Vision and Strategic Bets for AI in Finance
Agreements
Agreement Points
Trust, transparency and auditability are essential foundations for AI deployment in finance
Speakers: John Tass-Parker, Terah Lyons, Suvendu K. Pati, Harshil Mathur, Ashutosh Sharma
Trust as the core business model of finance, not just a feature
Trusted AI enables fraud detection, compliance, and market operations
Deployers must move from “black‑box” to “glass‑box” AI, ensuring transparency for customers
AI’s ability to process massive data volumes must be paired with reliable, auditable outcomes
Human‑in‑the‑loop model to retain oversight over AI‑generated decisions
All speakers stress that AI systems in the financial sector must be trustworthy, transparent and auditable – trust is the business model (John) [5-7]; trusted AI is needed for fraud, compliance and markets (Terah) [78-81]; regulators and deployers should turn opaque models into “glass-box” systems that disclose AI interaction to customers (Suvendu) [180-188]; large-scale data processing must produce reliable, auditable results (Harshil) [134-141][317-322]; and human oversight remains critical (Ashutosh) [127-132].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with regulator and industry emphasis on building trust through transparent, auditable AI systems, as highlighted in board-level governance guidance [S26] and the broader call for trust and legitimacy in financial AI [S46]; it also reflects consensus that transparency mechanisms are needed without over-regulation [S34].
A principle‑based, technology‑neutral regulatory approach is sufficient for governing AI in finance
Speakers: Terah Lyons, Suvendu K. Pati, John Tass-Parker
Existing financial‑sector regulations (consumer protection, IT outsourcing) already cover many AI safety aspects
RBI’s tech‑neutral, principle‑based approach that nudges innovation while managing risk
Regulators can only enable what they can supervise
Terah notes that current consumer-protection and IT-outsourcing guidelines already address AI safety concerns (Terah) [82-84]; Suvendu describes RBI’s tech-neutral, principle-based framework that encourages innovation while managing risk (Suvendu) [38-41]; John adds that regulators can only act within what they can supervise (John) [9-10]. Together they agree that new AI-specific rules are not immediately required.
POLICY CONTEXT (KNOWLEDGE BASE)
The view mirrors the consensus among regulators and industry that existing principle-based frameworks can cover AI risks, as noted in multi-jurisdictional discussions favouring targeted interventions over new legislation [S30], the advocacy for principle-based regulation [S31], and the argument that existing legal regimes are adaptable to AI [S37][S39].
AI is a catalyst for deep financial inclusion, especially through language, voice and thin‑file underwriting
Speakers: Suvendu K. Pati, Ashutosh Sharma, Harshil Mathur, John Tass-Parker
AI as a catalyst for deep financial inclusion through language‑ and voice‑driven services
AI enables “thin‑file” underwriting for the un‑formalised Indian credit market, expanding financial inclusion
Agentic, voice‑first commerce unlocks mass‑market adoption for billions of Indian consumers
If we want AI to drive productivity for small business, for farmers, for teachers, for local government, for state government, for international, across the global south, then trusted deployment is what unlocks it
Suvendu envisions AI-driven language and voice services bridging the digital divide (Suvendu) [340-349]; Ashutosh explains AI can enrich thin credit files to reach the informal sector (Ashutosh) [115-118]; Harshil describes voice-first, conversational commerce as the next wave to bring billions online (Harshil) [144-166]; John links AI-enabled productivity for small businesses and the global south to trusted deployment (John) [16-18]. All agree AI will expand inclusion.
POLICY CONTEXT (KNOWLEDGE BASE)
This reflects industry narratives that AI can expand credit access via alternative data and conversational interfaces, exemplified by case studies on AI-driven inclusive finance solutions [S45] and strategic emphasis on inclusion in India’s AI agenda [S49].
Industry engagement, sandbox initiatives and compute infrastructure are needed to democratize AI innovation
Speakers: Suvendu K. Pati, Harshil Mathur, Ashutosh Sharma
Ongoing industry engagement through FinQuery/Finteract and the creation of an AI sandbox to democratize data and compute
Data residency rules limit use of foreign LLMs; need for India‑based model infrastructure
Limited access to affordable compute and high‑quality data hampers fintech innovation
Suvendu details regular FinQuery/Finteract events and plans for an AI sandbox to give smaller fintechs data and compute access (Suvendu) [281-294]; Harshil points out data-residency constraints that block foreign LLMs and calls for India-hosted models (Harshil) [303-308]; Ashutosh highlights that compute affordability and data quality are bigger hurdles than regulation for fintechs (Ashutosh) [333-334]. They concur on the need for supportive infrastructure and engagement.
POLICY CONTEXT (KNOWLEDGE BASE)
The need for sandboxing and shared compute resources is echoed in calls for governing access to computational infrastructure [S27] and identification of compute and talent gaps as primary barriers in the Global South [S40]; regulators also stress collaborative innovation environments [S41].
Robust board‑level AI governance and audit frameworks are required for responsible AI use
Speakers: Suvendu K. Pati, Ashutosh Sharma, Harshil Mathur
Board‑level AI governance policies, audit frameworks, and “glass‑box” transparency requirements
Human‑in‑the‑loop model to retain oversight over AI‑generated decisions
AI’s ability to process massive data volumes must be paired with reliable, auditable outcomes
Suvendu calls for board-level AI policies, internal audit assurance and glass-box transparency (Suvendu) [52-55][180-188]; Ashutosh advocates keeping a human in the decision loop to maintain oversight (Ashutosh) [127-132]; Harshil stresses that AI outputs must be reliable and auditable, especially when handling large data sets (Harshil) [134-141]. All stress governance structures at the institutional level.
POLICY CONTEXT (KNOWLEDGE BASE)
Board-level responsibility and auditability are highlighted as essential in governance frameworks for AI risk management [S26] and in broader trust-and-legitimacy requirements for financial AI [S46].
Similar Viewpoints
Both see the current principle‑based, technology‑neutral regulatory regime as adequate for governing AI, reducing the need for new AI‑specific rules [82-84][38-41].
Speakers: Terah Lyons, Suvendu K. Pati
Existing financial‑sector regulations (consumer protection, IT outsourcing) already cover many AI safety aspects
RBI’s tech‑neutral, principle‑based approach that nudges innovation while managing risk
Both identify compute and data infrastructure constraints—whether due to residency rules or lack of affordable compute—as the primary bottleneck for AI adoption in Indian fintechs [303-308][333-334].
Speakers: Harshil Mathur, Ashutosh Sharma
Data residency rules limit use of foreign LLMs; need for India‑based model infrastructure
Limited access to affordable compute and high‑quality data hampers fintech innovation
Both stress that trust is foundational to finance and must be embedded through transparent, accountable AI systems [5-7][180-188].
Speakers: John Tass-Parker, Suvendu K. Pati
Trust as the core business model of finance, not just a feature
Deployers must move from “black‑box” to “glass‑box” AI, ensuring transparency for customers
Unexpected Consensus
Regulators and private firms agree that existing principle‑based regulations are sufficient for AI governance
Speakers: Terah Lyons, Suvendu K. Pati, John Tass-Parker
Existing financial‑sector regulations (consumer protection, IT outsourcing) already cover many AI safety aspects
RBI’s tech‑neutral, principle‑based approach that nudges innovation while managing risk
Regulators can only enable what they can supervise
It is often assumed that new AI-specific regulations are needed, yet the regulator (Suvendu) and private sector representatives (Terah, John) all express confidence that the current principle-based, tech-neutral framework already addresses key AI risks, indicating an unexpected alignment of views across sectors [82-84][38-41][9-10].
POLICY CONTEXT (KNOWLEDGE BASE)
This agreement is documented in cross-sector dialogues where regulators and industry participants expressed confidence in existing principle-based regimes and opposed sweeping new AI statutes [S30][S31][S32].
Consensus on voice‑first, conversational commerce as the primary path to mass AI adoption in India
Speakers: Harshil Mathur, Ashutosh Sharma, Suvendu K. Pati
Agentic, voice‑first commerce unlocks mass‑market adoption for billions of Indian consumers
AI as a catalyst for deep financial inclusion through language‑ and voice‑driven services
AI enables “thin‑file” underwriting for the un‑formalised Indian credit market, expanding financial inclusion
While voice-first commerce was presented by Harshil as a product strategy, both Suvendu (language/voice inclusion) and Ashutosh (thin-file underwriting enabled by AI) converge on the idea that conversational, voice-driven interfaces are essential to bring the large unserved population into formal finance, an alignment not explicitly anticipated at the start of the discussion [144-166][340-349][115-118].
POLICY CONTEXT (KNOWLEDGE BASE)
Indian policy discussions emphasize voice-first interfaces as a driver of AI uptake, aligning with the consensus on India’s AI strategy that prioritises conversational commerce [S49].
Overall Assessment

The panel shows strong convergence around four core themes: (1) trust, transparency and auditability are non‑negotiable for AI in finance; (2) a principle‑based, technology‑neutral regulatory regime is viewed as adequate; (3) AI is seen as a powerful lever for financial inclusion through language, voice and thin‑file underwriting; (4) democratizing access to data, compute and sandbox environments, together with robust board‑level governance, is essential for responsible scaling.

High consensus – most speakers, across regulator, large bank, fintech and investor perspectives, reaffirm the same foundational principles, suggesting that coordinated policy and industry actions can move forward without major ideological friction.

Differences
Different Viewpoints
Extent of need for new AI‑specific regulation versus relying on existing principle‑based, tech‑neutral frameworks
Speakers: Suvendu K. Pati, Terah Lyons, Harshil Mathur
RBI’s tech‑neutral, principle‑based approach that nudges innovation while managing risk
Existing financial‑sector regulations (consumer protection, IT outsourcing) already cover many AI safety aspects
Data residency rules limit use of foreign LLMs; need for India‑based model infrastructure
Suvendu argues that the RBI’s technology-agnostic, principle-based guidance is sufficient and that regulation should remain limited to guidance for deployers [38-41][175-176]. Terah reinforces this view, saying that existing consumer-protection and IT-outsourcing guidelines already address most AI safety concerns, reducing the need for new AI-specific rules [82-84]. Harshil counters by highlighting practical regulatory constraints – strict data-residency requirements that block foreign large language models and the need for domestically hosted AI infrastructure – suggesting that additional, more specific regulatory measures may be required [303-308].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors ongoing deliberations about whether to craft dedicated AI laws or adapt existing frameworks, as seen in regulator-industry consensus favouring principle-based approaches [S30][S31][S32] and arguments for complementing rather than replacing current regulations [S38][S39].
Approach to AI transparency – mandatory “glass‑box” disclosure versus less explicit transparency requirements
Speakers: Suvendu K. Pati, Harshil Mathur, Ashutosh Sharma
Deployers must move from “black‑box” to “glass‑box” AI, ensuring transparency for customers
Risk of model hallucinations and inaccurate outputs; even low error rates can create large liability
Human‑in‑the‑loop model to retain oversight over AI‑generated decisions
Suvendu calls for converting opaque models into “glass-box” systems, requiring that customers be told they are interacting with AI and be offered a non-AI alternative, with audit mechanisms for bias, drift and degradation [180-188]. Harshil focuses on the reliability dimension, warning that hallucinations, even at a 0.1% error rate, pose massive liability, but does not explicitly demand customer-facing disclosure, emphasizing internal safeguards instead [317-322]. Ashutosh advocates keeping a human reviewer in the decision loop to maintain oversight, a different method of ensuring trust without necessarily exposing the model internals to users [127-132].
POLICY CONTEXT (KNOWLEDGE BASE)
Tensions between full algorithmic disclosure and practical limits are discussed in transparency-versus-practicality debates [S33], calls for balanced disclosure mechanisms instead of black-box bans [S34], and concerns about security implications of explainability [S35][S36].
Level of human involvement in AI‑driven financial processes
Speakers: Ashutosh Sharma, Harshil Mathur
Human‑in‑the‑loop model to retain oversight over AI‑generated decisions
Agentic, voice‑first commerce unlocks mass‑market adoption for billions of Indian consumers
Ashutosh stresses that fintechs should keep a human in the loop, with AI preparing files that are ultimately approved by a person, to preserve oversight and comply with data-privacy guardrails [127-132]. Harshil envisions a future where AI agents handle interactions directly: voice-first, conversational commerce that can serve millions without human mediation, arguing that this will dramatically expand adoption and lower service costs [144-166]. The two positions differ on how much human control should remain in the loop.
Primary strategy for achieving deep financial inclusion
Speakers: Suvendu K. Pati, Ashutosh Sharma, Harshil Mathur
AI as a catalyst for deep financial inclusion through language‑ and voice‑driven services
AI enables “thin‑file” underwriting for the un‑formalised Indian credit market, expanding financial inclusion
Agentic, voice‑first commerce unlocks mass‑market adoption for billions of Indian consumers
Suvendu highlights language- and voice-based conversational banking as the key to bring uneducated or linguistically diverse users into formal finance [340-349]. Ashutosh points to AI’s ability to enrich thin credit files using unstructured data, thereby underwriting the large informal sector [115-118]. Harshil focuses on agentic, voice-first commerce that can bring the remaining 90% of Indian consumers online, especially for payments and insurance [144-166]. All agree on the inclusion goal but propose different technological pathways.
Unexpected Differences
Perceived regulatory sufficiency versus practical constraints of data residency and model hallucinations
Speakers: Suvendu K. Pati, Harshil Mathur
RBI’s tech‑neutral, principle‑based approach that nudges innovation while managing risk
Risk of model hallucinations and inaccurate outputs; even low error rates can create large liability
Suvendu maintains that a technology-agnostic, principle-based framework is enough and that guidance, not regulation, should steer AI deployment [38-41][175-176]. Harshil, however, raises concrete operational hurdles (strict data-residency rules that block foreign LLMs and the danger of hallucinations causing financial liability) even if the overall regulatory stance is permissive. The clash between a high-level regulatory philosophy and on-the-ground technical-legal constraints was not anticipated from the earlier parts of the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
While regulators view existing rules as adequate, practitioners cite data residency mandates and model reliability issues as implementation challenges, highlighted in analyses of AI adoption barriers in the Global South [S40] and discussions on trust and legitimacy that stress practical constraints [S46].
Overall Assessment

The panel broadly concurs that trustworthy AI is essential for finance and that AI can drive inclusion, productivity and new business models. Disagreements surface around how much new regulation is needed, the degree of transparency required, the role of human oversight, and which technological pathway should lead inclusion efforts. While the disagreements are substantive, they do not fracture the shared vision; they reflect different operational priorities and risk appetites.

Moderate – the speakers diverge on implementation details (regulation, transparency, human‑in‑the‑loop, inclusion strategy) but share the same high‑level objectives. This suggests that policy and industry initiatives will need to accommodate multiple approaches, balancing principle‑based guidance with targeted regulatory adjustments and offering flexible pathways for inclusion.

Partial Agreements
All speakers share the overarching goal of building trustworthy AI in finance—recognising that trust is fundamental to the sector’s business model and that AI must be reliable, auditable and governed. However, they diverge on the mechanisms: John frames trust as the business model itself [5-7]; Suvendu pushes for “glass‑box” transparency and board‑level policies [180-188][52-55]; Terah argues that existing regulations already provide sufficient safety nets [82-84]; Ashutosh insists on a human‑in‑the‑loop safeguard [127-132]; Harshil stresses the need for auditable outcomes when handling massive data sets [134-141].
Speakers: John Tass‑Parker, Suvendu K. Pati, Terah Lyons, Ashutosh Sharma, Harshil Mathur
Trust as the core business model of finance, not just a feature
Deployers must move from “black‑box” to “glass‑box” AI, ensuring transparency for customers
Existing financial‑sector regulations (consumer protection, IT outsourcing) already cover many AI safety aspects
Human‑in‑the‑loop model to retain oversight over AI‑generated decisions
AI’s ability to process massive data volumes must be paired with reliable, auditable outcomes
All three aim to harness AI to broaden financial inclusion in India. Suvendu emphasizes conversational, multilingual interfaces; Ashutosh focuses on data‑driven credit underwriting for thin‑file borrowers; Harshil promotes agentic, voice‑first commerce to reach the unserved masses. The shared inclusion objective is clear, but the preferred technological lever differs.
Speakers: Suvendu K. Pati, Ashutosh Sharma, Harshil Mathur
AI as a catalyst for deep financial inclusion through language‑ and voice‑driven services
AI enables “thin‑file” underwriting for the un‑formalised Indian credit market, expanding financial inclusion
Agentic, voice‑first commerce unlocks mass‑market adoption for billions of Indian consumers
Takeaways
Key takeaways
Finance is moving from frontier AI to an era of institutional AI where trust, legitimacy, auditability and resilience are the primary differentiators.
Regulators (RBI) adopt a tech‑neutral, principle‑based approach that nudges innovation while managing risk, embodied in the seven “sutras” now endorsed by the Indian government.
Trusted AI use cases in banking include fraud and scam remediation, payments optimization, compliance monitoring and market operations.
AI can dramatically improve financial inclusion in the Global South by enabling thin‑file underwriting, voice‑first conversational banking, and personalized “N=1” advice.
Best‑practice governance requires board‑level AI policies, audit frameworks, glass‑box transparency, human‑in‑the‑loop oversight and strict data‑privacy compliance.
Key technical constraints—data‑residency rules, limited affordable compute, and model hallucination risks—must be addressed before large‑scale LLM deployment in finance.
Resolutions and action items
RBI to operationalise an AI sandbox that provides fintechs with access to data and compute resources for model development and testing.
Continuation of regular industry engagement through FinQuery/Finteract sessions and multi‑round consultations on AI governance.
Financial institutions to adopt the RBI’s seven AI governance sutras, embed board‑level AI policies, and implement glass‑box transparency for customer‑facing AI systems.
Fintechs and banks to retain human‑in‑the‑loop controls for critical decisions such as underwriting and fraud detection.
RBI to encourage self‑regulatory organisations to develop toolkits, benchmarking services and audit standards for AI models.
Unresolved issues
How to ensure sufficient India‑based large language model infrastructure that complies with data‑residency requirements.
Effective mitigation of LLM hallucinations and inaccurate outputs, especially where even low error rates can create large liability.
Access to deep, multi‑cycle customer data for robust underwriting in India’s thin‑file market.
Clarification of regulatory expectations for AI model developers versus deployers, and the extent of liability for model errors.
Scalable solutions for providing affordable compute to smaller fintechs beyond the proposed AI sandbox.
Suggested compromises
RBI’s principle‑based, technology‑agnostic framework that encourages innovation while imposing risk‑based safeguards.
Requirement that AI systems be presented as “glass‑box” rather than “black‑box” to customers, balancing transparency with operational efficiency.
Human‑in‑the‑loop approach that allows automation benefits while preserving oversight for high‑risk decisions.
Regulatory sandbox and AI sandbox mechanisms that give fintechs temporary regulatory relief and resource access while maintaining overall supervisory control.
Thought Provoking Comments
In finance, trust is not a feature. It’s actually the business model. Institutions only absorb systems they trust, and legitimacy—not just capability—is the scarce attribute.
Frames the entire discussion around legitimacy and trust rather than pure technical performance, shifting focus from model breakthroughs to institutional adoption.
Set the agenda for the panel, prompting regulators and practitioners to discuss how to build trustworthy AI infrastructure and influencing subsequent questions about governance, accountability, and inclusion.
Speaker: John Tass-Parker
Our approach is to enable responsible innovation rather than restrain it, using a technology‑neutral, principle‑based framework that nudges institutions toward innovation.
Highlights a proactive regulatory stance that balances risk mitigation with encouragement of AI experimentation, introducing the ‘innovation versus restraint’ principle.
Created a turning point where the conversation moved from abstract concerns to concrete regulatory philosophy, leading other panelists to reference the principle‑based approach and discuss its practical implications.
Speaker: Suvendu K. Pati
The principle‑based, technology‑neutral regulatory approach has allowed banks to experiment proportionately with risk, and finance’s AI governance offers lessons that can be exported to other sectors.
Reinforces the regulator’s stance and expands it by suggesting that the financial sector’s governance model can serve as a template for broader AI oversight.
Deepened the analysis of governance, prompting further discussion on how other industries might adopt similar frameworks and encouraging the regulator to elaborate on accountability mechanisms.
Speaker: Terah Lyons
AI can turn thin‑file borrowers into thick‑file ones, unlocking credit for the large un‑formalized segment of India’s economy.
Introduces a concrete, high‑impact use case—financial inclusion through alternative data—that ties AI capability directly to economic development.
Shifted the conversation toward social impact, leading participants to explore inclusion‑focused applications such as voice‑based banking and AI‑driven underwriting.
Speaker: Ashutosh Sharma
Agentic commerce – voice‑first, multilingual conversational interfaces – will unlock the 300‑400 million UPI users who currently don’t shop online, bridging the gap between digital payments and actual e‑commerce.
Presents a novel, large‑scale vision for AI‑driven commerce in India, moving beyond traditional fintech use cases to a new consumer interaction paradigm.
Opened a new topic on conversational AI, prompting discussion on language diversity, accessibility, and the need for AI‑enabled front‑ends for the mass market.
Speaker: Harshil Mathur
Regulated entities must treat AI systems as a ‘glass box’, giving customers clear notice of AI interaction and an opt‑out, shifting accountability from model developers to the deployers.
Proposes a concrete accountability shift that reframes responsibility, emphasizing transparency and consumer awareness.
Steered the dialogue toward practical governance measures, influencing later remarks about auditability, model drift monitoring, and the need for clear customer communication.
Speaker: Suvendu K. Pati
Even a 1 % hallucination rate is unacceptable for financial decisions; AI must not give wrong analysis, otherwise the liability risk is massive.
Raises a critical safety concern specific to large language models, highlighting the tension between innovation speed and reliability in a regulated sector.
Triggered a deeper examination of model risk, prompting the regulator to stress glass‑box transparency and prompting discussion on technical safeguards and model validation.
Speaker: Harshil Mathur
We are building an AI sandbox that provides affordable compute and data to smaller fintechs, democratizing AI access beyond large banks.
Introduces a concrete infrastructure initiative aimed at leveling the playing field, addressing a key barrier for innovation among smaller players.
Shifted the conversation from policy to implementation, leading participants to discuss practical support mechanisms, partnerships, and the role of self‑regulatory bodies.
Speaker: Suvendu K. Pati
Biometric payments powered by AI will replace OTPs, and AI‑led collection agents will handle 60‑70 % of early‑stage loan collections, improving both security and customer experience.
Provides tangible product‑level examples of AI improving both security (biometrics) and operational efficiency (AI agents), reinforcing the inclusion narrative.
Reinforced the earlier inclusion theme with specific use cases, prompting agreement from other panelists and highlighting immediate, deployable AI solutions.
Speaker: Ashutosh Sharma
AI can bridge the digital divide through language and assistive technologies, bringing financial services to un‑educated, disabled, or linguistically diverse populations.
Synthesizes earlier points into a forward‑looking vision that ties together inclusion, language diversity, and assistive tech.
Provided a unifying conclusion that tied together multiple strands of the discussion, reinforcing the overarching goal of AI‑driven financial inclusion.
Speaker: Suvendu K. Pati (closing wish list)
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the focus from abstract AI hype to concrete, trust‑centric governance and inclusive applications. John Tass‑Parker’s framing of legitimacy set the thematic foundation, which the RBI regulator reinforced with a principle‑based, innovation‑friendly stance. Terah Lyons amplified this by positioning finance’s governance model as a template for other sectors. Subsequent insights from Ashutosh Sharma and Harshil Mathur introduced high‑impact use cases—thin‑file credit underwriting and agentic, voice‑first commerce—that anchored the conversation in real‑world inclusion challenges. The regulator’s ‘glass‑box’ accountability proposal and the AI sandbox initiative provided actionable pathways for responsible deployment, while Harshil’s emphasis on hallucination risk reminded the panel of safety limits. Collectively, these comments redirected the dialogue toward practical governance, infrastructure support, and societal impact, shaping a cohesive narrative that AI’s next frontier in finance is not raw capability but trusted, inclusive, and democratized adoption.

Follow-up Questions
What are the detailed contents of the seven RBI AI principles (sutras) and how are they being operationalized across sectors?
Understanding these principles is essential for aligning industry practices with regulatory expectations and ensuring consistent implementation.
Speaker: Suvendu K. Pati
How will the RBI operationalize an AI sandbox that provides data and compute resources to smaller fintechs?
A functional AI sandbox could democratize access to AI tools, fostering innovation among smaller players that lack infrastructure.
Speaker: Suvendu K. Pati
What toolkits, benchmarking services, or standards should self‑regulatory organizations develop to help fintechs assess bias, transparency, and compliance of AI models?
Standardized assessment tools would enable fintechs to self‑evaluate AI systems and meet regulatory expectations without bespoke audits.
Speaker: Suvendu K. Pati
How can the industry mitigate hallucinations and ensure reliable outputs from large language models (LLMs) used in financial contexts?
Hallucinations pose significant liability risks; research into guardrails and validation methods is needed to make LLMs safe for finance.
Speaker: Harshil Mathur, Suvendu K. Pati
What strategies can satisfy India’s data‑residency requirements while still leveraging cutting‑edge AI models that are currently hosted abroad?
Regulatory data‑locality rules clash with the availability of advanced models; solutions are required to bridge this gap.
Speaker: Harshil Mathur
How can AI be used to improve underwriting for thin‑file customers by leveraging unstructured data?
Enhancing credit access for underserved populations hinges on effective AI‑driven data enrichment and risk assessment.
Speaker: Ashutosh Sharma
What design approaches enable voice‑first, multilingual, conversational banking for users with low digital literacy or limited education?
Creating inclusive banking experiences for billions of Indians requires AI that understands regional languages and natural conversation.
Speaker: Ashutosh Sharma, Suvendu K. Pati, Harshil Mathur
What is the potential impact of agentic commerce on expanding the user base beyond the current 10‑15 million active online shoppers in India?
Quantifying conversion and adoption rates will validate the business case for conversational commerce at scale.
Speaker: Harshil Mathur
What best‑practice frameworks should be adopted for human‑in‑the‑loop governance in fintech AI deployments?
Balancing automation with human oversight is critical for accountability and risk mitigation.
Speaker: Ashutosh Sharma
How effectively does AI reduce fraud and mis‑selling over time, and what metrics should be used to measure this impact?
Claims of reduced fraud need empirical evidence; systematic measurement will guide future investments.
Speaker: Harshil Mathur, Terah Lyons
What role can self‑regulatory organizations play in creating industry‑wide AI governance standards and certification processes?
SROs could fill gaps between regulator guidance and practical implementation, fostering consistent compliance.
Speaker: Suvendu K. Pati
How can India develop deeper, multi‑cyclical customer data comparable to Western markets to enhance AI‑driven underwriting?
Data depth is a limiting factor for sophisticated AI models; research into data collection and sharing frameworks is needed.
Speaker: Ashutosh Sharma
Which public‑private partnership models are most effective for accelerating AI adoption in the financial sector?
Collaborative frameworks could streamline regulatory approvals, share resources, and speed up innovation.
Speaker: Harshil Mathur
What are the technical and regulatory pathways to deliver personalized, AI‑driven financial advisory services to every individual (financial advisor in every pocket)?
Scaling personalized advice requires robust AI models, trust mechanisms, and clear governance to ensure fairness and accuracy.
Speaker: Terah Lyons
How can AI lower the cost of servicing and enable true N‑of‑1 personalization for rural and underserved customers?
Reducing service costs while delivering individualized experiences could transform financial inclusion in remote areas.
Speaker: Harshil Mathur
How can AI assistive technologies be developed to provide accessible financial services for disabled persons (e.g., visual or hearing impairments)?
Ensuring accessibility aligns with inclusion goals and expands the market for financial products.
Speaker: Suvendu K. Pati

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.