Keynote Address_Revanth Reddy_Chief Minister Telangana

Session at a glance: summary, key points, and speakers overview

Summary

The opening remarks welcomed participants to the India AI Impact Summit 2026 and introduced a keynote on technology-led governance and AI-driven growth [1][3]. Chief Minister A. Revanth Reddy began by thanking the audience and noting the gathering of leading global minds at the summit [5]. He traced human progress from fire, the wheel and agriculture through democracy and electricity, concluding that artificial intelligence now represents the most transformative invention [7-9][13-15]. Reddy emphasized that AI possesses not only cognitive abilities but also agency, enabling autonomous decision-making when combined with robotics [18-22]. He warned that an AI race is already under way, with a few countries and companies taking the lead, while India missed earlier industrial and manufacturing revolutions and has so far contributed only services rather than global products [23-25][30-31]. To compete, India must both use and produce AI technologies across all layers, including chips, green energy, data storage, platforms, applications and services, and develop a roadmap targeting leadership in the top three layers [32-35][36]. Reddy proposed establishing a national AI war room linking centre and states, with Hyderabad as a possible hub, and creating a world-class AI university focused on original research [37-40]. He called for domestic production of GPU chips and full participation in the supply chain, including securing rare minerals, to reduce dependence on foreign technology [42-44]. Recognising potential job displacement, Reddy urged the creation of a system to estimate AI-induced job losses and a massive reskilling programme for affected workers [45-49]. He also advocated an AI fund for startups and a dedicated AI start-up village in Telangana to nurture future unicorns [50-51]. To sustain momentum, Reddy suggested holding AI summits every six months in rotating cities such as Hyderabad [52-54].
He requested the formation of a national AI council and an AI ministry at both centre and state levels to draft regulations, prevent misuse and harness AI for social justice, inclusion and poverty eradication [55-57]. The session closed with the organizers thanking the chief minister, praising Telangana’s initiatives, and inviting further collaboration, accompanied by a round of applause [62-66][67].


Key points

India must transition from a consumer of foreign AI products to a global producer across the entire AI stack. The speaker notes that India “missed the industrial…manufacturing revolution” and that Indians “use” but do not “own” the major AI-driven services, urging the country to “become a leader in all layers of AI…chips, green energy, data storage, platforms, applications and services” and to create a roadmap for top-tier leadership [25-32][33-36].


Establish dedicated institutional mechanisms to steer AI development and governance. Proposals include an “AI war room” at the centre and states, a world-class AI university, a national AI council and an AI ministry to draft laws against misuse, as well as a dedicated AI fund for startups [37-40][55-57].


Invest heavily in workforce reskilling and ecosystem support to mitigate AI-driven job displacement. The speaker calls for a system to “estimate job losses because of AI,” massive investment in “reskilling of people who lose their jobs,” an AI fund for startups, and a state-level “AI start-up village” to nurture future unicorns [45-49][50-51].


Create a regular, region-focused AI summit circuit to foster collaboration and showcase Telangana’s initiatives. Recommendations include holding AI summits every six months in different cities (e.g., Hyderabad), establishing an “AI start-up village” and inviting global institutions to work in Telangana, thereby positioning the state as a hub for AI partnership [52-55][58-60].


Overall purpose:


The discussion is a strategic call-to-action urging the Indian government and the state of Telangana to accelerate AI leadership through comprehensive policy frameworks, infrastructure investment, talent development, and regular collaborative events, thereby positioning India as a dominant player in the global AI ecosystem.


Overall tone:


The tone is consistently upbeat and visionary, beginning with a formal welcome, moving into an enthusiastic and persuasive articulation of challenges and opportunities, and culminating in supportive, celebratory applause. While the speech maintains optimism throughout, it grows increasingly urgent as concrete policy and investment measures are proposed.


Speakers

A. Revanth Reddy


– Role/Title: Chief Minister of Telangana [S1]


– Area of Expertise:


Speaker 1


– Role/Title: Event moderator/host (introduces speakers) [S2]


– Area of Expertise:


Additional speakers:


Shri A. Nivant Reddy


– Role/Title: Honourable Chief Minister of Telangana (as referenced in the closing remarks of the transcript; most likely a mis-transcription of A. Revanth Reddy, the keynote speaker listed above, rather than a separate participant)


– Area of Expertise:


Full session report: comprehensive analysis and detailed insights

The ceremony opened with Speaker 1 welcoming delegates on behalf of the India AI Impact Summit 2026 and formally inviting the chief guest to deliver the keynote on “technology-led governance and harnessing the power of AI in the state’s growth” [1][2][3].


Chief Minister A. Revanth Reddy began by greeting the audience, acknowledging the presence of leading global minds, and congratulating Prime Minister Narendra Modi and Minister Ashwini Vaishnavi for their support of the event [5][6].


Reddy then traced the arc of human progress, recalling how fire, the wheel and agriculture reshaped societies [7], how ideas such as democracy, rule of law and universal voting rights altered civic life [8], and how technologies like electricity, the aeroplane, vaccines and the internet transformed daily existence [9]. He declared artificial intelligence the most consequential invention, observing that AI has made a GPU chip “more intelligent than humans”, can compose poetry, generate reports, produce films and presentations, and “knows almost everything” [13-15]. He highlighted the growing perception on social media that humans are no longer the most intelligent beings [16-17] and stressed that AI possesses agency – the capacity to decide autonomously – especially when coupled with robotics [18-22].


Reddy warned that an international AI race is already under way, with a few countries, companies and individuals taking the lead [23-24]. He candidly stated that India “missed the industrial revolution and the manufacturing revolution” and that, although the nation has excelled in the services sector, particularly software and telecom, it has largely consumed rather than created global AI-driven products such as Google Search, Google Maps, Twitter, Facebook, YouTube and WhatsApp [25-31]. He argued that a country can either “use” a technology or “produce” it; with AI India must do both [32-33].


To move from a consumer to a producer role, Reddy outlined a series of concrete actions:


* Lead the entire AI stack (chips, green energy, data storage, platforms, applications and services) and devise a roadmap targeting leadership in the top three layers [34-36].


* Create a national “AI war-room” linking the centre and the states to monitor rapid AI developments; Hyderabad was suggested as a possible hub [37-39].


* Establish a world-class AI university with top-tier facilities and a focus on original research [40].


* Manufacture domestic GPU chips and participate fully in the AI supply chain, including securing rare minerals [42-44].


* Set up a system to estimate AI-induced job losses and invest massively in reskilling displaced workers [45-49].


* Launch an AI fund to support start-ups and, with central support, create an “AI start-up village” in Telangana [50-51].


* Hold AI summits every six months in rotating Indian cities, with Hyderabad as a prime candidate [52-55].


* Request the Prime Minister to establish a national AI council (modelled on the GST Council or NITI Aayog) and an AI ministry at both central and state levels to draft legislation that prevents misuse of AI, especially against national security, while promoting AI for social justice, inclusion and poverty eradication [55-57].


Reddy concluded by inviting participants to Telangana for discussions and partnerships, welcoming global and national institutions to collaborate on AI projects, and ending with the slogans “Jai Bharat. Jai Telangana.” [58-60][69].


Speaker 1 thanked the chief minister, highlighted the inspiration drawn from Telangana’s AI initiatives, invited attendees to some of the more interesting sessions, and led a round of applause for Shri A. Revanth Reddy [62-66][67].


Session transcript: complete transcript of the session
Speaker 1

So I’d like to welcome you on behalf of the India mission and the India AI Impact Summit 2026. Your leadership is exemplary and we’ve been honored to have you here. So I would like to invite you to the dais to deliver a keynote session on technology-led governance and harnessing the power of AI in the state’s growth. Thank you.

A. Revanth Reddy

Good afternoon, friends. It is my pleasure to address this event, because some of the best minds from all over the world have come together at the Artificial Intelligence Summit in India. I congratulate the Government of India, Honourable Prime Minister Narendra Modi and Ashwini Vaishnavi, Minister for Electronics and IT, for making this event possible. Across human history, great ideas, discoveries and inventions have changed our lives. Discoveries of fire, the wheel and agriculture changed our lives. Ideas like democracy, rule of law, universal voting rights and reservations changed our lives. Technologies like electricity, the aeroplane, vaccines and the internet changed our lives. In the past, inventions added to human physical strength and innovation. After the industrial revolution, our bodies never matched machines. We cannot fly like a plane, swim like a ship, or run at the speed of a motorcycle or car.

Today, we are witnessing the rise of our greatest invention, that is AI. Artificial intelligence has made a GPU chip more intelligent than humans. It can write poetry and reports, make films and presentations, and it knows almost everything. These days, people say on social media that humans are not the most intelligent anymore. AI is more intelligent. AI also has agency, the power to decide. An aeroplane can fly only if we tell it. A car will move or stop only if we tell it. AI can give orders to itself. Combine AI and robotics, and machines have both physical and mental capabilities. This context is important as we set an agenda for the future, and the AI race has already begun.

We see the leadership of a few countries, companies and people. India missed the industrial revolution and the manufacturing revolution. We played a role in the services revolution, especially software and telecom. But even in software, we created services but not global products: Google Search, Google Maps, Twitter, Facebook, YouTube and WhatsApp. We Indians use them. We Indians worked in these companies, but we don’t own them. We did not create them. There are two ways any country can influence a global trend: use or produce. With AI, we have to both produce and use. India must become a leader in all layers of AI: chips, green energy, data storage, platforms, applications and services. We must create a roadmap to ensure real leadership in the top three layers.

Secondly, India must create a war room with centre and states to monitor and respond to AI developments. Thank you. An AI war room for India is crucial, because development in AI can be very quick. Hyderabad can build an AI war room for India with the support of the Government of India. Thirdly, we need to establish an AI university of global standards with top facilities focusing on original research. We have seen many controversies in this event. Fourthly, to lead an AI revolution, we have to manufacture GPU chips. We have to become part of the entire supply chain. We must get rare minerals. Fifth, we have to put in place a system to estimate job losses because of AI. India cannot delay this anymore.

We have to invest in the future. We have to invest massively in reskilling of people who lose their jobs. India needs an AI fund for start-ups so our youth can work on all areas of AI and aim to become unicorns. Telangana can establish an AI start-up village for the entire country with the support of the Government of India. We need more AI summits. Not once a year, but every six months. Different cities can host them, like Hyderabad. I request Honourable Prime Minister Shri Narendra Modi ji to establish a national AI council, like the GST Council or NITI Aayog. We need an AI ministry both at centre and state level

to help make laws to prevent misuse of AI, especially against national security and interests. We need to use AI strongly for achievements of social justice, inclusion and removal of poverty. Finally, I invite you to Telangana for discussions, for partnerships. I welcome global and national institutions to work in my state in AI. Thank you. Jai Bharat. Jai Telangana.

Speaker 1

Thank you, sir. Thank you very much. On behalf of the organizers, I would like to invite you to some of our more interesting sessions. Thank you for the insightful speech. And we are all inspired by the work which is being done in Telangana under your leadership. Thank you very much. Please, audience please, a big round of applause for Shri A. Nivant Reddy, the Honourable Chief Minister of Telangana.

Related Resources: knowledge base sources related to the discussion topics (14)
Factual Notes: claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“Chief Minister A. Revanth Reddy congratulated Prime Minister Narendra Modi and Minister Ashwini Vaishnavi for their support of the event.”

The knowledge base lists Prime Minister Narendra Modi delivering the welcome address [S54] and Minister Ashwini Vaishnav as a key ministerial participant in the summit [S50] and [S51], confirming their involvement.

Additional Context (medium confidence)

“Reddy traced the arc of human progress, recalling how fire, the wheel and agriculture reshaped societies.”

A background on technology-driven societal change, including the wheel and agricultural tools, is documented in the knowledge base [S58], providing contextual support for this historical framing.

Additional Context (medium confidence)

“AI possesses agency – the capacity to decide autonomously – especially when coupled with robotics.”

The knowledge base discusses AI agents that can act on behalf of humans and the need for accountability mechanisms, highlighting the notion of agency in AI systems [S64].

Confirmed (high confidence)

“An international AI race is already under way, with a few countries, companies and individuals taking the lead.”

The AI race narrative, describing competition among leading nations such as the United States and China, is covered in the knowledge base [S68].

Confirmed (high confidence)

“Create a national “AI war‑room” linking the centre and the states to monitor rapid AI developments; Hyderabad was suggested as a possible hub.”

The knowledge base explicitly calls for an AI war-room involving centre and states and mentions Hyderabad as a suitable location for it [S12].

External Sources (70)
S1
Keynote Address_Revanth Reddy_Chief Minister Telangana — No disagreements identified in the transcript These key comments transformed what could have been a routine political s…
S2
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S3
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S5
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Artificial Intelligence (AI) has emerged as a revolutionary technology with the potential to transform various aspects o…
S6
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Examples include children with disabilities being provided with non-inclusive educational materials, political participa…
S7
AI in justice: Bridging the global access gap or deepening inequalities — At least5 billion people worldwide lackaccess to justice, a human right enshrined in international law. In many regions,…
S8
Lightning Talk #245 Advancing Equality and Inclusion in AI — Bjorn Berge: Thank you very much, Sara, and very good afternoon to all of you. Let me first start by congratulating Norw…
S9
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Actually, clearly in the first group, and the reason for that is there are five layers in the AI architecture. The appli…
S10
AI 2.0 Reimagining Indian education system — It cannot give you emotion into that, personalized flavor to that. So research ethics, when you are doing any research f…
S11
Fixing Healthcare, Digitally — In conclusion, Revanth Reddy Anumula’s vision and actions underscore his dedication to leveraging technology to improve …
S12
https://dig.watch/event/india-ai-impact-summit-2026/keynote-address_revanth-reddy_chief-minister-telangana — Good afternoon, friends. My pleasure to address this event because of some of the best of minds from all over the world …
S13
UK security minister raises alarm on potential misuse of AI technology — Tom Tugendhat, the UK’s Minister of State for Security,has warned of the dangers posed by the malicious use of AI techno…
S14
Artificial intelligence (AI) and cyber diplomacy — This indicates the potential exploitation of regions with weaker data protection laws and raises significant ethical que…
S15
Rethinking AI regulation: Are new laws really necessary? — Specialised AI regulation may not be necessary, as existing laws already cover many aspects of AI-related concerns. Jova…
S16
Leveraging the UN system to advance global AI Governance efforts — Gilbert Houngbo from the International Labour Organization (ILO) discussed the impact of AI on jobs, acknowledging both …
S17
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment elevated the discussion from technical implementation to geopolitical strategy. It influenced the final que…
S18
AI/Gen AI for the Global Goals — Priscilla Boa-Gue argues for the creation of supportive policy environments to foster AI startups. This includes develop…
S19
AI Governance Dialogue: Steering the future of AI — The discussion aims to advocate for comprehensive, inclusive AI governance that ensures the benefits of AI are shared gl…
S20
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Legal and regulatory | Development Risks and Future Challenges
S21
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — Institutional Governance & Policy Frameworks
S22
Why science metters in global AI governance — “But if your potential or probable outcome is the end of jobs, then you need to think about universal basicism.”[113]. “…
S23
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Economic | Development | Sociocultural The argument emphasizes that the primary threat to employment is not AI replacin…
S24
Policymaker’s Guide to International AI Safety Coordination — Establishing regular six-month global convenings to maintain faster tempo for safety discussions than traditional policy…
S25
Open Forum #30 High Level Review of AI Governance Including the Discussion — ## Introduction and Context Lucia Russo: Thank you, Yoichi. Good morning and thank you my fellow panelists for this int…
S26
Open Forum: A Primer on AI — Another concern is the potential impact of AI on the job market. As AI capabilities advance, certain professions may bec…
S27
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — You’ll be very pleased to hear. We have a lot to talk about. I’m going to very briefly introduce the very distinguished …
S28
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S29
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S30
The Role of Government and Innovators in Citizen-Centric AI — The discussion maintained an optimistic and collaborative tone throughout, with speakers expressing enthusiasm about AI’…
S31
Building Indias Digital and Industrial Future with AI — The discussion maintained a collaborative and forward-looking tone throughout, with industry experts, regulators, and po…
S32
Telangana launches Aikam to scale AI deployment — The Telangana government haslaunchedAikam, a new autonomous body aimed at positioning the state as a global proving grou…
S33
GermanAsian AI Partnerships Driving Talent Innovation the Future — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers demonstrated mutual resp…
S34
From Technical Safety to Societal Impact Rethinking AI Governanc — You know, their language is not represented in Gemini or anything, right? And I know everybody wants to impose Hindi on …
S35
Keynote Address_Revanth Reddy_Chief Minister Telangana — Artificial intelligence | Capacity development Recognition of the insightful speech and work being done in Telangana un…
S36
Keynote Address_Revanth Reddy_Chief Minister Telangana — “India missed the industrial revolution and the manufacturing revolution.”[20]”But even in software, we created services…
S37
The Global Power Shift India’s Rise in AI & Semiconductors — India should transition from being a fast follower to a global leader by moving from compute to capability, focusing on …
S38
Indias Roadmap to an AGI-Enabled Future — This opening statement reframes the entire AI development paradigm from a sovereignty perspective, challenging the commo…
S39
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — This comment elevated the discussion from technical implementation to geopolitical strategy. It influenced the final que…
S40
Designing Indias Digital Future AI at the Core 6G at the Edge — India is positioning itself to move from being a consumer of global technology to a shaper of next-generation intelligen…
S41
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S42
What is it about AI that we need to regulate? — UN-Led Initiatives:The United Nations is establishing multiple mechanisms. In theOpening Ceremony, Antonio Guterres anno…
S43
Models and functions for international governance of advanced AI — Thewhite paper, endorsed by Google’s DeepMind, addresses the need for international governance structures to manage the …
S44
Leveraging the UN system to advance global AI Governance efforts — Tshilidzi Marwala:Thanks very much, Doreen. Turning to the United Nations University, Mr. Chiditsi Marwada, so how can t…
S45
How to make AI governance fit for purpose? — Invest in talent development, training, and reskilling programs to address job displacement concerns
S46
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Economic | Development | Sociocultural The argument emphasizes that the primary threat to employment is not AI replacin…
S47
Why science metters in global AI governance — “But if your potential or probable outcome is the end of jobs, then you need to think about universal basicism.”[113]. “…
S48
Fixing Healthcare, Digitally — Anumula argues that affordable and high-quality healthcare is essential for the development and progress of any society….
S49
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — “We have active partnerships going on in the field of water conservation, defense, agriculture, and so on, smart cities …
S50
Keynote Adresses at India AI Impact Summit 2026 — -Ashwini Vaishnav- Minister (India) -Participant- Event moderator/host -S. Krishnan- Secretary (India) -Sergio Gore- …
S51
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Minister Vaishnav, Excellencies, ladies and gentlemen, let me begin by giving our thanks and expressing our sincere appr…
S52
WAIGF Opening Ceremony & Keynote — WAIGF Opening Ceremony & Keynote
S53
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you….
S54
Welcome Address — Prime Minister Narendra Modi
S56
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-democracy_-reimagining-governance-in-the-age-of-intelligence — Nor do we hold identical views on democratic institutions. Thank you. We face a choice, either we step back or allow the…
S57
https://dig.watch/event/india-ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — Thanks, Suresh. I think very valid and meaningful points. I’ll come to Ranjit. Now, we’re talking about AI. There are a …
S58
7th edition — Technology has been the main driver of societal changes throughout history (the wheel, agricultural tools, the printing …
S59
DiploFoundation — Enablers – ways of doing things and organising ourselves – are among the major causes of change. Someone learned to ride…
S60
Strategy outline — Over the course of history, knowledge has always been essential for the growth of civilizations. Knowledg…
S61
Living in an Unruly World: The Challenges We Face — To many this vision is frightening – probably rightly so. Elon Musk considers artificial intelligence as more dangerous …
S62
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — And that is why a student who studies in the political fields of spiritual, religion and culture can use AI technology w…
S63
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Piyush Nangru articulated the transformation in educational terms, stating that “coding is no longer a skill. It’s table…
S64
The Future of the Internet: Navigating the Transition to an Agentic Web — Prakash emphasizes that as AI agents scale intelligence and take actions on behalf of humans, there must be accountabili…
S65
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S66
Main Session on Artificial Intelligence | IGF 2023 — Recognizing these differences can lead to more informed and effective decision-making. Another noteworthy observation is…
S67
Military AI: Operational dangers and the regulatory void — While international forums are yet to find consensus on key issues, many states are straying further from regulation to …
S68
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as anAI racewith a single winner. Officials argue A…
S69
Interim Report: — 27. Other risks are more a product of humans than AI. Deep fakes and hostile information campaigns are merely the l ates…
S70
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — Amir (participant from Iran) This comment introduces a critical geopolitical dimension often overlooked in technical di…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A. Revanth Reddy
11 arguments · 95 words per minute · 697 words · 437 seconds
Argument 1
AI as a transformative invention surpassing past breakthroughs (A. Revanth Reddy)
EXPLANATION
The speaker positions artificial intelligence as humanity’s greatest invention, arguing that it exceeds earlier breakthroughs such as fire, the wheel, electricity, and the internet. He suggests that AI now outperforms human intelligence in many tasks, marking a new era of capability.
EVIDENCE
He references historic inventions that changed lives—discoveries of fire, wheels, agriculture, democracy, electricity, airplanes, vaccines, and the internet—to set a baseline of past breakthroughs (sentences 7-10). He then declares AI as the current greatest invention, noting its ability to write poetry, generate reports, create films, and possess extensive knowledge, and cites social media commentary that humans are no longer the most intelligent (sentences 13-18).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Generative AI is described as a revolutionary technology comparable to historic breakthroughs, supporting the claim of AI’s transformative impact [S5].
MAJOR DISCUSSION POINT
AI as a transformative invention
AGREED WITH
Speaker 1
Argument 2
AI as a tool for social justice, inclusion, and poverty alleviation (A. Revanth Reddy)
EXPLANATION
The speaker calls for AI to be deliberately applied to advance social justice, broaden inclusion, and eradicate poverty in India. He frames AI not just as a technological advance but as a means to achieve equitable development.
EVIDENCE
He explicitly states the need to use AI strongly for achievements of social justice, inclusion and removal of poverty (sentence 57).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies on AI improving digital accessibility for persons with disabilities, bridging access to justice, and advancing equality and inclusion provide context for AI’s role in social justice [S6][S7][S8].
MAJOR DISCUSSION POINT
AI for social justice and poverty reduction
Argument 3
Develop all AI layers: chips, green energy, data storage, platforms, applications, services (A. Revanth Reddy)
EXPLANATION
The speaker argues that India must become a leader across every layer of the AI value chain, from hardware to software services. He emphasizes a comprehensive roadmap to secure leadership in the top three layers.
EVIDENCE
He outlines the need for leadership in all AI layers—chips, green energy, data storage, platforms, applications and services—and calls for a roadmap to ensure real leadership in the top three layers (sentences 34-36).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A comprehensive AI architecture comprising chip, infra, energy, model and application layers is outlined, confirming the need to develop all layers [S9]; the keynote also stresses domestic GPU chip production [S1].
MAJOR DISCUSSION POINT
Comprehensive AI ecosystem development
Argument 4
Establish a world‑class AI university focused on original research (A. Revanth Reddy)
EXPLANATION
The speaker proposes creating a globally competitive AI university equipped with top‑tier facilities to conduct original research. This institution would serve as a hub for cultivating advanced AI talent in India.
EVIDENCE
He calls for establishing an AI university of global standards with top facilities focusing on original research (sentence 40).
MAJOR DISCUSSION POINT
World‑class AI university
Argument 5
Manufacture GPU chips and secure rare minerals to complete the supply chain (A. Revanth Reddy)
EXPLANATION
The speaker stresses that leadership in AI requires domestic production of GPU chips and control over critical raw materials. He links this to becoming part of the entire AI supply chain.
EVIDENCE
He states that to lead an AI revolution India must manufacture GPU chips, become part of the whole supply chain, and obtain rare minerals (sentences 42-45).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote explicitly calls for domestic GPU chip manufacturing and securing rare mineral supply chains [S1]; the AI layer analysis highlights the chip layer as essential [S9].
MAJOR DISCUSSION POINT
Domestic AI hardware and mineral security
Argument 6
Create a national AI war room linking centre and states for rapid response (A. Revanth Reddy)
EXPLANATION
The speaker recommends establishing a coordinated AI war room that brings together central and state authorities to monitor and swiftly react to AI developments. He suggests Hyderabad could host this facility with governmental support.
EVIDENCE
He proposes an AI war room with centre and states to monitor and respond to AI developments, and notes that Hyderabad could build it with the support of the Government of India (sentences 37-39).
MAJOR DISCUSSION POINT
National AI war room
Argument 7
Form a national AI council and an AI ministry at both centre and state levels (A. Revanth Reddy)
EXPLANATION
The speaker urges the Prime Minister to set up a dedicated AI council, similar to existing GST or NITI Aayog structures, and to create an AI ministry at both central and state levels to steer policy and coordination.
EVIDENCE
He requests the establishment of a national AI council and an AI ministry at centre and state levels to help make laws preventing AI misuse (sentences 55-56).
MAJOR DISCUSSION POINT
AI governance institutions
Argument 8
Enact laws to prevent AI misuse and protect national security (A. Revanth Reddy)
EXPLANATION
The speaker calls for legislation that safeguards against the malicious use of AI, particularly where national security and strategic interests are concerned. This legal framework would be part of the broader AI ministry’s remit.
EVIDENCE
He emphasizes the need for laws to prevent misuse of AI, especially against national security and interests (sentence 56).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Security officials warn about weaponisation of AI and stress the need for legal safeguards, while discussions on AI regulation and cyber-diplomacy underline the importance of new laws [S13][S14][S15].
MAJOR DISCUSSION POINT
Legal safeguards for AI
Argument 9
Set up a system to estimate AI‑induced job losses and invest heavily in reskilling (A. Revanth Reddy)
EXPLANATION
The speaker proposes creating a mechanism to quantify employment impacts of AI and to allocate substantial resources for retraining displaced workers. This reflects a proactive approach to the future of work.
EVIDENCE
He mentions the need for a system to estimate job losses because of AI and calls for massive investment in reskilling of people who lose their jobs (sentences 45-49).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
International Labour Organization analysis highlights AI’s impact on employment and the necessity of skilling, reskilling and lifelong learning programs [S16].
MAJOR DISCUSSION POINT
Job impact assessment and reskilling
Argument 10
Launch an AI fund and a start‑up village in Telangana to foster AI entrepreneurship (A. Revanth Reddy)
EXPLANATION
The speaker advocates for a dedicated AI fund to support startups nationwide and proposes an AI startup village in Telangana as a hub for innovation. This aims to nurture homegrown AI enterprises and unicorns.
EVIDENCE
He calls for an AI fund for startups so youth can work on all AI areas and suggests Telangana can establish an AI start‑up village for the entire country with government support (sentences 50-51).
MAJOR DISCUSSION POINT
AI funding and startup ecosystem
Argument 11
Organise AI summits biannually across different Indian cities to encourage collaboration (A. Revanth Reddy)
EXPLANATION
The speaker recommends holding AI summits every six months in rotating cities to promote knowledge sharing and partnerships across India. This regular cadence would sustain momentum in AI development.
EVIDENCE
He states the need for more AI summits, not once a year but every six months, and suggests different cities like Hyderabad can host them (sentences 52-55).
MAJOR DISCUSSION POINT
Regular AI summits for collaboration
Speaker 1
1 argument · 92 words per minute · 136 words · 87 seconds
Argument 1
Acknowledge Telangana’s AI initiatives and invite the audience to further sessions (Speaker 1)
EXPLANATION
The host thanks the speaker, praises the AI work being done in Telangana, and invites the audience to attend additional sessions. This serves to highlight regional achievements and maintain engagement.
EVIDENCE
The host expresses gratitude, notes inspiration from the work in Telangana under the speaker’s leadership, and calls for a round of applause, effectively acknowledging the state’s AI initiatives and inviting further participation (sentences 62-66).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote address recognises Telangana’s AI work and invites further engagement, providing direct context [S1].
MAJOR DISCUSSION POINT
Recognition of Telangana’s AI work and invitation to sessions
AGREED WITH
A. Revanth Reddy
Agreements
Agreement Points
Both speakers emphasize AI as a key driver for India’s and Telangana’s growth and development.
Speakers: Speaker 1, A. Revanth Reddy
AI as a transformative invention surpassing past breakthroughs (A. Revanth Reddy); Acknowledge Telangana’s AI initiatives and invite the audience to further sessions (Speaker 1)
Speaker 1 invites the keynote on “technology-led governance and harnessing the power of AI in the state’s growth” [3] and later praises the work being done in Telangana under the chief minister’s leadership [65]; Reddy describes AI as humanity’s greatest invention that will shape future governance and calls for a comprehensive AI ecosystem in India [13-18][34-36]. Both therefore present AI as central to economic and social progress.
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus reflects India’s national AI strategy and the broader view of AI as a catalyst for economic and industrial growth articulated in recent high-level assessments of India’s AI rise and digital future [S28][S29][S31].
Both speakers recognize and commend the specific AI initiatives being pursued in Telangana.
Speakers: Speaker 1, A. Revanth Reddy
Acknowledge Telangana’s AI initiatives and invite the audience to further sessions (Speaker 1); Create a national AI war room… Hyderabad can build an AI war room… (A. Revanth Reddy); Establish an AI university… (A. Revanth Reddy); Launch an AI fund and a start‑up village in Telangana… (A. Revanth Reddy); Organise AI summits biannually across different Indian cities… (A. Revanth Reddy)
Speaker 1 explicitly thanks the chief minister and says the audience is inspired by the work being done in Telangana [65][66]; Reddy lists concrete Telangana-based actions such as an AI war room in Hyderabad [37-39], an AI university [40], an AI fund and start-up village in the state [50-51], and regular AI summits hosted in Hyderabad [52-55]. Both therefore affirm Telangana’s leading role in India’s AI agenda.
POLICY CONTEXT (KNOWLEDGE BASE)
The commendation aligns with Telangana’s launch of the autonomous Aikam body to scale AI deployment and the Chief Minister’s emphasis on AI capacity development, both highlighted in official announcements and keynote remarks [S32][S35].
Similar Viewpoints
Both view AI as a strategic technology that can reshape governance and drive growth, with the host highlighting the need for AI‑led governance [3] and Reddy positioning AI as the greatest invention that will underpin future development [13-18].
Speakers: Speaker 1, A. Revanth Reddy
AI as a transformative invention surpassing past breakthroughs (A. Revanth Reddy); Acknowledge Telangana’s AI initiatives and invite the audience to further sessions (Speaker 1)
Both endorse the idea that Telangana should host national‑level AI infrastructure (war room, university, startup village) and that this will serve as a model for the country, as reflected in the host’s praise of Telangana’s work [65] and Reddy’s specific proposals for Hyderabad‑based facilities [37-39][40][50-51].
Speakers: Speaker 1, A. Revanth Reddy
Create a national AI war room linking centre and states for rapid response (A. Revanth Reddy) Acknowledge Telangana’s AI initiatives and invite the audience to further sessions (Speaker 1)
Unexpected Consensus
Recognition of AI’s transformative potential by a brief opening host.
Speakers: Speaker 1, A. Revanth Reddy
AI as a transformative invention surpassing past breakthroughs (A. Revanth Reddy); Acknowledge Telangana’s AI initiatives and invite the audience to further sessions (Speaker 1)
It is unexpected that the host, whose remarks are limited to a welcome and applause, nonetheless aligns with Reddy’s sweeping claim that AI is humanity’s greatest invention and the engine of future governance [3][13-18]. This convergence suggests strong political endorsement of the narrative.
POLICY CONTEXT (KNOWLEDGE BASE)
The host’s framing mirrors the global narrative that AI is a transformative economic and geopolitical force, as discussed in analyses of AI’s impact on power dynamics and growth prospects [S27][S28].
Overall Assessment

The two speakers show a clear convergence on AI as a cornerstone for India’s socio‑economic progress and on Telangana’s role as a pilot hub for national AI initiatives.

High consensus on the strategic importance of AI and on Telangana’s leadership, indicating political will that could accelerate policy implementation and investment in the AI ecosystem.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The exchange shows virtually no substantive disagreement. The host merely acknowledges and applauds the keynote’s vision, while the keynote outlines an extensive policy agenda. Their positions converge on the desirability of advancing AI in India, especially in Telangana.

Minimal – the interaction is largely complementary, indicating strong alignment rather than conflict. This suggests that, at least in this forum, there is consensus on the strategic priority of AI, which may facilitate coordinated action across the identified policy areas.

Partial Agreements
Both speakers share the goal of promoting AI activity in Telangana and encouraging broader participation, though Speaker 1’s role is limited to acknowledgment and invitation, while Reddy proposes concrete institutional engagement [59][65-66].
Speakers: A. Revanth Reddy, Speaker 1
Both acknowledge the importance of AI for India’s development and specifically highlight Telangana’s initiatives. Speaker 1 praises the work being done in Telangana under the Chief Minister’s leadership and invites the audience to further sessions [65-66]. Reddy welcomes global and national institutions to work in his state on AI, signalling openness to collaboration [59].
Takeaways
Key takeaways
AI is positioned as a transformative invention that can drive national development, social justice, inclusion, and poverty alleviation.
India must build a full‑stack AI ecosystem covering chips, green energy, data storage, platforms, applications, and services.
Establishing world‑class research infrastructure, such as an AI university, is essential for original AI research.
Strategic governance mechanisms—including a national AI war room, AI council, and dedicated AI ministries at centre and state levels—are required to monitor, respond to, and regulate AI developments.
Economic safeguards are needed: a system to estimate AI‑induced job losses, massive investment in reskilling, and creation of an AI fund and start‑up village to foster entrepreneurship.
Regular biannual AI summits across Indian cities are proposed to promote collaboration and showcase progress.
Resolutions and action items
Proposal to create a national AI war room linking the centre and states for rapid response to AI developments.
Suggestion to establish a national AI council and an AI ministry at both central and state levels.
Call for the creation of a world‑class AI university with a focus on original research.
Recommendation to develop domestic GPU chip manufacturing capabilities and secure rare mineral supply chains.
Plan to set up a system for estimating AI‑related job losses and to invest heavily in reskilling programs.
Proposal to launch an AI fund for start‑ups and to create an AI start‑up village in Telangana.
Suggestion to hold AI summits every six months in different Indian cities.
Unresolved issues
Specific funding mechanisms and budget allocations for the AI war room, AI university, chip manufacturing, and reskilling initiatives.
Detailed roadmap, timelines, and responsible agencies for implementing the proposed AI council and ministries.
Procedures for securing rare minerals and establishing a complete AI hardware supply chain.
Methodology for accurately estimating AI‑induced job losses and measuring the effectiveness of reskilling programs.
Legal and regulatory frameworks needed to prevent AI misuse and protect national security.
Suggested compromises
None identified
Thought Provoking Comments
Artificial intelligence has made a GPU chip more intelligent than humans. It can write poetry and reports, make films and presentations, and it knows almost everything. AI also has agency, power to decide.
Highlights a paradigm shift where AI is not just a tool but an autonomous decision‑maker, challenging the conventional view of technology as purely supportive of human intent.
This statement reframed the conversation from AI as a passive technology to a potentially self‑directed entity, prompting the audience to consider regulatory and ethical safeguards. It set the stage for later proposals about AI governance and monitoring.
Speaker: A. Revanth Reddy
India missed the industrial revolution and the manufacturing revolution. We played a role in the services revolution, especially software and telecom, but we used global products like Google and Facebook rather than creating them.
Provides a candid self‑assessment of India’s historical technological positioning, challenging any complacent narrative about the country’s tech leadership.
This admission shifted the tone toward urgency and self‑improvement, leading directly to suggestions for building indigenous AI capabilities across the stack and influencing subsequent calls for a national AI strategy.
Speaker: A. Revanth Reddy
There are two ways any country can influence a global trend: use it or produce it. With AI, we have to both produce and use.
Offers a clear strategic framework that simplifies complex policy choices into a binary decision, prompting a more focused discussion on production capabilities.
The framework guided the subsequent enumeration of specific layers (chips, green energy, data storage, platforms, applications, services) where India must invest, steering the dialogue toward concrete sectoral priorities.
Speaker: A. Revanth Reddy
India must create a war room with centre and states to monitor and respond to AI developments.
Introduces an innovative governance mechanism—a dedicated AI war room—to address the rapid pace of AI evolution, a concept not previously mentioned.
This proposal opened a new line of thought about real‑time policy response and inter‑governmental coordination, influencing the audience’s perception of AI as a national security issue and justifying the call for an AI ministry.
Speaker: A. Revanth Reddy
We have to put a system to estimate job losses because of AI and invest massively in reskilling of people who lose their jobs.
Brings the socioeconomic consequences of AI to the forefront, moving the conversation beyond technology to human impact and workforce transformation.
The comment deepened the discussion by adding a social dimension, prompting acknowledgment of the need for a comprehensive AI fund, start‑up village, and education initiatives later in the speech.
Speaker: A. Revanth Reddy
I request Honourable Prime Minister Narendra Modi to establish a national AI council, like the GST Council or NITI Aayog, and an AI ministry at both centre and state level to make laws preventing misuse of AI.
Proposes concrete institutional structures modeled on successful Indian governance bodies, linking AI policy to existing frameworks and emphasizing legal safeguards.
This call to action crystallized the earlier strategic points into actionable policy steps, culminating the speech with a clear demand that the audience (including the organizers) could rally behind, as reflected in the applause and gratitude expressed by Speaker 1.
Speaker: A. Revanth Reddy
Overall Assessment

The keynote was driven by a series of strategically layered comments that moved the discussion from a broad celebration of AI’s potential to a focused roadmap for India’s AI future. Early remarks about AI’s agency set a tone of urgency, while the candid critique of India’s past tech role created a sense of necessity for change. The binary ‘use or produce’ framework and the proposal of an AI war room introduced concrete policy directions, which were reinforced by socioeconomic considerations such as job displacement and reskilling. The final appeal for a national AI council and ministry provided a tangible institutional endpoint. Together, these comments redirected the conversation toward actionable governance, economic, and social measures, prompting the audience’s supportive response and framing AI as both an opportunity and a national priority.

Follow-up Questions
How can an AI war room be established at the national level, involving both centre and states, and what should its operational framework be?
A war room is deemed crucial for rapid monitoring and response to fast‑evolving AI developments, requiring clear structure and coordination.
Speaker: A. Revanth Reddy
What roadmap should India follow to achieve leadership in the top three of the AI layers he identifies (chips, green energy, data storage, platforms, applications, services)?
A strategic plan is needed to guide investment and capability building across the AI value chain.
Speaker: A. Revanth Reddy
How can an AI university of global standards with top‑tier facilities and a focus on original research be created?
Such an institution would develop homegrown talent and research capacity essential for AI leadership.
Speaker: A. Revanth Reddy
What steps are required to develop domestic GPU chip manufacturing and secure the supply chain for rare minerals?
Self‑reliance in AI hardware depends on building chip production and accessing critical raw materials.
Speaker: A. Revanth Reddy
What methodology should be used to estimate AI‑induced job losses and design effective reskilling programs?
Accurate impact assessment is necessary to mitigate socioeconomic disruption and upskill the workforce.
Speaker: A. Revanth Reddy
How should an AI fund for start‑ups be structured to enable youth‑led ventures to become unicorns?
A dedicated fund would catalyse innovation and entrepreneurship across AI domains.
Speaker: A. Revanth Reddy
What model should be adopted for an AI start‑up village in Telangana, and how can government support be operationalised?
A concentrated ecosystem could accelerate startup growth and attract national participation.
Speaker: A. Revanth Reddy
What governance model and functions should a national AI council (similar to the GST Council or NITI Aayog) have?
A coordinated council is needed to steer AI policy, standards, and inter‑governmental collaboration.
Speaker: A. Revanth Reddy
What legal framework and ministry structure at centre and state level are required to prevent misuse of AI, especially concerning national security?
Robust regulations and dedicated ministries are essential to safeguard against harmful AI applications.
Speaker: A. Revanth Reddy
How can AI be leveraged effectively for social justice, inclusion, and poverty alleviation?
Targeted AI applications can address inequities and drive inclusive development.
Speaker: A. Revanth Reddy
What should be the schedule, frequency, and logistical plan for bi‑annual AI summits across different Indian cities?
Regular summits would sustain momentum, foster collaboration, and disseminate best practices.
Speaker: A. Revanth Reddy
What metrics and monitoring systems should the AI war room employ to track AI developments and emerging risks?
Effective monitoring requires clear indicators to inform timely policy and operational responses.
Speaker: A. Revanth Reddy

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT


Session at a glance: Summary, keypoints, and speakers overview

Summary

The discussion centered on how artificial intelligence can address Indian citizens’ and organisations’ challenges, with Ankush Sabharwal outlining his company’s evolution from a 2016 conversational-AI startup to the launch of Bharat GPT, which now serves over 1.3 billion users [1-10]. He stressed that AI should serve “Manav” – humans – and be built on safety and inclusivity, noting that AI already underpins everyday apps and that his co-founder Manav Gandotra embodies this human-first ethos [13-22].


When asked whether AI will eliminate jobs, Ankush argued that automation simply speeds problem-solving, allowing businesses to deliver more value and prompting a shift from hourly billing to value-based pricing in India’s IT services sector [29-36][38-48]. Amish highlighted that most global AI models are English-only, whereas Bharat GPT operates in multiple Indian languages, a distinction Ankush described as crucial given that only about 10% of Indians are fluent in English while the majority rely on Hindi and regional tongues [49-56][58-60]. He further portrayed Bharat GPT as a collective national product, developed with contributions from speakers of every Indian language and presented to the Prime Minister in a multilingual demonstration [78].


Both participants agreed that AI will become a daily convenience within five to six years, comparable to the transformation brought by smartphones, and that sovereign AI is essential for reducing dependence on foreign technology [79-80][81-83]. Ankush identified data as the raw material for AI, emphasizing that India’s 1.4-1.5 billion-person population continuously generates massive datasets that can fuel home-grown models [90-99]. He projected that India will emerge as a global hub for AI solutions, with Indian-built applications serving both domestic and international markets [110-119].


The recent AI summit in Delhi was described as highly successful, attracting ministers from the UK, Canada and France and underscoring India’s growing stature in the AI ecosystem [122-133]. Ankush called for a dedicated AI ministry and noted that the government is already ahead by providing free GPUs and funding to innovators, a policy he said no other country matches [134-139][167-176]. Looking ahead, he listed four priority areas as key to sustaining India’s AI ambitions: energy-efficient compute infrastructure, talent development, skilling programmes and foundational model research [142-157].


Regarding sovereign AI, he argued that owning the technology will generate long-term economic benefits and could be exported as a business model to other nations [190-197]. In the rapid-fire segment, Ankush affirmed that India has the most AI users, that Bangalore is likely to become the AI capital, and that founders will remain indispensable despite automation, concluding that Bharat GPT belongs to the whole country and its data is a shared national resource [204-209][213-219][226-227][239-245].


Keypoints


Major discussion points


AI as a tool for inclusive national development – Ankush emphasizes that technology should serve “humans only” and improve everyday life, citing the launch of Bharat GPT and its massive user base (1.3 billion) as evidence of AI’s reach in India [5-9]. He stresses the importance of multilingual capability, noting that most global models are English-only while Bharat GPT supports Indian languages, which is crucial for a country where only ~10% are fluent in English [49-56][58-60].


Job displacement and the need to rethink business models – The conversation acknowledges fears that AI will automate work and eliminate jobs [29-31]. Ankush argues that automation will instead accelerate problem-solving and proposes a shift from hourly/mandate-based pricing to value-based pricing for IT services, because fewer people can deliver the same value [38-47].


Sovereign AI, data ownership, and strategic advantage – Both speakers stress that India must develop “sovereign AI” to avoid dependence on foreign providers [81-84]. Ankush identifies data as the raw material for AI, highlighting India’s huge, continuously generated data pool (population ≈ 1.5 billion) as a competitive edge [88-100].


India’s emerging role as a global AI hub – The participants point to recent AI summits, international ministerial participation, and government initiatives as signs that India is becoming a focal point for AI development [122-131]. They outline the four pillars needed for leadership: energy & compute infrastructure, foundational models, talent/skilling, and application ecosystems [142-156]; and predict that India will soon be the preferred destination for AI solutions worldwide [110-119].


Future outlook and rapid-fire predictions – In the closing rapid-fire segment, Ankush and Amish name Bangalore as the likely “AI capital,” assert that founders will remain indispensable, and reaffirm that Bharat GPT belongs to the whole nation [205-213][239-244].


Overall purpose / goal


The discussion aims to showcase India’s AI ecosystem, particularly the Bharat GPT initiative, as a catalyst for inclusive socioeconomic transformation, to address concerns about job loss by proposing new business models, and to argue for a sovereign, data-driven AI strategy that positions India as a global AI hub.


Tone of the conversation


The tone is largely optimistic and promotional, highlighting achievements, scale, and national pride. It becomes defensive when confronting criticisms about AI risks and sovereign concerns, and shifts to a light-hearted, informal style during the rapid-fire Q&A. Throughout, the speakers maintain an enthusiastic, forward-looking attitude.


Speakers

Speaker 3


– Area of expertise:


– Role / Title:


Amish


– Area of expertise: Journalism / Interviewing


– Role / Title: Interviewer / Journalist (Amish Devagon) [S4]


Ankush Sabharwal


– Area of expertise: Conversational AI, Generative AI, Technology Entrepreneurship


– Role / Title: Co-founder & CEO of CoRover (company behind Bharat GPT) [S5][S6]


Kanha AI Voiceover


– Area of expertise: AI-generated voice assistant for children


– Role / Title: AI voice persona for the Kanha platform


Additional speakers:


(None identified beyond the listed speakers)


Full session reportComprehensive analysis and detailed insights

The conversation opened with Ankush Sabharwal asking what problems Indian citizens and organisations face and how technology, especially artificial intelligence, can address them [1-4]. He positioned his firm, founded in 2016 around conversational AI, as a vehicle for improving “ease of living and ease of doing” through AI [5-6]. The launch of Bharat GPT was presented as an evolutionary step that leveraged the company’s existing conversational-AI platform and a user base that had already exceeded one billion, quickly growing to 1.3 billion users after the generative-AI rollout [7-10].


Sabharwal repeatedly stressed that AI must be human-first, safe, inclusive and invisible, echoing Prime Minister Modi’s call for AI that is moral, accountable, national, accessible and valid [11-13][15-16]. He highlighted that his co-founder is named Manav Gandotra, noting the coincidence with the human-first theme [21-22].


When the discussion turned to employment, Amish warned that AI could trigger a mass exodus of jobs in corporates [26-28]. Sabharwal counter-argued that automation merely accelerates problem-solving, allowing businesses to deliver more value faster [29-33]. He suggested that the traditional per-hour or per-mandate pricing model for Indian IT services will become obsolete because the same output can now be produced by far fewer people [38-41]. Consequently, he advocated a shift to value-based pricing, where clients pay for outcomes rather than time, arguing that this will ultimately increase revenue for service providers while delivering greater benefit to customers [42-48].
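A toy calculation makes this pricing argument concrete (all figures below are hypothetical illustrations, not numbers from the session): under hourly billing, automation shrinks revenue even when the delivered value is unchanged, whereas value-based pricing decouples revenue from headcount.

```python
# Hypothetical sketch of the hourly-vs-value pricing argument.
# Headcounts, rates and the value share are illustrative assumptions.

def hourly_revenue(headcount, hours_per_person, rate):
    """Revenue when clients pay for time spent."""
    return headcount * hours_per_person * rate

def value_revenue(client_value, provider_share):
    """Revenue when clients pay for the outcome's value."""
    return client_value * provider_share

# Before automation: 10 people, 160 hours each, at a $50/hour rate.
before = hourly_revenue(10, 160, 50)       # 80,000

# After automation the same deliverable needs only 2 people, so
# time-based billing collapses even though the value is unchanged.
after_hourly = hourly_revenue(2, 160, 50)  # 16,000

# Value-based pricing: the provider keeps a share of the (unchanged)
# client value, regardless of hours worked.
after_value = value_revenue(client_value=200_000, provider_share=0.4)  # 80,000.0
```

The sketch only shows the direction of the effect Sabharwal describes; the real shift would depend on how outcome value is measured and negotiated.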


Both speakers highlighted the strategic importance of multilingual AI. Amish noted that most global models operate only in English, whereas Bharat GPT supports Hindi and a host of regional languages, a crucial advantage given that only about ten percent of Indians are fluent in English while the majority rely on Hindi or other local tongues [49-56][58-60]. Sabharwal reinforced this by describing the launch of Bharat GPT as a collective national product, citing the Prime Minister’s multilingual demonstration in Tamil and Bengali, which underscored the model’s pan-Indian relevance [78].


Amish repeatedly emphasized the need for sovereign AI to avoid dependence on foreign providers [81-85]. Ankush agreed, noting that data is the raw material for such independence and that India’s population of 1.4-1.5 billion continuously generates massive datasets that can fuel home-grown models [88-100]. He argued that this data advantage, combined with government support, positions India to build sovereign AI without needing large financial investment [84-85].


Regarding policy, Sabharwal praised the Indian government for providing free GPUs and funding to innovators-measures he claimed no other country offers [167-176]. Both participants agreed that a dedicated AI ministry would formalise these efforts; Sabharwal suggested adding a new role to the existing ministerial portfolio [134-139]. Amish explicitly asked for a critical appraisal of the AI policy, but Sabharwal responded that the government is already “ahead” and offered no substantive critique [158-166][167-176].


The recent AI summit in Delhi was portrayed as a landmark success, attracting ministers from the United Kingdom, Canada and France and signalling India’s emergence as a focal point for global AI dialogue [122-133]. Building on this momentum, Sabharwal outlined four pillars required for India to become a world-leading AI hub: (1) energy-efficient compute infrastructure, (2) scalable GPU and cloud resources, (3) talent development and skilling programmes, and (4) foundational model research and application ecosystems [142-157].


Both speakers envisaged AI becoming a routine convenience within the next five to six years, likening its trajectory to the evolution of mobile phones from simple communication devices to all-purpose platforms [79-82]. Sabharwal urged developers to start creating daily-use AI applications immediately, asserting that the technology and talent are already in place [80-82].


The session also introduced “Kanha AI”, a child-focused, screen-free companion built on Bharat GPT. The voice-over positioned it as a learning and emotional-support tool for children aged three to thirteen, emphasizing safety, privacy and responsible-AI design [288-304].


In the rapid-fire segment, Ankush confirmed that India has the most AI users globally and, noting that India already has an IT capital, named Bangalore as the likely “AI capital” [204-219]. He declared that founders are the one role that will never disappear [219-220] and reiterated that Bharat GPT belongs to the nation as a whole, with its data contributed by every citizen [190-197][239-245].


The closing narration, delivered by a separate voice-over (Speaker 3), frames the AI as a friendly, privacy-first companion for children and parents [306-311].


Overall, the dialogue displayed strong consensus on several fronts: AI must be human-centred, inclusive and multilingual; sovereign AI is essential for national self-reliance; and India should transition to value-based pricing while preparing for AI-driven everyday convenience. Points of divergence emerged around the magnitude of investment required for sovereign AI (Ankush’s “we would be better without investing that much money” versus Amish’s emphasis on heavy sovereign investment) [84-85][81-85], the willingness to critique government policy (Amish’s request for a critical view versus Sabharwal’s uncritical praise) [158-166][167-176], and the net impact of automation on employment, with Amish foregrounding potential job loss and Sabharwal focusing on business-model evolution [26-28][29-35].


Key take-aways include: (i) AI should serve humanity, be safe and inclusive; (ii) Bharat GPT’s multilingual capability is a strategic differentiator; (iii) value-based pricing can reconcile AI-driven efficiency with client value; (iv) India’s vast data pool underpins sovereign AI ambitions; (v) government initiatives such as free GPU provision are pioneering, though a dedicated AI ministry is still advocated; (vi) AI is expected to become a ubiquitous daily tool within five to six years; and (vii) Kanha AI exemplifies child-centric, privacy-first AI applications. Unresolved issues remain around the precise scale of job displacement, detailed policy recommendations, funding mechanisms for an AI ministry, metrics for sovereign-AI success, and strategies for meeting the energy demands of large-scale models.


Overall, the discussion underscored a vision of AI that is inclusive, multilingual, sovereign, and integrated into everyday life, while also highlighting challenges around employment, pricing models, and the need for supportive policy and infrastructure.


Session transcript: Complete transcript of the session
Ankush Sabharwal

Our citizens, our organizations in India, what problems do they have? And how can we solve them with technology, with AI? So I think that with technology we can make humankind better. Ease of living and ease of doing, we can achieve with AI. We started the company in 2016, CoRover, in Conversational AI. And our invention of Bharat GPT, I won’t even call it an invention, its launch was evolutionary. We were already in Conversational AI, we had 1 billion plus users, we already had data. When people got ready to use Gen AI, we gave it to them. So now 1.3 billion users are using it. And more than 50 [figure unintelligible in the recording].

Amish

Prime Minister Modi said one thing: MANAV, that is, moral, accountable, national, accessible, valid. What do you think about this?

Ankush Sabharwal

Absolutely, whatever we do, it should be for humans only. If we are doing something for businesses, ultimately those businesses are helping humans, which means we are benefiting citizens. Safety and inclusivity should remain, and I think technology should be the way it should be, which is invisible. And we are saying AI again and again; we are not afraid of it. Some people ask whether jobs will go away, or what the risk is. But we are using AI knowingly or unknowingly: if you look at any app, it has AI intervention, and if there is any product at home, a TV, a fridge, eventually we are bringing AI there too. So technology is for everyone. And Manav is our co-founder.

Manav Gandotra, it’s a big coincidence for us.

Amish

Yes, Manav is your co-founder. But there is a big question that I want to ask on behalf of Indians. AI will not eliminate opportunity; it will redefine opportunity. But a question is being raised again and again: AI will come, jobs will go, a mass exodus will happen in corporates. What do you think about it?

Ankush Sabharwal

Sir, everyone is saying that AI is automating work, and that is why we think jobs will go. But what does automating mean? The problems we are solving with technology are getting solved quickly. So why don’t we think that we will solve problems with AI quickly, and solve more problems? If companies provide solutions for businesses, those solutions will be made quickly and well, and those businesses will make more solutions to give more benefit to enterprises. So I think the work will be done, but maybe the business model will have to change. I think the effort-based IT services in India…

Amish

What do you mean by the business model will have to be changed?

Ankush Sabharwal

I think most IT services in India are rated per man-day or per hour. Rates are there, right? $20 per hour, $40 per hour. So if I do the same work, where earlier 100 people used to work, now maybe 10 people will do it, even 2 people will do it, right? Under that business model, I am giving the same value, but if my rate is fixed per man-day, then I will get less money as a company, even though I am giving the same value to my client. Right? So you have to discuss this with them: I will provide you more value, so don’t give me per-hour or per-day pricing; we will do value-based pricing. So what will happen with that?

Our clients will get more solutions, they will get more benefits, and eventually we will be able to make more money.

Amish

An interesting fact is that most of the AI models in the world work in English, but your AI model works in Indian languages. This is very, very important for India, because we have a lot of languages here: two languages in Bihar, three languages in UP, two languages in Tamil Nadu. How do you see this? Do you think this is very important to grow the Indian AI story? What’s your view?

Ankush Sabharwal

Absolutely. In India, I think only 10% of people know or speak English. Even Hindi, only around 40-45% know. [remainder of the answer unintelligible in the recording]

Amish

Very great, very, very good. Now you see: the Prime Minister is speaking in Hindi, and the person of Tamil Nadu wants to hear him, so it is coming in their language; the person of Assam wants to hear, it is coming in their language; the person of Gujarat wants to hear, it is coming in their language. So this is a kind of convenience.

Ankush Sabharwal

Absolutely. When we met the Honorable Prime Minister in JIPA, I was speaking in Hindi, and what I was saying was coming out in Tamil and Bengali at the same time. So if we want to give the benefit of technology to everyone, it becomes a collective product. I think Bharat GPT is not ours; it belongs to the whole of India. Everyone has contributed by giving their language and voice, so we are helping them back.

Amish

Do you think AI will enable people to do their daily work and add convenience to their lives? We all know that when the mobile phone came, it was a communication machine; today it is a convenience, with all the apps and everything. So will this AI also become a convenience in the next five or six years?

Ankush Sabharwal

I think it will happen in five or six years; in fact, I think it will happen from today. So many companies have come here, and you are all seeing which is the better technology, which is the better platform. I think your next session should be just about use cases. Now we have technology and talent, and people are ready to use AI products. So from today, I think people will have to start making daily-use apps and daily-use products, so that everyone will benefit.

Amish

Sovereign AI means we are not dependent on any other country. Prime Minister Modi was also talking about this again and again. In such a situation, sovereign AI is very important for us to understand. Why did you make this issue so big?

Everyone is talking about sovereign AI.

Ankush Sabharwal

Absolutely. And let me tell you the truth. We would be better without investing that much money.

Amish

Okay. Why do you think so?

Ankush Sabharwal

What raw material is needed for AI?

Amish

Data.

Ankush Sabharwal

Yes. Data. And right now, we are producing data. Even audiences are producing data. You produce content on your channel, and not just you speaking and creating content; the people who are listening are also creating content. By virtue of this beautiful number of people, a population of 1.4, 1.5 billion, we are all producing data just by living life. So we have this much data. So will it make a model? It will create a platform. And we Indians are very aspirational: we want to grow fast, we want to use every technology. Even for foreign apps, most of the users are in India. So if we create a platform app, it will be used in India too.

Amish

So what direction do you see the AI story of India going in? And in the next two years, where do you see it standing? What is your view? Be very very rational.

Ankush Sabharwal

First of all, I think the whole world has to adopt AI. Let’s be the users of AI. Enough of the platform; ask the platform where it is solving a real-world problem. If we solve real-world problems with AI, I think India will have a huge contribution in creating real-world applications with AI. The AI applications we make, the products we make, will of course be used in India. I think India will be considered a hub for AI solutions. If someone anywhere in the world wants AI solutions, I think India will be the preferred choice in the next few months.

Amish

Right now in Delhi, all the CEO heads of big tech companies are here. Do you think AI Summit has been successful?

Ankush Sabharwal

Yes, absolutely. When I was coming here, a foreigner was telling me that he has attended many summits in the US and UK, but he has never seen a better summit than this. This is big, quite big. Absolutely. The leaders are all here, so we meet daily. The AI Minister of the UK is here; we met him. The AI Minister of Canada is here; we met him. The AI Minister of France is here. The AI ministries are all here. So I think India is now a focal point.

Amish

So, there should be an AI Ministry in India too.

Ankush Sabharwal

I think it should come soon. There are already ministers, Ashwini Vaishnaw sir; we should add one more role to him, though they already have a lot of roles.

Amish

But in the next 3-5 years, what are the main targets for India to become the first AI country? What’s your view on this?

Ankush Sabharwal

I think, first of all, energy. Our brain is very efficient; it runs on only 20 watts. But the GPU doesn’t run on 20 watts; it runs on 1000 watts. So to run any AI model, we need a lot of infra and energy. The Honorable Prime Minister’s vision is working well; it is visible and it is working. In energy, infrastructure, compute, as you saw in the IndiaAI Mission, how many GPUs are coming. And after that, foundational models have also been launched, and applications are being launched. So these are the 4-5 things. Talent: I think there should be a sector for AI skilling; I think they are also doing it in the education department or MSME.

So all the other factors that are important for AI are, I think, being focused on in India.

Amish

Please answer this question a bit critically. What would you say about the AI policy of the Government of India? What do you think? They are on the right track, but what should they change? What is your take on this? And be critical; give the government advice. Sir, may I

Ankush Sabharwal

ask… I don’t think there is any country in the world whose government has given its citizens… In India’s context, yes. First, I am saying that no country has given its technologists, innovators, and entrepreneurs free GPUs. And on top of that, after GPUs were given, money was given to make models, and doors were opened for our applications to be adopted. I think whatever we entrepreneurs and techies want, the government is already giving, and I think they are thinking ahead. I mean, whatever policy they launch, I find I was already thinking of it but hadn’t articulated it yet; before the request even reaches the government, they launch it. They are ahead.

The Indian government is already ahead. This is a

Amish

politically correct answer. I told you to give advice, and you are saying they are doing everything right. Okay, I will

Ankush Sabharwal

give advice. Now stop scolding us. Okay. Fund those who use our applications. Okay. That’s a

Amish

good one. That’s a very, very good one. If India is successful in making sovereign AI, will India get a lot of benefit in the long term? Absolutely. If

Ankush Sabharwal

sovereign AI comes to India, we’ll have the control. We’ll get the benefit. But see it from a business-model angle: we can provide sovereign AI to other countries as well, and that work has started. We’re making our own sovereign AI for ourselves, and we’re making it for others as well. Okay. We’ve

Amish

come to the end of the conversation. So let’s do a rapid fire round. Quickly. A few questions. Which country has the most AI users in the world? India. India.

Ankush Sabharwal

India?

Amish

Which city do you think will be the AI capital? We have an IT capital; which city will be

Ankush Sabharwal

AI capital? I think

Amish

it will be Bangalore. I can be biased. Bangalore. You are saying Bangalore. Okay. Now,

Ankush Sabharwal

a personal question. Durandhar film or cricket match? Neither. Neither? So, even after AI, which

Amish

job will not end? Any job, any job? One word: which will not end? Yes, I think it will be founders. Founders. Wow. So who will control AI in the world after 50 years? I think the fear is that AI will not

Ankush Sabharwal

control but we will control. I believe that AI was created

Amish

by human intelligence. So that’s why human intelligence will control. Well said. After AI

Ankush Sabharwal

comes,

Amish

whose job will be easier? Doctor’s or engineer’s? What do

Ankush Sabharwal

you think?

Amish

The engineer will be biased because

Ankush Sabharwal

he is making it himself. Yes, he

Amish

will be biased because he is making it himself. Last question. What do you want to say about

Ankush Sabharwal

Bharat GPT? Do you think that the time has come for Bharat GPT? Absolutely. I am saying that it is not ours, it is of the whole country. All the

Amish

data in it is of the whole country. All the languages that have contributed to it are

Ankush Sabharwal

of the whole country. We have not given any money to

Amish

anyone to procure data. So it is… Now we are giving it for free. It is on Hugging Face, so all of us can use it freely. What would you like to

Ankush Sabharwal

say to the critics? Sir, we don’t have time to think about them. We don’t have time. You don’t want to waste your time? Yeah. One line, what will be the one line for them? Yes, I am saying: stop making a fool of yourself. Okay, that’s a good one. A big round of applause. Thank you so much. You came here

Amish

and spoke for yourself. Thank you. Pleasure talking to

Ankush Sabharwal

you. Thank you so much. Thank you. Amish ji, thank you very much. And thank you for

Amish

keeping my words. You actually bombarded him.

Ankush Sabharwal

Yeah,

Amish

yeah, please, please, please. Come, come, come. Come. This

Ankush Sabharwal

is a lot of

Amish

magic. I will teach you new things. And I will guide you to take your ideas to the reality. You can share everything you want with me. Whether it is about school or your fear. I will

Speaker 3

always listen to you like a true friend. Whether it is an alarm to wake up early or to get stuck in school’s homework. Just call this friend of yours. We will solve everything together. And mummy, papa, you don’t have to worry. I am a friend of the children. But I am also a companion for your upbringing. You have full control over me through my parent

Kanha AI Voiceover

governance app. I have been made with the safety, privacy and responsible-AI techniques of CoRover.ai and Bharat GPT. The safety of children is my biggest responsibility. So come on, let’s start a smart and safe childhood. I am Kanha, your new friend. Radhe Radhe! There is no mobile screen in my world; there are only talks, stories and a lot of magic. I will teach you new things, and I will guide you to take your ideas to reality. You can share everything with me, whether it is school talk or a fear in your mind; I will always listen to you like a true friend. Whether it is an alarm to wake up early in the morning or getting stuck on school homework, just call your friend.

We will solve everything together. Ladies and gentlemen, in addition to the launch of Kanha AI, we would also like to request Sudheesh ji, who is… from IRCTC, and it’s a privilege to have you on stage, sir, to unveil Kanha AI, an amazing buddy for 3 to 13 years of age, with whom your child can interact without being given a screen. Let me also share a story behind this: it was conceptualized in 10 days’ time by CoRover BharatGPT for this launch. Thank you.


Related Resources: Knowledge base sources related to the discussion topics (14)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Co‑founder of Ankush Sabharwal’s company is named Manav Gandotra.”

The knowledge base explicitly lists Manav Gandotra as the co-founder of the firm discussed [S4].

Confirmed (medium)

“AI must be human‑first, safe, inclusive and invisible, echoing Prime Minister Modi’s call for AI that is moral, accountable, national, accessible and valid.”

Prime Minister Modi has advocated for responsible, human-centered AI principles, emphasizing safety, inclusivity and ethical accountability, which aligns with the report’s description [S85] and [S86].

Confirmed (high)

“AI could trigger a mass exodus of jobs in corporates.”

Experts such as Geoffrey Hinton and AI impact discussions have warned that AI may cause widespread unemployment and large-scale job displacement [S92] and [S91].

Confirmed (medium)

“Traditional per‑hour or per‑mandate pricing for Indian IT services will become obsolete; a shift to value‑based (outcome‑based) pricing is needed.”

The knowledge base notes a broader industry move from input-based to outcome-based pricing models for AI services [S12].

External Sources (93)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S3
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S4
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — -Amish Devagon: Role/Title not explicitly mentioned, appears to be an interviewer or journalist conducting the discussio…
S5
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Sudeesh VC Nambiar
S7
Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit — -Moderator: Session moderator (no specific expertise, role, or title mentioned beyond moderating the discussion) And it…
S8
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — Namrata Maheshwari:Next, the organizers to help us see the speakers on the screen. The two speakers joining online. Oh, …
S9
We are the AI Generation — ## Introduction and Context Doreen Bogdan Martin: Thank you. Good morning and welcome to Geneva for the AI for Good Glo…
S10
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Technology should be invisible to users while improving experiences and outcomes
S11
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Advising country partners to consider environmental implications in digitalization is a key recommendation. Technology s…
S12
The Future of the Internet: Navigating the Transition to an Agentic Web — Business models must evolve from input-based to outcome-based pricing Example of customer experience solutions being pr…
S13
Panel Discussion Data Sovereignty India AI Impact Summit — By domestic, which is because in the age of AI, I strongly believe that the sovereign AI compute infrastructure has beco…
S14
Shaping the Future AI Strategies for Jobs and Economic Development — “They are giving GPUs available at 65 rupees per month.”[119]. “so there are quite a few no no it’s public it’s all publ…
S15
Open Forum #33 Building an International AI Cooperation Ecosystem — Kurbalija argues that AI has transformed from being a mysterious technology controlled by a few developers and top labs …
S16
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Just one thing I want to just say. Watch on 21st, the PM is inaugurating a new JV which HCL is announcing with Foxconn. …
S17
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — For law enforcement, this means we can strengthen how we prevent, detect, and respond, but only if we build the right pr…
S18
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S19
AI and the moral compass: What we can do vs what we should do — If technology reshapes what we can do, moral education must reshape how we decide. Ethics cannot be outsourced to compli…
S20
Open Forum: A Primer on AI — In conclusion, AI holds great promise in reshaping industries and driving innovation. It has the potential to create new…
S21
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Fink acknowledged that while some jobs may be displaced, new opportunities are simultaneously created. Both speakers agr…
S22
AI could replace 2.4 million jobs in US by 2030| Forrester’s report — According to a recent report from Forrester, an influential analyst firm, it is projected that Generative AI will replac…
S23
AI for Social Good Using Technology to Create Real-World Impact — The language accessibility challenge emerged as a critical theme throughout the discussion. India’s linguistic diversity…
S24
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S25
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — India’s deployment of technology as an inclusive, developmental resource was highlighted. Here, the national AI strategy…
S26
Open Internet Inclusive AI Unlocking Innovation for All — Anandan presented concrete evidence of India’s success with this approach, highlighting multiple companies achieving bre…
S27
Why science metters in global AI governance — “But if your potential or probable outcome is the end of jobs, then you need to think about universal basicism.”[113]. “…
S28
How AI Drives Innovation and Economic Growth — Of course, at the same time, on the flip side, AI also creates a number of challenges. One of them is there will be some…
S29
Welcome Address — This comment introduces a major policy position that distinguishes India’s approach from other major powers. It shifts t…
S30
Designing Indias Digital Future AI at the Core 6G at the Edge — This sovereignty imperative, according to Saluja, stems from both economic and strategic considerations. The token econo…
S31
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S32
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And I have a deep belief that the entrepreneurial ecosystem in India is going to deliver some incredible global leaders …
S33
https://dig.watch/event/india-ai-impact-summit-2026/need-and-impact-of-full-stack-sovereign-ai-by-corover-bharatgpt — Absolutely, when we met the Honorable Prime Minister in JIPA I was speaking in Hindi and when he was very delicate, what…
S34
Keynote-Demis Hassabis — This discussion features a keynote address by Sir Demis Hassabis, co-founder and CEO of Google DeepMind and Nobel laurea…
S35
AI Infrastructure and Future Development: A Panel Discussion — And I think it’s limitless in terms of at least my career. The opportunity for transformation will be enough to keep me…
S36
Open Forum #33 Building an International AI Cooperation Ecosystem — Kurbalija argues that AI has transformed from being a mysterious technology controlled by a few developers and top labs …
S37
AI Algorithms and the Future of Global Diplomacy — I do think this is going to happen over the next five years. And I don’t believe for a second that this is only going to…
S38
Agents of Change AI for Government Services & Climate Resilience — Srinivas Tallapragada introduced an important distinction between strategic sovereignty and technical sovereignty that p…
S39
UK AI plan calls for AI sovereignty and bottom-up developments — The UK government has launched an ambitiousAI Opportunities Action Planto accelerate the adoption of AI to drive economi…
S40
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S41
The impact of AI on jobs and workforce — The ILO’s webinar was triggered by the recent impact of ChatGPT on our society and jobs. OpenAI’s ChatGPT, in particular…
S42
AI for Social Empowerment_ Driving Change and Inclusion — This discussion focused on the impact of artificial intelligence on labor markets and employment, featuring perspectives…
S43
Future of work — AI technology has the potential to be misused by employers in a variety of ways. For example, some employers may use AI-…
S44
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Overall, the session provided a nuanced understanding of the impact of digitalisation on employment and the critical rol…
S45
From India to the Global South_ Advancing Social Impact with AI — Low level of disagreement with high convergence on AI’s transformative potential. Differences are primarily tactical rat…
S46
Driving Indias AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S47
Secure Finance Risk-Based AI Policy for the Banking Sector — -India’s Strategic AI Positioning: Discussion centered on how India should position itself globally in AI governance, le…
S48
Building Indias Digital and Industrial Future with AI — These key comments fundamentally elevated the discussion from surface-level policy rhetoric to deep, nuanced analysis of…
S49
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S50
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Central to Taneja’s argument is India’s unique positioning for AI leadership. He identified key advantages: India as the…
S51
Keynote Adresses at India AI Impact Summit 2026 — Thank you so much, Mr. Sundar Pichai, for all those motivating and inspiring words. And ladies and gentlemen, today mark…
S52
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — The importance of children’s rights is acknowledged, with a recognition that the protection of children is a shared goal…
S53
eXtensible Access Control Markup Language (XACML) Version 2.0 — The <Resource> element contains one or more attributes of the resource to which the subject (or 882 subjects ) has…
S54
Diplomatic policy analysis — Global collaboration:Policy analysis helps identify shared interests and opportunities for cooperation, fostering consen…
S55
Beyond the New Public Diplomacy — Political correctness and professional survival instincts are silencing most professional critics, who even tend to stay…
S56
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Voice technology and multilingual capabilities were highlighted as crucial horizontal solutions for healthcare AI in Ind…
S57
Empowering India & the Global South Through AI Literacy — The programme has been implemented across multiple states, with specific mentions of Odisha, Kerala, and Jharkhand. The …
S58
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — The panelists concluded with consensus around AI’s transformative potential for financial inclusion. Suvendu emphasized …
S59
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups torecalibrateworkforce policies. Survey data indicates that 33% of founders antici…
S60
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — While AI has streamlined and facilitated certain programming tasks, human developers are still required for further deve…
S61
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “An interesting fact is that most of the AI models in the world work in English”[41]. “But your AI model works in Indian…
S62
Open Internet Inclusive AI Unlocking Innovation for All — “I think, firstly, it’s important that India is not trying to get to AGI”[30]. “We need to uplift 100 million farmers, a…
S63
Inclusive AI Starts with People Not Just Algorithms — Artificial intelligence | Social and economic development
S64
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S65
AI could replace 2.4 million jobs in US by 2030| Forrester’s report — According to a recent report from Forrester, an influential analyst firm, it is projected that Generative AI will replac…
S66
AI, automation, and human dignity: Reimagining work beyond the paycheck — When AI and automation threaten to displace workers, they threaten all of these dimensions of human experience. Recentre…
S67
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — This comment is exceptionally insightful because it cuts through the doomsday rhetoric with concrete data, reframing the…
S68
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S69
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — -Data Sovereignty and Monetization: The conversation addressed the importance of data ownership, with references to the …
S70
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — This discussion features an 8-year-old prodigy presenting their perspective on global AI development and India’s strateg…
S71
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — The discussion highlighted India’s emerging role as a global telecommunications leader. Dr. Upadhyay’s willingness to sh…
S72
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Economic | Development | Infrastructure. Five layers identified: application, model, chip, infrastructure, and energy. I…
S73
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Algorithms are not just applications of mathematical codes that support the digital world. They are part of a complex po…
S74
How AI Drives Innovation and Economic Growth — In rapid-fire predictions for 2035, panellists identified both opportunities and risks:
S75
Harnessing Collective AI for India’s Social and Economic Development — The discussion began by examining whether societal problems stem from lack of intelligence or coordination failures. Pro…
S76
Artificial Intelligence & Emerging Tech — Issues mentioned include transparency, explainability, discrimination, data governance, etc.
S77
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Anshul Sonak: Yeah, my minute, I mean, this requires a balanced, responsible public-private partnership and a great lead…
S78
Conversational AI in low income & resource settings | IGF 2023 — Prominent figure Ashish Atreja advocates for a global thought leadership group on generative AI in healthcare. He believ…
S79
Sticking with Start-ups / DAVOS 2025 — Bhatnagar explains how AI is transforming content creation and enabling new business models. He highlights the reduced c…
S80
Multistakeholder Partnerships for Thriving AI Ecosystems — We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadwani AI Global. Nakul is a mission-driven te…
S81
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Audience: I am dealing. I’m a professor of ethics. And I’m dealing with AI and ethics in some years. And I’m struggling a…
S82
ChatGPT and the rising pressure to commercialise AI in 2026 — The moment many have anticipated with interest or concern has arrived. On 16 January, OpenAI announced the global rollou…
S83
Open AI holds a leading position in AI race despite challenges — Open AI, a startup supported by Microsoft, has established itself as a leading player in the race to dominate the future…
S84
AI in education: Leveraging technology for human potential — Kevin Mills: Hello. It’s an incredible honor to be here with you today. The last UN gathering I attended was almost exac…
S85
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Strong consensus emerged around human-centered AI principles. Austria’s State Secretary Alexander Perol articulated the …
S86
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — The Prime Minister advocates for the responsible development and use of artificial intelligence. This argument stresses …
S87
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — Thank you. Firstly, it’s been one of a kind of an experience to be part of this AI impactor. In fact, I’ve been around t…
S88
https://dig.watch/event/india-ai-impact-summit-2026/from-india-to-the-global-south_-advancing-social-impact-with-ai — AI is the new electricity. The question is who has the switch? And today that’s what we will be discussing. You know, if…
S89
https://dig.watch/event/india-ai-impact-summit-2026/ai-collaboration-across-borders_-india-israel-innovation-roundtable — This is one of the premier AI startups from Israel. I met him at a family office conference in the Bay Area sometime bac…
S90
AI job interviews raise concerns among recruiters and candidates — As AI takes on a growing share of recruitment tasks, concerns are mounting that automated interviews and screening tools c…
S91
Comprehensive Discussion Report: The Future of Artificial General Intelligence — -Labor Market Disruption and Economic Impact: The conversation extensively covered the potential displacement of jobs, p…
S92
AI pioneer warns of mass job losses — Geoffrey Hinton, often called the godfather of AI, has warned that the technology could soon trigger mass unemployment, pa…
S93
Keynote-Bejul Somaia — “In 2008, a small number of entrepreneurs and investors in India looked at a world with very limited internet penetratio…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Ankush Sabharwal
6 arguments · 161 words per minute · 1868 words · 694 seconds
Argument 1
AI should serve humans, be invisible technology, and Bharat GPT is an evolutionary launch (Ankush Sabharwal)
EXPLANATION
Ankush emphasizes that AI must be designed primarily for human benefit and operate behind the scenes without being intrusive. He also describes the launch of Bharat GPT as a major evolutionary step for their conversational AI platform.
EVIDENCE
He states that AI should be for humans only, safe, inclusive and invisible, noting that AI is already embedded in many everyday products and that they are not afraid of it [13-16]. He then describes Bharat GPT’s launch as evolutionary, mentioning their prior conversational AI experience, a user base of over a billion, and the rapid adoption of the new model [6-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for AI to operate invisibly and prioritize human benefit is emphasized in [S10] and reinforced in [S4], which call for technology that is safe, inclusive and behind the scenes.
MAJOR DISCUSSION POINT
Human‑centric AI and Bharat GPT launch
AGREED WITH
Amish
Argument 2
Automation will reshape business models, moving from hourly rates to value‑based pricing (Ankush Sabharwal)
EXPLANATION
Ankush argues that AI‑driven automation will reduce the number of people needed for tasks, making traditional per‑hour billing unsustainable. He proposes shifting to value‑based pricing to reflect the higher efficiency and client value delivered.
EVIDENCE
He explains that most Indian IT services are priced per hour or per mandate, and that automation could reduce a team of 100 people to just a few, which would lower revenue under the current model [38-42]. He then suggests moving to value-based pricing, promising greater client benefits and higher company earnings [43-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from input-based to outcome-based (value-based) pricing in AI-driven services is discussed in [S12], supporting the claim that automation will drive new business models.
MAJOR DISCUSSION POINT
Business model transformation due to AI automation
AGREED WITH
Amish
DISAGREED WITH
Amish
Argument 3
Supporting Hindi and regional languages is crucial; Bharat GPT enables multilingual access for the majority non‑English population (Ankush Sabharwal)
EXPLANATION
Ankush points out that only a small fraction of Indians are proficient in English, while the majority speak Hindi or other regional languages. He claims Bharat GPT addresses this gap by supporting multiple Indian languages, thereby reaching the vast non‑English user base.
EVIDENCE
He notes that only about 10% of people know English, while 90% speak Hindi or other regional languages, highlighting the need for language-inclusive AI [58-60]. He also recounts speaking with the Prime Minister in Hindi while the Prime Minister responded in Tamil and Bengali, illustrating the multilingual capability of their technology [78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of multilingual AI for India, given that most global models are English-only, is highlighted in [S4] and [S23], underscoring the need for Hindi and regional language support.
MAJOR DISCUSSION POINT
Multilingual AI for Indian population
AGREED WITH
Amish
Argument 4
Sovereign AI is needed; India’s massive data generation can fuel independent models, reducing reliance on foreign AI (Ankush Sabharwal)
EXPLANATION
Ankush contends that data is the raw material for AI and that India’s huge population generates vast amounts of data, enabling the creation of sovereign AI models. This would lessen dependence on foreign AI providers and support national self‑reliance.
EVIDENCE
He identifies data as the essential raw material for AI and states that India’s 1.4-1.5 billion people are continuously producing data through content creation and consumption [84-88]. He further explains that this data can be used to build models and platforms, emphasizing India’s aspirational drive to grow fast with its own data [90-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sovereign AI and the role of India’s massive data generation as raw material are examined in [S4], [S13], and [S16], which argue for independent domestic models.
MAJOR DISCUSSION POINT
Data‑driven sovereign AI for self‑reliance
AGREED WITH
Amish
Argument 5
Indian government is proactive: providing free GPUs, funding, and staying ahead of other nations in AI support (Ankush Sabharwal)
EXPLANATION
Ankush praises the Indian government for offering free GPUs and financial support to innovators, positioning the country ahead of many others in AI policy. He also highlights the presence of foreign AI ministers at an Indian AI summit as evidence of India’s leadership.
EVIDENCE
He claims no other country gives free GPUs and money to technologists, noting that the Indian government already provides these resources and launches policies before they are requested, indicating a forward-looking approach [167-176]. He adds that the recent AI summit attracted ministers from the UK, Canada, France and others, underscoring India’s emerging role as an AI hub [122-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Government provision of low-cost GPU access and proactive AI policy are documented in [S14] and reinforced in [S4], indicating India’s ahead-of-the-curve support.
MAJOR DISCUSSION POINT
Government support and leadership in AI ecosystem
AGREED WITH
Amish
DISAGREED WITH
Amish
Argument 6
AI will become a daily convenience within 5‑6 years; focus should shift to building everyday‑use apps (Ankush Sabharwal)
EXPLANATION
Ankush predicts that AI will become as ubiquitous as smartphones within the next five to six years. He urges developers to start creating daily‑use applications now to capitalize on this imminent shift.
EVIDENCE
He states that the transition to everyday AI will happen in five to six years and that many companies are already showcasing superior platforms, implying that the market is ready for daily-use AI products [80-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rapid democratization of AI and its trajectory toward everyday use, likened to a new convenience platform, is noted in [S15].
MAJOR DISCUSSION POINT
AI as everyday convenience
AGREED WITH
Amish
A
Amish
6 arguments · 153 words per minute · 969 words · 378 seconds
Argument 1
AI must be moral, accountable, national, accessible, and valid (Amish)
EXPLANATION
Amish cites Prime Minister Modi’s five guiding principles for AI, emphasizing that AI systems should be ethical, accountable, serve national interests, be widely accessible, and maintain validity.
EVIDENCE
He repeats Modi’s five attributes (moral, accountable, national, accessible, valid), directly quoting the Prime Minister’s statement [11].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Modi’s five AI principles align with ethical AI guidelines that stress morality, accountability, national relevance, accessibility and validity, as outlined in [S18] and [S19].
MAJOR DISCUSSION POINT
Ethical and national standards for AI
AGREED WITH
Ankush Sabharwal
Argument 2
AI may displace jobs, redefining opportunities and causing corporate shifts (Amish)
EXPLANATION
Amish raises concerns that AI could lead to job losses and corporate restructuring, but frames this as a redefinition of opportunities rather than an outright elimination of work.
EVIDENCE
He asks whether AI will cause jobs to disappear and a mass exodus in corporates, highlighting the recurring question about AI’s impact on employment [26-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Potential job displacement and the redefinition of opportunities are discussed in [S20], with further analysis of net effects in [S21] and quantitative estimates in [S22].
MAJOR DISCUSSION POINT
Job displacement and opportunity redefinition
AGREED WITH
Ankush Sabharwal
DISAGREED WITH
Ankush Sabharwal
Argument 3
Multilingual AI is essential for India’s linguistic diversity and growth of the AI story (Amish)
EXPLANATION
Amish stresses that most global AI models operate only in English, whereas India’s linguistic diversity requires AI that supports multiple Indian languages to drive the nation’s AI narrative.
EVIDENCE
He notes that most AI models work in English, but their model works in Indian languages, and lists the many languages spoken across Indian states, underscoring the importance of multilingual capability [49-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critical need for Indian-language AI, given linguistic diversity, is emphasized in [S4] and [S23].
MAJOR DISCUSSION POINT
Importance of multilingual AI for India
AGREED WITH
Ankush Sabharwal
Argument 4
Emphasis on avoiding dependence on other countries, echoing Modi’s call for sovereign AI (Amish)
EXPLANATION
Amish repeats the call for sovereign AI, emphasizing that India must not rely on foreign AI technologies and should develop its own independent capabilities.
EVIDENCE
He repeatedly states that sovereign AI means not being dependent on any other country, echoing Modi’s repeated messages about self-reliance [81-83].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for sovereign AI to reduce foreign dependence are echoed in [S4], [S13], and [S16].
MAJOR DISCUSSION POINT
Sovereign AI and self‑reliance
AGREED WITH
Ankush Sabharwal
DISAGREED WITH
Ankush Sabharwal
Argument 5
A critical view suggests the need for an AI ministry and more targeted policy advice (Amish)
EXPLANATION
Amish proposes the creation of a dedicated AI ministry and calls for more specific policy guidance, indicating that current structures may need refinement to support AI development effectively.
EVIDENCE
He asks whether there should be an AI ministry in India [134] and earlier requests a critical assessment of government AI policy, suggesting the need for focused governance [158-166].
MAJOR DISCUSSION POINT
Need for AI‑specific governance structures
AGREED WITH
Ankush Sabharwal
DISAGREED WITH
Ankush Sabharwal
Argument 6
AI’s trajectory will mirror the mobile phone’s evolution, adding convenience to daily life (Amish)
EXPLANATION
Amish draws an analogy between the evolution of mobile phones—from simple communication devices to all‑purpose tools—and the expected path of AI becoming a daily convenience.
EVIDENCE
He compares AI to the mobile phone’s transformation, stating that just as phones became a convenience platform, AI will similarly embed itself in everyday life over the next five to six years [79].
MAJOR DISCUSSION POINT
AI as a convenience driver
AGREED WITH
Ankush Sabharwal
S
Speaker 3
1 argument · 178 words per minute · 69 words · 23 seconds
Argument 1
Kanha AI is presented as a safe, privacy‑focused, screen‑free friend for children, built on Bharat GPT (Speaker 3)
EXPLANATION
Speaker 3 introduces Kanha AI as a child companion that operates without a screen, emphasizing safety, privacy, and its foundation on Bharat GPT technology.
EVIDENCE
The speaker describes Kanha as a friend for children, safe, privacy-focused, and built on Bharat GPT, positioning it as a screen-free companion for kids [280-287].
MAJOR DISCUSSION POINT
Introduction of child‑focused AI product
K
Kanha AI Voiceover
1 argument · 54 words per minute · 232 words · 255 seconds
Argument 1
The voiceover stresses child safety, learning support, and no‑screen interaction, highlighting the product’s purpose (Kanha AI Voiceover)
EXPLANATION
The Kanha AI voiceover outlines the product’s safety and privacy features, its role in education and companionship, and its design as a screen‑free interaction tool for children.
EVIDENCE
The voiceover lists safety, privacy, no screen, learning assistance, and invites children to interact, describing the product as a smart, safe childhood companion and detailing its launch context [288-300].
MAJOR DISCUSSION POINT
Features and purpose of Kanha AI
Agreements
Agreement Points
AI should be human‑centric, safe, inclusive and ethically grounded
Speakers: Ankush Sabharwal, Amish
AI should serve humans, be invisible technology, and Bharat GPT is an evolutionary launch (Ankush Sabharwal)
AI must be moral, accountable, national, accessible, and valid (Amish)
Both speakers stress that AI must be designed primarily for the benefit of people, be safe, inclusive and adhere to ethical principles such as morality and accountability [13-16][11].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with emerging AI governance guidelines emphasizing ethics, safety and inclusivity, as highlighted in discussions on outdated governance thinking and the need for trust in AI deployments [S36][S44][S46].
Multilingual AI is essential for India’s linguistic diversity
Speakers: Ankush Sabharwal, Amish
Supporting Hindi and regional languages is crucial; Bharat GPT enables multilingual access for the majority non‑English population (Ankush Sabharwal)
Multilingual AI is essential for India’s linguistic diversity and growth of the AI story (Amish)
Both highlight that most Indians are not English-speaking and that AI models supporting Hindi and regional languages are critical for reaching the majority of the population [58-60][49-57][78].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of multilingual, voice-first AI for India’s diverse languages has been underscored in health and finance pilots and in broader inclusive AI initiatives across the Global South [S45][S56][S57][S58].
Sovereign AI and data‑driven self‑reliance are strategic priorities
Speakers: Ankush Sabharwal, Amish
Sovereign AI is needed; India’s massive data generation can fuel independent models, reducing reliance on foreign AI (Ankush Sabharwal)
Emphasis on avoiding dependence on other countries, echoing Modi’s call for sovereign AI (Amish)
Both agree that India must develop its own AI capabilities to avoid dependence on foreign providers, leveraging the country’s huge data generation as the raw material for sovereign models [84-88][81-83][190-197].
POLICY CONTEXT (KNOWLEDGE BASE)
Government frameworks distinguishing strategic from technical sovereignty and advocating full-stack sovereign AI reflect India’s policy push for data-driven self-reliance [S38][S40][S48][S49].
India needs a dedicated AI governance structure (AI ministry) and proactive policy support
Speakers: Ankush Sabharwal, Amish
Indian government is proactive: providing free GPUs, funding, and staying ahead of other nations in AI support (Ankush Sabharwal)
A critical view suggests the need for an AI ministry and more targeted policy advice (Amish)
Both see the necessity of a focused governmental body for AI and commend the Indian government’s forward-looking policies such as free GPU access and early policy launches [134][135-139][167-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for a dedicated AI ministry echo analyses that current governance models are outdated and that coordinated policy is needed to steer AI development and avoid regulatory gaps [S36][S47][S48][S49].
AI will become a daily convenience within the next five to six years
Speakers: Ankush Sabharwal, Amish
AI will become a daily convenience within 5‑6 years; focus should shift to building everyday‑use apps (Ankush Sabharwal)
AI’s trajectory will mirror the mobile phone’s evolution, adding convenience to daily life (Amish)
Both predict that AI will embed itself in everyday life in the near term, similar to how mobile phones evolved from communication tools to all-purpose platforms [79][80-82].
POLICY CONTEXT (KNOWLEDGE BASE)
Forecasts that AI-driven services will be commonplace within a half-decade appear in forward-looking panels discussing rapid adoption timelines for AI in everyday life [S37].
AI will reshape employment and business models, requiring new pricing approaches
Speakers: Ankush Sabharwal, Amish
Automation will reshape business models, moving from hourly rates to value‑based pricing (Ankush Sabharwal)
AI may displace jobs, redefining opportunities and causing corporate shifts (Amish)
Both acknowledge that AI will change the nature of work and the economics of service delivery, prompting a shift from traditional hourly billing to value-based models and a redefinition of job opportunities [26-28][38-48].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple studies and policy briefs note AI’s disruptive impact on jobs, the need for new labor regulations, and novel pricing models for AI-enabled services [S41][S42][S43][S44][S60].
Similar Viewpoints
Both see AI’s near‑term impact as a mass‑market convenience platform that will transform daily routines, akin to the evolution of the mobile phone [79][80-82].
Speakers: Ankush Sabharwal, Amish
AI will become a daily convenience within 5‑6 years; focus should shift to building everyday‑use apps (Ankush Sabharwal)
AI’s trajectory will mirror the mobile phone’s evolution, adding convenience to daily life (Amish)
Both stress the strategic importance of building indigenous AI capabilities to ensure national self‑reliance [81-83][190-197].
Speakers: Ankush Sabharwal, Amish
Sovereign AI is needed; India’s massive data generation can fuel independent models, reducing reliance on foreign AI (Ankush Sabharwal)
Emphasis on avoiding dependence on other countries, echoing Modi’s call for sovereign AI (Amish)
Unexpected Consensus
Both speakers view AI as a catalyst for creating a new Indian AI hub that will serve global markets
Speakers: Ankush Sabharwal, Amish
India will be considered a hub for AI solutions (Ankush Sabharwal)
Question about the direction of India’s AI story and its future standing (Amish)
While Amish only asked about India’s AI trajectory, his inquiry aligns with Ankush’s confident claim that India will become the preferred global AI solutions hub, revealing an unanticipated shared optimism about India’s international AI leadership [106-108][118-119].
POLICY CONTEXT (KNOWLEDGE BASE)
High-level statements at the India AI Impact Summit and analyses of India’s emerging AI ecosystem position the country as a future global AI hub for both domestic and export markets [S34][S49][S50].
Overall Assessment

The discussion shows strong convergence between Ankush and Amish on several core themes: human‑centric and ethical AI, multilingual support for India’s diverse population, the pursuit of sovereign AI powered by domestic data, the need for dedicated AI governance, and the expectation that AI will become a ubiquitous daily tool within five to six years. They also concur that AI will reshape jobs and business models, though they differ on the tone of impact. This high level of consensus signals a unified vision for India’s AI future, reinforcing policy priorities around inclusivity, self‑reliance, and ecosystem support.

High consensus across ethical, linguistic, strategic, and societal dimensions, suggesting that stakeholders are aligned on the direction of India’s AI development and the policy measures required to achieve it.

Differences
Different Viewpoints
Impact of AI on employment and corporate structure
Speakers: Ankush Sabharwal, Amish
AI may displace jobs, redefining opportunities and causing corporate shifts (Amish)
Automation will reshape business models, moving from hourly rates to value‑based pricing (Ankush Sabharwal)
Amish worries that AI will cause jobs to disappear and trigger a mass exodus in corporates, framing it as a redefinition of opportunity [26-28]. Ankush counters that AI automation will simply change how work is delivered, enabling faster problem solving and prompting a shift to value-based pricing rather than eliminating jobs [29-35][38-48].
POLICY CONTEXT (KNOWLEDGE BASE)
International labour forums and research highlight AI’s dual potential to augment workforces while reshaping corporate hierarchies, prompting policy attention to workforce transitions [S41][S42][S44][S60].
Amount of investment needed for sovereign AI and government support
Speakers: Ankush Sabharwal, Amish
Emphasis on avoiding dependence on other countries, echoing Modi’s call for sovereign AI (Amish)
Indian government is proactive: providing free GPUs, funding, and staying ahead of other nations in AI support (Ankush Sabharwal)
Amish repeatedly stresses the need for sovereign AI so India is not dependent on foreign providers, implying substantial national investment and policy focus [81-83]. Ankush, while praising the government’s provision of free GPUs, later says “we would be better without investing that much money” [84-85], suggesting a more modest investment stance that conflicts with the emphasis on building a fully sovereign AI ecosystem.
Critical assessment of Indian AI policy
Speakers: Ankush Sabharwal, Amish
A critical view suggests the need for an AI ministry and more targeted policy advice (Amish)
Indian government is proactive: providing free GPUs, funding, and staying ahead of other nations in AI support (Ankush Sabharwal)
Amish asks for a critical appraisal of the government’s AI policy, seeking specific advice and pointing out possible gaps [158-166]. Ankush responds by stating that the Indian government is already ahead, giving free GPUs and launching policies before they are requested, offering no substantive criticism [167-176]. This reflects a divergence between a demand for critique and a wholly positive appraisal.
POLICY CONTEXT (KNOWLEDGE BASE)
Commentators have warned that India’s AI policy risks lagging behind fast-moving global standards and that a more evidence-based, critical review is needed to avoid outdated governance models [S36][S47][S54].
Unexpected Differences
Contradictory stance on the scale of investment for sovereign AI
Speakers: Ankush Sabharwal, Amish
Emphasis on avoiding dependence on other countries, echoing Modi’s call for sovereign AI (Amish)
Indian government is proactive: providing free GPUs, funding, and staying ahead of other nations in AI support (Ankush Sabharwal)
While Amish repeatedly stresses the need for a strong sovereign AI push, implying substantial national investment, Ankush unexpectedly says “we would be better without investing that much money” [84-85], creating a tension between the perceived need for heavy investment and a minimalist spending approach.
Dismissal of policy criticism versus request for critical advice
Speakers: Ankush Sabharwal, Amish
A critical view suggests the need for an AI ministry and more targeted policy advice (Amish)
Indian government is proactive: providing free GPUs, funding, and staying ahead of other nations in AI support (Ankush Sabharwal)
Amish explicitly asks for a critical assessment of the government’s AI policy [158-166]. Ankush responds by stating the government is already ahead and offers no critique, even saying “we don’t have time to think about them” regarding critics [250-254], which is unexpected given the request for constructive criticism.
POLICY CONTEXT (KNOWLEDGE BASE)
Observations on the silencing of dissenting voices in policy debates illustrate the tension between open critique and political correctness in AI governance discussions [S55].
Overall Assessment

The conversation shows broad alignment on the strategic importance of multilingual, sovereign AI and its future as a daily convenience. However, clear disagreements emerge around the impact of AI on jobs, the required level of national investment for sovereign AI, and the need for a critical policy review versus a wholly positive appraisal of government actions.

Moderate – while the participants share common goals (e.g., AI for humans, multilingual access, sovereign capability), they diverge on how to manage workforce transitions, the scale of public investment, and the openness to policy critique. These divergences could affect policy formulation, funding allocations, and workforce reskilling strategies in India.

Partial Agreements
Both agree that supporting Hindi and regional languages is crucial for reaching the majority of Indians and for the growth of India’s AI narrative. Ankush cites the low English proficiency and the multilingual interaction with the Prime Minister [58-60][78], while Amish highlights the many Indian languages and the importance of non‑English models [49-57].
Speakers: Ankush Sabharwal, Amish
Multilingual AI is essential; Bharat GPT enables multilingual access for the majority non‑English population (Ankush Sabharwal)
Multilingual AI is essential for India’s linguistic diversity and growth of the AI story (Amish)
Both envision AI becoming a ubiquitous, everyday convenience similar to smartphones within the next five to six years and call for developers to create daily‑use applications. Ankush predicts this shift and urges immediate app development [80-82], while Amish draws the analogy with mobile phones becoming a convenience platform [79].
Speakers: Ankush Sabharwal, Amish
AI will become a daily convenience within 5‑6 years; focus should shift to building everyday‑use apps (Ankush Sabharwal)
AI’s trajectory will mirror the mobile phone’s evolution, adding convenience to daily life (Amish)
Both stress the strategic importance of sovereign AI for national self‑reliance. Ankush points to India’s huge data generation as the raw material for independent models [84-99], while Amish repeats the sovereign‑AI mantra to avoid foreign dependence [81-83].
Speakers: Ankush Sabharwal, Amish
Sovereign AI is needed; India’s massive data generation can fuel independent models (Ankush Sabharwal)
Emphasis on avoiding dependence on other countries, echoing Modi’s call for sovereign AI (Amish)
Both acknowledge the role of government in shaping the AI ecosystem. Ankush praises the provision of free GPUs and proactive policies [167-176]; Amish calls for a dedicated AI ministry and more focused policy guidance [134][158-166].
Speakers: Ankush Sabharwal, Amish
Indian government is proactive: providing free GPUs, funding, and staying ahead of other nations in AI support (Ankush Sabharwal)
A critical view suggests the need for an AI ministry and more targeted policy advice (Amish)
Takeaways
Key takeaways
AI should serve humanity, be invisible, moral, accountable, national, accessible and valid. Bharat GPT is positioned as an evolutionary launch that leverages existing conversational AI experience and massive user base. Automation will reshape business models, prompting a shift from hourly/mandate pricing to value‑based pricing for IT services. Multilingual AI (support for Hindi and regional languages) is essential for reaching the majority of India’s population. Sovereign AI is critical for reducing dependence on foreign models; India’s large data generation can fuel independent AI development. The Indian government is proactive—providing free GPUs, funding, and staying ahead of other nations—but an explicit AI ministry is still desired. AI is expected to become a daily convenience within 5‑6 years, similar to the evolution of mobile phones. Kanha AI, built on Bharat GPT, is introduced as a safe, screen‑free, privacy‑focused companion for children.
Resolutions and action items
Adopt value‑based pricing models for AI‑enabled IT services rather than per‑hour rates. Advocate for the creation of a dedicated AI ministry or an expanded role for existing ministers. Accelerate development of everyday‑use AI applications and platforms for Indian users. Leverage India’s massive data generation to build sovereign AI models and reduce reliance on foreign AI. Continue promotion and rollout of Bharat GPT and Kanha AI as flagship products.
Unresolved issues
The extent and timeline of job displacement due to AI automation remain unclear. Specific policy recommendations or regulatory frameworks needed to ensure AI safety, inclusivity, and accountability were not detailed. How to operationalize and fund a dedicated AI ministry has not been resolved. Metrics for measuring the success of sovereign AI initiatives and their global competitiveness are not defined. Long‑term strategies for managing energy and compute infrastructure requirements were discussed but not concretized.
Suggested compromises
Transition from hourly/mandate pricing to value‑based pricing to balance client cost expectations with AI‑driven efficiency gains. Emphasize AI safety and inclusivity while still promoting rapid adoption, acknowledging both concerns and opportunities.
Thought Provoking Comments
If we do the same work that used to need 100 people, now maybe 10 or 2 people will do it, so the traditional per‑hour pricing model will break. We need to move to value‑based pricing where we charge for the outcome, not the time spent.
This reframes the economic impact of AI from a threat of job loss to an opportunity to redesign service contracts, highlighting a concrete strategic shift for Indian IT firms.
It shifted the conversation from abstract worries about automation to a practical business‑model discussion. Amish followed up asking for clarification, prompting Ankush to elaborate, which opened a new sub‑topic about how Indian IT services must evolve to stay competitive.
Speaker: Ankush Sabharwal
AI models worldwide are mostly English‑centric, but Bharat GPT works in Indian languages. That’s crucial because only about 10% of Indians speak English; the rest need AI in Hindi, Tamil, Bengali, etc.
Raises the unique linguistic challenge of India and positions multilingual AI as a strategic differentiator, moving the focus from technology to societal inclusion.
Prompted Ankush to stress the national ownership of Bharat GPT and the importance of language diversity, deepening the discussion about AI’s role in bridging regional gaps and reinforcing the narrative of a ‘collective product’ for India.
Speaker: Amish
Data is the raw material for AI. With a population of 1.4‑1.5 billion constantly generating content, India already has the scale of data needed to build sovereign AI models without relying on foreign providers.
Links India’s demographic advantage directly to strategic AI independence, providing a clear argument for why sovereign AI is feasible and essential.
This comment pivoted the dialogue toward the concept of ‘sovereign AI’, leading Amish to press for a critical view of government policy and setting up a later exchange about free GPU allocations and policy leadership.
Speaker: Ankush Sabharwal
No other country gives its innovators free GPUs and funding to build models; India is already ahead of the curve in policy support for AI development.
Offers a bold claim about governmental support that challenges typical narratives of bureaucratic lag, positioning India as a proactive AI hub.
Triggered a brief tension where Amish demanded a more critical assessment, but Ankush’s insistence reinforced the positive framing of Indian policy, influencing the tone to remain optimistic about governmental role.
Speaker: Ankush Sabharwal
Founders will be the jobs that never disappear; they will continue to create and steer AI development.
Identifies entrepreneurship as the enduring human role amidst automation, providing a hopeful counterpoint to fears of mass unemployment.
Served as a concluding rallying point in the rapid‑fire segment, shifting the mood to empowerment and reinforcing the earlier theme that AI should augment, not replace, human initiative.
Speaker: Ankush Sabharwal
Bangalore will become the AI capital of India, building on its status as the IT capital.
Specifies a geographic focal point for AI growth, grounding the abstract national vision in a concrete location that stakeholders can rally around.
Anchored the broader discussion about India’s AI future to a tangible hub, prompting agreement from Amish and reinforcing the narrative of a centralized ecosystem.
Speaker: Ankush Sabharwal
AI will be a convenience in five to six years, just like mobile phones evolved from communication tools to all‑purpose platforms.
Draws a relatable analogy that frames AI adoption as an inevitable, user‑centric evolution, making the technology’s impact more accessible to a non‑technical audience.
Helped transition the conversation from policy and business models to everyday user experience, setting the stage for the final rapid‑fire questions about daily life integration.
Speaker: Ankush Sabharwal
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that repeatedly shifted the focus from high‑level optimism to concrete challenges and back again. Ankush’s articulation of a value‑based pricing model reframed AI’s economic threat into a strategic opportunity, while the emphasis on multilingual capability highlighted India’s unique advantage. The data‑centric argument for sovereign AI and the claim of unprecedented government support introduced a nationalistic, self‑reliant narrative that steered the dialogue toward policy and infrastructure. References to Bangalore as the AI capital and the founder’s role as the enduring job provided concrete anchors for the abstract vision. Together, these comments created a dynamic flow: moving from problem definition, to economic implications, to cultural relevance, to policy critique, and finally to a hopeful, actionable outlook for India’s AI future.

Follow-up Questions
How will the shift from hourly/mandate pricing to value‑based pricing affect Indian IT service firms and their clients?
Understanding this transition is crucial to gauge the economic impact of AI automation on the traditional IT services business model in India.
Speaker: Amish
What are the measurable impacts of AI‑driven automation on employment in India’s IT sector, especially regarding job displacement versus new opportunities?
Assessing workforce implications will help policymakers and companies plan for reskilling and job creation.
Speaker: Amish
How effective is Bharat GPT across the diverse Indian languages, and what is the adoption rate among non‑English speaking users?
Evaluating language coverage and user uptake is essential to determine the inclusivity and reach of the platform.
Speaker: Amish
What specific steps are required for India to achieve sovereign AI, including infrastructure, GPU access, and policy measures?
Identifying concrete actions will guide strategic independence from foreign AI ecosystems.
Speaker: Amish
What concrete recommendations can be made to improve the Indian government’s AI policy beyond the current measures?
A critical review can provide actionable advice to strengthen policy effectiveness and support innovation.
Speaker: Amish
How can India address the high energy consumption of AI models compared to the human brain’s efficiency?
Energy sustainability is a major concern for scaling AI workloads nationally.
Speaker: Amish
What safety, privacy, and responsible‑AI considerations are needed for child‑focused AI like Kanha, and how should they be regulated?
Ensuring protection of children is vital as AI companions become more prevalent.
Speaker: Speaker 3, Kanha AI Voiceover
What metrics should be used to evaluate the success and impact of AI summits such as the one held in Delhi?
Clear evaluation criteria will help determine the effectiveness of such events in advancing India’s AI agenda.
Speaker: Amish
Which Indian city has the potential to become the AI capital, and what criteria should be used to determine this?
Identifying the future AI hub can focus investment, talent, and infrastructure development.
Speaker: Amish
Which country actually has the most AI users, and how is this measured?
Accurate data on AI user distribution is needed for global benchmarking.
Speaker: Amish
Which jobs are least likely to be automated by AI, and why?
Understanding resilient occupations helps shape education and career guidance.
Speaker: Amish
Whose job (doctor’s or engineer’s) will become easier with AI, and what are the implications for each profession?
Analyzing sector‑specific benefits informs targeted AI integration strategies.
Speaker: Amish
How can Bharat GPT be made freely available while ensuring data privacy and sustainability?
Balancing openness with responsible data handling is key for widespread adoption.
Speaker: Amish
What are the best practices for creating daily‑use AI applications for the Indian population?
Guidelines will accelerate the development of consumer‑focused AI solutions.
Speaker: Amish
How can India become a global hub for AI solutions within the next few years?
Strategic insights are needed to position India as a preferred destination for AI services.
Speaker: Amish
What are the challenges and opportunities in scaling AI talent‑skilling programs across India?
Effective skilling is essential to meet the growing demand for AI expertise.
Speaker: Amish

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote Addresses at India AI Impact Summit 2026

Keynote Addresses at India AI Impact Summit 2026

Session at a glanceSummary, keypoints, and speakers overview

Summary

The AI Impact Forum in Delhi marked a historic milestone as the United States and India signed the Pax Silica Declaration, a partnership aimed at securing technology supply chains and advancing responsible AI [1-2][81-82]. The ceremony was framed as a roadmap for a shared future, emphasizing that the agreement goes beyond a paper document to shape economic and security cooperation [82][86-88].


Google CEO Sundar Pichai highlighted that the U.S.-India partnership is entering an era of “hyper-progress” in AI, but warned that benefits are not automatic without joint effort [4-5][6-7]. He detailed a full-stack commitment that includes launching AI-enhanced products such as Google Pay, AI-driven search features, and the Gemini app in multiple Indian languages [10-13][15-19]. Pichai also announced a $15 billion investment in an AI hub in Vizag that will host gigawatt-scale computing and create jobs across the country [22-24]. Complementing the hub, Google unveiled the India-America Connect Initiative, which will lay new subsea cable routes linking the U.S., India and the Southern Hemisphere, thereby expanding digital trade routes [25-27][28-30].


Micron CEO Sanjay Mehrotra stressed that memory and storage are critical for AI workloads and showcased Micron’s extensive R&D presence in Bangalore and Hyderabad, employing about 4,000 staff [58-60][62-63]. He announced a $2.75 billion investment in advanced packaging, assembly and test facilities in Sanand, Gujarat, describing the 500,000-square-foot plant as the size of ten cricket fields and built with steel equivalent to three and a half Eiffel Towers [64-66][70-74]. U.S. Undersecretary Jacob Helberg framed the Pax Silica Declaration as a response to over-concentrated supply chains, asserting that economic security is national security and that the pact will build a new AI-enabled architecture from minerals to silicon wafers [92-99][106-108]. He called for a “pro-innovation” stance that rejects weaponised dependency and positions the United States and India as builders of a resilient, future-proof technology stack [104-108][110-112].


Ambassador Sergio Gore linked the signing to the recently concluded interim trade agreement and described the U.S.-India coalition as a strategic “silicon stack” that secures critical minerals, chip fabs and AI data centers [133-138][141-148]. India’s Minister Ashwini Vaishnav highlighted the country’s trusted status, deep engineering talent and emerging mineral processing capacity, which he said reinforce India’s strategic contribution to Pax Silica [178-184][186-188]. The event concluded with the formal signing by Jacob Helberg, Ambassador Gore and Secretary S. Krishnan, followed by a photo-op that symbolized the new era of U.S.-India technological cooperation [190-198][202-204].


Keypoints


Major discussion points


Deepening U.S.-India AI collaboration and infrastructure – Sundar Pichai highlighted Google’s AI product roll-outs for Indian consumers, contributions of Gemini models, AI-driven services such as voice/visual search and multilingual support, and a $15 billion AI-hub investment in Vizag together with new subsea cable routes that will physically link the two economies. [4-7][12-19][22-27]


Signing of the Pax Silica Declaration to secure the full silicon stack – The ceremony marked the formal launch of a coalition aimed at building resilient, trusted supply chains for semiconductors, critical minerals and AI, rejecting “weaponised dependency” and positioning the partnership as a strategic bulwark against coercive economic practices. [32-36][81-110][116-124][141-148][155-168]


Micron’s concrete investment in Indian semiconductor manufacturing – Micron underscored its role as a memory-and-storage leader, noting over 4,000 R&D staff in India, a portfolio of ~2,000 patents from Indian inventors, and a $2.75 billion, 500,000-sq-ft advanced packaging and test facility in Gujarat that will expand domestic chip production. [56-66][70-75]


Focus on talent development and skilling for AI leadership – Google announced the AI Skill House targeting 10 million future Indian AI leaders and a partnership with Badwani AI for a Google AI certificate, while Indian officials emphasized the country’s large pool of engineers and trusted status in the global tech ecosystem. [20-22][174-180]


Shared democratic values and strategic autonomy – Both U.S. and Indian speakers invoked the historic spirit of self-determination, framed economic security as national security, and stressed that free societies must control the “commanding heights” of technology rather than be subject to authoritarian coercion. [84-92][95-100][152-160]


Overall purpose / goal


The gathering served to cement a next-generation U.S.-India partnership that couples AI innovation, massive infrastructure investment, and a secure semiconductor supply chain (Pax Silica) with a shared commitment to democratic values and economic sovereignty. By signing the declaration, announcing sizable private-sector investments, and highlighting talent-building programs, the participants aimed to create a resilient, mutually beneficial technology ecosystem that can compete with and counteract coercive external pressures.


Overall tone


The discussion began with an upbeat, celebratory tone emphasizing opportunity and collaboration (e.g., “hyper-progress,” “extraordinary trajectory”) [4-7]. As the agenda shifted to the Pax Silica signing, the tone grew more solemn and strategic, focusing on security, resilience, and the need to “say no to weaponised dependency” [81-110][155-168]. Throughout, the speakers maintained a respectful and optimistic demeanor, but the narrative moved from optimism about AI benefits to a resolute, security-focused stance underscoring shared democratic principles.


Speakers

Sundar Pichai – Role/Title: CEO of Google (Alphabet); Area of Expertise: Technology leadership, AI, cloud services, digital products.


Participant – Role/Title: Moderator/Host (specific title not specified); Area of Expertise: Event facilitation.


Jacob Helberg – Role/Title: Undersecretary of State for Economic Affairs, United States; Area of Expertise: Economic policy, international trade, technology cooperation [S10][S11].


Sanjay Mehrotra – Role/Title: CEO of Micron Technology; Area of Expertise: Semiconductor memory and storage, AI infrastructure [S12].


Sergio Gore – Role/Title: U.S. Ambassador to India; Area of Expertise: Diplomatic relations, technology collaboration [S4][S5].


Ashwini Vaishnav – Role/Title: Honorable Minister for Electronics and Information Technology, Government of India; Area of Expertise: Electronics, semiconductor policy, technology development [S7][S8][S9].


Additional speakers:


Michael Kratios – Role/Title: Director, Office of Science and Technology Policy (OSTP), United States; Area of Expertise: Science policy, AI strategy.


Randhir Thakur – Role/Title: CEO of Tata Electronics; Area of Expertise: Electronics manufacturing, technology solutions.


Mike Krohn – Role/Title: CEO of General Catalyst; Area of Expertise: Venture capital, technology investment.


Sajiv Garb – Role/Title: (Title not specified); Area of Expertise: (Not specified).


S. Krishnan – Role/Title: Secretary (government department, not specified); Area of Expertise: (Not specified).


Full session reportComprehensive analysis and detailed insights

Opening remarks – Sundar Pichai


Pichai thanked the host and described the summit as occurring at a “profound moment with AI” [3]. He warned that the world stands on the “cusp of an era of hyper-progress and new discoveries, but the best outcomes are not guaranteed” [4-5] and stressed that realising AI’s benefits will require joint effort [4-5]. Positioning the U.S.-India partnership as a critical driver, he said Google is a “connection point between them, both figuratively and literally” [6-7]. He highlighted concrete product initiatives: Google Pay, which originated in India and now serves users worldwide [10]; AI-enhanced search, voice and visual tools such as Circle-to-Search and Lens that are heavily used by Indian users [15-18]; the Gemini app expanding into ten Indian languages [19]; and YouTube’s vibrant Indian creator ecosystem [20]. To empower developers, Google contributed 22 Gemma models to AI Coach and is collaborating with the Indian government on applications ranging from monsoon forecasting for farmers to diabetic-retinopathy screening and multilingual information services [13-14].


Full-stack commitment


Google announced a $15 billion investment in an AI Hub at Vizag that will house gigawatt-scale computing and generate jobs across the country [22-24]; the India-America Connect Initiative will lay new subsea-cable routes linking the United States, India and the Southern Hemisphere, expanding digital trade routes [25-27]. He underscored the need for stable, trusted supply chains, citing Axilica’s role in securing cross-border component flows [28-30]. The programme also includes talent development: the AI Skill House aims to equip ten million future Indian AI leaders, and a partnership with Badwani AI will deliver a Google AI certificate to students and early-career professionals [20-22]. The recent Interim Trade Agreement was referenced as a foundation for deeper cooperation [133-138].


Moderator introduces Pax Silica


The moderator explained that the Pax Silica Declaration is intended as a “roadmap for a shared future” and invited Jacob Helberg to outline its significance [81-88].


Jacob Helberg’s speech


Helberg framed the declaration as a decisive response to the over-concentration of global supply chains. He invoked an Alexander-the-Great analogy, noting that both nations were forged by the word “no” and must now say “no to weaponised dependency” and “no to blackmail” [84-90][96-97]. He warned that “global governance and sovereignty are being misused in an Orwell-like manner” and reiterated that “economic security is national security” [92-99][104-108]. He called for a precise, pro-innovation approach that builds a new AI-enabled architecture from minerals to silicon wafers [92-99][104-108] and concluded by thanking Ambassador Sergio Gore for acting as the diplomatic bridge that made the agreement possible [112-115].


Ambassador Sergio Gore’s remarks


Gore linked the signing to the Interim Trade Agreement and described the U.S.-India coalition as a strategic “silicon stack” that secures critical minerals, chip fabs and AI data centres [141-148]. He portrayed Pax Silica as a positive-sum alliance that replaces coercive dependencies, stating that “peace comes through strength” [152-160] and that the coalition will define the 21st-century economic and technological order [161-168]. He completed the previously unfinished thought: “…adversaries will use technology to monitor and control their populations, so we must build resilient, trusted industrial bases.” [161-168]


Ashwini Vaishnav’s address


Vaishnav highlighted India’s reputation as a trusted partner, rooted in its 5,000-year civilisation [178-183]. He presented a concrete statistic: India has 315 design/EDA tools, compared with fewer than 20 globally, underscoring the nation’s capability to contribute to Pax Silica [174-177][180-184]. He expressed gratitude to the U.S. dignitaries and formally invited the signing ceremony to proceed [186-188].


Signing ceremony


The declaration was signed by Jacob Helberg, Ambassador Sergio Gore and Secretary S. Krishnan [190-198].


Photo-op


Following the signing, a photo-op featured the CEOs of Micron, Tata Electronics (Dr. Randhir Thakur), General Catalyst (Mike Krohn), and other dignitaries [190-198].


Fireside conversation – moderated by Jacob Helberg, with participants Secretary S. Krishnan, Sanjay Mehrotra, Mike Krohn and Randhir Thakur [202-207].


Key additional points


Sanjay Mehrotra emphasized that Micron is the only company in the Western Hemisphere that designs and manufactures memory and storage, and noted Micron’s 60,000 patents worldwide [101-103]. He also highlighted Micron’s $2.75 billion investment in an advanced-packaging, assembly and test facility in Sanand, Gujarat – a 500,000-square-foot plant described as “the size of ten cricket fields… steel three-and-a-half times the Eiffel Tower” [70-74][75-76].


All speakers agreed on the necessity of a robust, diversified AI and semiconductor infrastructure, from the Vizag AI Hub and subsea cables [22-27] to full-stack supply-chain security [92-99], and on India’s talent pool and trusted status as essential to the coalition [174-180][146-148].


Conclusion


The AI Impact Forum demonstrated a multi-layered consensus that the U.S.-India partnership must rest on three pillars: product innovation, talent development, and infrastructure investment, while safeguarding AI from coercive dependencies. The summit produced concrete actions: the signing of the Pax Silica Declaration, Google’s commitments to the Vizag AI Hub, subsea cables, and AI Skill House, and Micron’s $2.75 billion Gujarat facility. The next step is the fireside conversation, which will translate these commitments into detailed implementation roadmaps.


Session transcriptComplete transcript of the session
Sundar Pichai

Thank you, Director Kratzios. Thank you for the opportunity to return to this stage and to mark this important occasion in U.S.-India relations. Yesterday, at the opening session, I shared some thoughts on this profound moment with AI. I said we are on the cusp of an era of hyper-progress and new discoveries, but the best outcomes are not guaranteed. We must work together to ensure the benefits of AI are available to everyone and everywhere. The U.S.-India partnership has a critical role to play. Google is proud to serve as a connection point between them, both figuratively and literally. More on this later. We have teams across both countries working seamlessly together on some of our most important initiatives.

Thank you. Innovations that start in India, like Google Pay, are making products better for people all over the world. I believe India is going to have an extraordinary trajectory with AI, and we are supporting with a full-stack commitment, including products, scaling, and infrastructure. First, products. We are working on building AI products and solutions for Indian consumers and businesses. To empower India’s incredible developer community, we have already contributed 22 Gemma models to AI Coach, and we are working closely with the government to bring AI applications with real-world impact, be it through delivering timely monsoon forecasts to farmers, helping healthcare workers screen for diseases like diabetic retinopathy, or making information and services accessible in more languages.

Our commitment extends to reimagining the products people use every day. As one example, AI is changing the way people use search. Indian users are amongst the highest adopters of voice and visual search globally. Our scan detection features with circle to search and lens are used in India more than anywhere else. The Gemini app is growing rapidly across the world, and it’s available in 10 languages spoken in India. And YouTube supports a vibrant ecosystem of Indian content creators sharing music, arts, and culture with the world. Second, skilling. Through the AI skill house, we are working to equip 10 million future Indian leaders with the tools to drive global progress. We are also partnering with Badwani AI to reach students and early career professionals with a Google AI certificate, which we announced earlier this week.

Third, infrastructure. Last year, we announced a $15 billion investment in Indian infrastructure with the AI Hub in Vizag at the center. This hub will house gigawatt-scale computing. When finished, it will bring jobs and the benefits of cutting-edge AI to people and businesses across India. Building on this, we recently announced the India-America Connect Initiative, which will deliver new subsea cable routes to connect the U.S., India, and multiple locations across the Southern Hemisphere. Combined with our existing cable systems, this initiative will significantly expand the digital trade routes and serve as a literal bridge between our two countries. Of course, none of this would be possible without stable supply chains built on a foundation of shared trust.

Products, subsea cables, AI hubs are all dependent on a complex flow of goods and components across borders. Axilica focuses on making sure that the supply chains are safe and secure and encourages greater commercial partnerships across key technologies. So let me congratulate the U.S. and India on this historic moment. Alongside the recent trade agreement, this will lay a strong foundation for a robust U.S.-India tech

Participant

Thank you so much, Mr. Sundar Pichai, for all those motivating and inspiring words. And ladies and gentlemen, today marks an important milestone as India formally joins Pax Silica, a forward-looking partnership aimed at strengthening secure and resilient technology ecosystems at a time when emerging technologies are reshaping global competitiveness and economic security. Trusted partnerships are essential. This declaration reflects a shared commitment by India and the United States to advance responsible innovation and resilient infrastructure. We are honored to have with us senior leadership from both the governments, alongside distinguished representatives from industry and also the diplomatic community. Without any further ado, may I now respectfully invite our distinguished dignitaries to please join us on stage. Ladies and gentlemen, please join me in extending a warm welcome as they make their way to the stage.

It’s an honor to have such distinguished leadership this morning, Excellencies. Thank you so much for joining us. We’ll proceed with brief remarks ahead of the signing ceremony. May I please invite Honorable Jacob Helberg, Undersecretary of State for Economic Affairs, the United States, to deliver his remarks. Thank you. I request Honourable Jacob Helberg, Under-Secretary of State for Economic Affairs, to please present his address. Ladies and gentlemen, please welcome. Ladies and gentlemen, we would like to wait for a couple of minutes for Under-Secretary Mr. Jacob Helberg. He is on his way and he would be here with us very soon. It’s an important occasion, especially when we talk of Pax Silica. It’s a historic agreement between the two governments, between the two biggest and the oldest democracies of the world.

And so we are here to listen to our distinguished guests as they present their views, their remarks on Pax Silica. This is one agreement which would change the way both the countries work in this particular domain. Ladies and gentlemen, we have distinguished speakers who are going to join us. And then a very, very important signing agreement procedure, the protocols that need to be followed. We are also going to have a photo op session after this. Ladies and gentlemen, in the meantime, may I please request Mr. Sanjay Mehrotra, the CEO of Micron, to kindly come on the stage and present his keynote address. Mr. Sanjay Mehrotra.

Sanjay Mehrotra

Good morning. On behalf of Micron Technology, I want to say we are super excited to be here participating in this phenomenal AI Summit. Micron is a semiconductor technology leader, leader in memory and storage. Memory and storage are critical to driving AI. As contextual processing becomes larger and as real-time demands on performance are placed on AI systems, they need more and more memory. I’m very proud to say that Micron is the only company in the Western Hemisphere that develops and manufactures memory and storage, and we have had successive generations of leadership in DRAM technology as well as NAND technology. But I’m also very proud today, later, with this Pax Silica initiative that will be signed here, bringing the technology collaboration closer between U.S.

and India. Micron, since 2019, has had large presence here in India with R&D centers in Bangalore, in Hyderabad, employing nearly 4,000 employees today. What I’m proud of is that Micron has 60,000 patents worldwide, one of the most innovative companies, but also a manufacturing powerhouse. Some of our most advanced DRAM products are being designed right here in India in collaboration with our teams in the U.S. In fact, we have now, in this short period since 2019, we now have 300 inventors with number of patents approaching nearly 2,000 that have been contributed by the innovative, phenomenal team here in India. Very proud. We are proud also of Micron’s investment in bringing advanced packaging, assembly, and test technologies here to Sanand, Gujarat.

In fact, Micron is making an investment of $2.75 billion here in Gujarat. We’ll talk more about it in the fireside chat a little bit later. And those investments now are going to be bringing a grand opening coming up soon where packaging and assembly will be done of advanced memory wafers produced worldwide. So this is a pioneering project here in India. The size of this facility that has been built is 500,000 square feet. So imagine that clean room is the size of 10 cricket fields. The amount of steel that has been used in that is about three and a half times of Eiffel Tower. The amount of concrete that is used in that is size of 100 Olympic-sized swimming pools.

This is the pioneering project of semiconductor manufacturing here in India, and Micron is proud to have partnered with the central government as well as the government in Gujarat bringing this project to Sanand. Modi Ji’s government has provided tremendous support and really policy that encourages investment here in India. So without further ado, having shared some of the importance of memory and storage in terms of driving AI infrastructure worldwide and importance of Micron here in India in R&D as well as in manufacturing, I would now like to pass it back to our host. Our host here in continuing with the regularly scheduled program. Thank you very much.

Participant

Thank you so much, Mr. Mehrotra. Ladies and gentlemen, please join me now in inviting Honorable Mr. Jacob Helberg, Undersecretary of State for Economic Affairs, to deliver his remarks.

Jacob Helberg

Good morning. It's a profound honor to be here in Delhi at the India AI Impact Summit to mark a historic milestone in the partnership between the United States and India. Today, we sign the Pax Silica Declaration, a document that's not merely an agreement on paper, but a roadmap for a shared future. There's a line from antiquity attributed to Alexander the Great, who famously said that the peoples of Asia were slaves because they had not yet learned to pronounce the word no. Alexander viewed himself as a conqueror speaking to a world of subjects, and after traveling 11,000 miles over eight years, it was in India that Alexander finally met his match and turned around.

He did not know India, and India said no. The truth is, both of our nations were forged by that very word. Both of our nations claimed their freedom by learning to say no. We are the people who looked at a king oceans away and refused to quietly acquiesce. We rejected the counsel of polite society and broke centuries of colonial rule to take our destiny into our own hands. That spirit of defiance, that insistence on self-determination, is the fire that burns at the heart of both of our democracies, and today we are called upon to summon that spirit once again. For too long, we have allowed the foundations of our democracy and the foundations of our economic security to erode.

We find ourselves grappling with a global supply chain that is massively over-concentrated. We watch as our friends and allies face daily threats of economic coercion and blackmail, forced to choose between their sovereignty and their prosperity. We have seen the lights of a great Indian city extinguished by a keystroke from across the border, and we've seen our friends denied essential minerals simply because a leader dared to speak her mind. So today, as we sign the Pax Silica Declaration, we say no to weaponized dependency, and we say no to blackmail, and together we say that economic security is national security. But we must be precise about what that word means. There are some who use words like global governance and sovereignty in the same breath, just as Orwell warned.

There are some who use freedom and slavery interchangeably. America and India are not deceived. Sovereignty does not come from a global bureaucracy. It comes from builders, from the very builders present in this room today. It comes from the builders of smelters and oil wells, airplanes and expressways. And it comes from the hardworking people who physically build the rails of the future. And through the joint statement that we're signing today, the United States and India are affirming our embrace of a pro-innovation approach to AI against those who would constrain us to set us back. But our fundamental mission is not resistance, it's renewal. We are forging a supply chain that is the foundation for prosperity.

We are building a new architecture that diffuses intelligence, placing the awesome power of AI into the palm of our people's hands and unleashing a wave of unprecedented possibility. From the mines to the models, we are securing the foundation, the full stack of the future: the minerals deep in the earth, the silicon wafers in our labs and fabs, and the intelligence that will unleash human potential. Pax Silica is our declaration that the future belongs to those who build. And when free people join forces, we do not wait for the future to be given to us. We build it ourselves. I want to end by thanking my good friend and colleague, Ambassador Sergio Gore. Sergio's leadership has been the bridge for this very moment.

His work to bring our nations closer together is a testament to the vital importance that the United States places on this friendship. Sergio, thank you for your service and your energy. Will you please all join me in giving Ambassador Gore a very warm welcome?

Sergio Gore

Thank you. Good morning. Namaste. It is great to be here with you all. Thank you, Jacob. I want to just say a quick word about Jacob. Jacob's an incredible friend, but Jacob also cares deeply about this relationship. This initiative, Pax Silica, would not be happening if it were not for Jacob Helberg. So a round of applause to him. What an honor to stand before all of you today here in New Delhi at this historic moment as we welcome India into Pax Silica. Just over a month ago, I arrived in this extraordinary nation as the U.S.

ambassador. In my first weeks, I've walked the halls of South Block, met with innovators in Bangalore, and broken bread with entrepreneurs who are building the future. What struck me most wasn't just India's scale, although that is breathtaking. It's India's resolve, the determination to chart your own course. I keep talking about the limitless potential between our two nations, and I truly mean it. From the trade deal, to Pax Silica, to defense cooperation, the potential for our two nations to work together is truly limitless. And I aim to fulfill that over the next three years that I'm here. Earlier this month, we concluded the Interim Trade Agreement, a deal that shapes the economic contours of the Indo-Pacific.

We overcame friction points that had held us back for far too long. That agreement wasn't just about trade flows or tariff schedules. It was about two great democracies saying we will build together, not just buy from one another. And now today, we take the next step. India joins Pax Silica, the coalition that will define the 21st century economic and technological order. I'm delighted to welcome Jacob here. I'm also delighted to welcome the OSTP Director, Michael Kratsios, who is the head of our delegation at this very important summit. The U.S. leads a strategic coalition designed to secure the entire silicon stack: from the mines where we extract critical minerals, to the fabs where we manufacture chips, to the data centers where we deploy frontier AI.

It's a coalition of capabilities that replaces coercive dependencies with a positive-sum alliance of trusted industrial bases. Pax Silica will be a group of nations that believe technology should empower free people and free markets. India's entry into Pax Silica isn't just symbolic; it's strategic, it's essential. India is a nation with deep talent, deep enough to rival challengers. India's engineering depth offers critical capabilities for this vital coalition. In addition to talent, India has made important strides towards critical mineral processing capacity, and that's something that we're fully engaged on also. Policies that reinforce U.S.-India tech cooperation will power AI innovation and adoption for years to come. We can share trusted AI technology with the world, and especially with partners like India.

And critically, India brings strength. Peace doesn't come from hoping adversaries will play fair. We all know they won't. Peace comes through strength. India understands this. India understands strong borders. India understands this part of the world. That strength, that sovereignty, is exactly what Pax Silica amplifies. Because here's the truth: strength multiplies when it's connected. When Minister Vaishnaw and Minister Jaishankar traveled to Washington in recent weeks, they came as partners, forging the future. Their discussions on critical minerals were about interdependency among strong actors, about building supply chains that will not be held hostage. America is building coalitions of the capable and the willing. We're ensuring that the technologies that will define the next century, AI, space, and advanced semiconductors, are developed, deployed, and controlled by free nations.

And we're doing it in partnership with the world's largest democracy, a nation of 1.4 billion people that shares our values and our vision. We welcome India joining to co-found the future. Pax Silica is about whether free societies will control the commanding heights of the global economy. It's about whether innovation happens in Bengaluru and Silicon Valley or in surveillance states that use technology to monitor and control their citizens.

Participant

Thank you, His Excellency Ambassador Sergio Gore, for reaffirming and highlighting the enduring ties between our two nations, and also for the shared vision that underpins today's milestone. May I now request Honourable Minister Shri Ashwini Vaishnaw to address the august gathering.

Ashwini Vaishnaw

all the design EDA tools, students have available. Counting, you would not be able to count more than 20 in the whole world. India has 315. This capability we have to develop. This scale we have to develop. And in the world, India today is seen as a trusted country. India is a trusted country. And that's because our Prime Minister Narendra Modi ji has conducted foreign policy in a way that has earned trust and respect, the respect of a 5,000-year-old civilization, the gravitas and the stability of India's civilization that the world believes in. And that's why India has trust. Because of that trust, India today is becoming part of Pax Silica. I welcome you all, and especially those who worked on the US side.

My biggest gratitude to all three honorable guests from the US for taking out time to be part of this Pax Silica signing. And I'll now request the Pax Silica signing ceremony to begin. Thank you, friends. Bharat Mata Ki Jai. Bharat Mata Ki Jai. Thank you. Thank you.

Participant

Ladies and gentlemen, the Pax Silica Declaration is now being signed between India and the United States of America. The Pax Silica Declaration is being signed by Honorable Undersecretary Jacob Helberg, His Excellency Ambassador Sergio Gore, and the Secretary, Mr. S. Krishnan. Once the declaration has been signed by the respected signatories, the declaration will be exchanged. I request the distinguished guests to kindly hold up the signed declaration for the official photograph. I request our distinguished guests to kindly proceed to the photo point on the right of the stage, in front of the flags, for the official photograph. Once again, we are going to have this photo. I would now also like to request the CEO of Micron, Mr.

Sanjay Mehrotra, and Mr. Randhir Thakur, CEO of Tata Electronics, to please join us for a photo op on the stage. I also invite the CEO of General Catalyst to come on the stage, please. I thank our distinguished guests for that photo op. It is a great moment: the Pax Silica Declaration has been signed between India and the United States of America, and the photo op commemorates this special moment. This is another historic milestone in the relationship between India and the United States of America. I thank all our distinguished guests for this photo op. I thank the Honorable Minister and Mr. Michael Kratsios for being with us on this wonderful and historic occasion. Ladies and gentlemen, we are waiting for the furniture to be rearranged, and very soon we will continue with the Fireside Conversation.

Ladies and gentlemen, we now proceed to the Fireside Conversation. I invite our distinguished guests to please join us for this conversation. Undersecretary Jacob Helberg is going to moderate this discussion, with His Excellency Ambassador Sergio Gore, Secretary Krishnan, Mr. Sanjay Mehrotra, CEO of Micron, and Dr. Randhir Thakur, CEO of Tata Electronics. I request our distinguished guests to please take your seats as we begin the Fireside Conversation. Please stand by for the Fireside Conversation.

Related Resources
Knowledge base sources related to the discussion topics (11)
Factual Notes
Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Sundar Pichai described the summit as occurring at a “profound moment with AI” and warned the world stands on the “cusp of an era of hyper‑progress and new discoveries””

The transcript excerpt shows Pichai saying he shared thoughts on “this profound moment with AI” and that “we are on the cusp of an era of hyper-progress,” confirming the report’s wording [S6].

Confirmed (high)

“$15 billion investment in an AI Hub at Vizag”

Both the Leaders’ Plenary notes and a separate summit briefing cite a $15 billion AI Hub project in Visakhapatnam (Vizag) announced by Sundar Pichai [S18] and [S67].

Confirmed (high)

“Google will bring a full‑stack commitment to India, from TPUs to infrastructure investments to research and models”

The Leaders’ Plenary transcript explicitly states “we will bring a full-stack commitment to India, all the way from TPUs to infrastructure investments to research and models” [S18].

Additional Context (medium)

“AI‑enhanced search, voice and visual tools such as Circle‑to‑Search and Lens are heavily used by Indian users”

The knowledge base records the launch of Google’s AI Mode Search in India, showing that AI-enhanced search tools are being introduced to Indian users, which provides context for the reported usage of Circle-to-Search and Lens [S63].

Additional Context (medium)

“Google is collaborating with the Indian government on applications ranging from monsoon‑forecasting for farmers to diabetic‑retinopathy screening and multilingual information services”

A briefing on Google’s AI push in India highlights work on language barriers and agricultural efficiency, giving background to monsoon-forecasting and farmer-focused AI applications mentioned in the report [S64].

Additional Context (medium)

“The AI Skill House aims to equip ten million future Indian AI leaders”

The knowledge base discusses large-scale AI talent development programmes in India, providing context for the AI Skill House initiative, though the exact ten-million target is not detailed [S71].

External Sources (72)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S2
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S3
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – **Participant**: Role/Title not specified, Area of expertise not specified
S4
Keynote Adresses at India AI Impact Summit 2026 — -Sergio Gore- U.S. Ambassador to India Ambassador Sergio Gore explained that Pax Silica creates “a coalition of capabil…
S5
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Ambassador Sergio Gor And that’s actually a great segue to shift to Ambassador Gore, who just arrived in India with a b…
S6
https://dig.watch/event/india-ai-impact-summit-2026/keynote-adresses-at-india-ai-impact-summit-2026 — We are building a new architecture that diffuses intelligence, placing the awesome power of AI into the palm of our peop…
S7
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S8
Announcement of New Delhi Frontier AI Commitments — -Shri Ashwini Vaishnaw: Role/Title: Honorable Minister for Electronics and Information Technology, Area of expertise: El…
S9
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — -Ashwini Vaishnaw- Minister for Economic Electronics and Information Technology of India
S10
Keynote Adresses at India AI Impact Summit 2026 — -Jacob Helberg- Undersecretary of State for Economic Affairs, United States I invite our distinguished guests to please…
S11
S12
Keynote Adresses at India AI Impact Summit 2026 — -Sanjay Mehrotra- CEO of Micron Technology And so we are here to listen to our distinguished guests as they present the…
S13
https://dig.watch/event/india-ai-impact-summit-2026/keynote-adresses-at-india-ai-impact-summit-2026 — And so we are here to listen to our distinguished guests as they present their views, their remarks on Pax Silica. This …
S15
Keynote-Sundar Pichai — -Moderator: Role/Title: Event Moderator; Area of Expertise: Not mentioned -Mr. Dario Amote: Role/Title: Not mentioned; …
S16
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — -Sundar Pichai: Role/Title: Not specified in transcript; Area of expertise: Technology (implied)
S17
From India to the Global South_ Advancing Social Impact with AI — AI is the new electricity. The question is who has the switch? And today that’s what we will be discussing. You know, if…
S18
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Although I did check, and I can gently point out that England remains just ahead of India in the ICC test rankings, so n…
S19
Skilling and Education in AI — Infrastructure development emerged as crucial, with investments in data centers, subsea cables, and compute capacity to …
S20
EU Digital Diplomacy: Geopolitical shift from focus on values to economic security  — The EU emphasises ‘resilient ICT supply chains’ and the use of trusted suppliers. In practice, this means diversifying a…
S21
World Economic Forum Panel: Sovereignty and Interconnectedness in the Modern Economy — Need for economic security and resilience in supply chains
S22
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — Major major achievement and this kind of achievement shows how India can be leading the thought process. We also had Pax…
S23
The Geopolitics of Materials: Critical Mineral Supply Chains and Global Competition — The speakers demonstrated remarkable consensus on key structural challenges (permitting bottlenecks, supply-demand imbal…
S24
The Geoeconomics of Energy and Materials/ DAVOS 2025 — – Jonathan Price- Kgosientso Ramokgopa Balancing Energy Security and Transition Birol emphasizes the importance of cri…
S25
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Adding to what just was discussed, we have a tendency to overestimate the next two years and impact and underestimate wh…
S26
The Battle for Chips — India is placing a strong emphasis on developing a comprehensive ecosystem for the semiconductor industry. The country b…
S27
India faces AI challenge as global race accelerates — China’sDeepSeekhas shaken the AI industry by dramatically reducing the cost of developing generative AI models. While gl…
S28
Meta and Google adopt different approaches to election-related query restrictions in India — As India’s elections conclude and the new government commences its term, Metahas removedrestrictions on election-related…
S29
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Strategic competition and democratic security Tallis argues that while being in an AI arms race is undesirable, allowin…
S30
Responsible AI for Shared Prosperity — Social and economic development Social and economic development | Artificial intelligence
S31
Keynote-Mukesh Dhirubhai Ambani — Artificial intelligence | Social and economic development
S32
How AI Drives Innovation and Economic Growth — Artificial intelligence | Capacity development | Social and economic development Artificial intelligence | Social and e…
S33
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S34
Keynote Adresses at India AI Impact Summit 2026 — “India is a trusted country.”[66]. “And critically, India brings strength.”[68]. “I welcome you all and especially those…
S35
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — India positioned as a trusted partner in global semiconductor supply chain through Pac Silica agreement
S36
Nations unite to strengthen AI and chip networks — South Korea hasjoined the US-led Pax Silica initiative, a new partnership aimed at fortifying the global AI and semicond…
S37
AI Meets Cybersecurity Trust Governance & Global Security — The main disagreements center on the role of regulation versus industry pressure, the urgency of action versus deliberat…
S38
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing …
S39
Protecting critical infrastructure in a fragile cyberspace — ‘Securing Critical Infrastructure in Cyber: Who and How?’ is the name of one of the main panels at IGF 2024 in Riyadh, w…
S40
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — The discussion revealed relatively low levels of fundamental disagreement, with most differences centered on implementat…
S41
Digital Technologies in Emerging Countries Edited by Francis Fukuyama and Marietje Schaake — India’s current policy envisions a future semiconductor industry that leverages its domestic talent pool and focus…
S42
https://dig.watch/event/india-ai-impact-summit-2026/keynote-adresses-at-india-ai-impact-summit-2026 — all the design EDA tools, students have available. Counting. Not able to count more than 20 in the whole world. India ha…
S43
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Very high level of consensus with no significant disagreements identified. The alignment spans government policy makers,…
S44
Semiconductor diplomacy — Over the last few years, chips have gained geopolitical and diplomatic relevance. As a result, countries with substantia…
S45
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — All speakers agree that the U.S.-India partnership represents a natural, mutually beneficial collaboration based on comp…
S46
Keynote Adresses at India AI Impact Summit 2026 — -Sundar Pichai- CEO of Google Our commitment extends to reimagining the products people use every day. As one example, …
S47
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Adding to what just was discussed, we have a tendency to overestimate the next two years and impact and underestimate wh…
S48
Keynote-Sundar Pichai — In this comprehensive keynote address delivered in India, Sundar Pichai, CEO of Alphabet and Google, opened with “Namast…
S49
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “And the Vizag project, the AI Hub, which is a $15 billion investment, is our start.”[1]. “And we will bring a full‑stac…
S50
UAE joins US led Pax Silica alliance — The United Arab Emirates hasjoinedPax Silica, a US-led alliance focused on AI and semiconductor supply chains. The move …
S51
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Micron to proceed with $2.75 billion investment in assembly and test operations in Gujarat, India, complementing U.S. ma…
S52
The Battle for Chips — India is placing a strong emphasis on developing a comprehensive ecosystem for the semiconductor industry. The country b…
S53
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — “And I think that’s super important for the future of semiconductors in India that we focus on broad talent”[22]. “I tri…
S54
Skilling and Education in AI — Neena Pahuja from the National Council for Vocational Training (NCBT) emphasized the SWOT initiative and the importance …
S55
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Strategic competition and democratic security Tallis argues that while being in an AI arms race is undesirable, allowin…
S56
Digital sovereignty: the end of the open Internet as we know it? (Part 1) — 2.Just like ‘unthinking sovereignty’ remains important, it is also urgent to rethink and reclaim ‘economic security’. Th…
S57
AI Policy Summit Opening Remarks: Discussion Report — The tone is consistently optimistic and collaborative throughout both speeches. Both speakers maintain an encouraging, f…
S58
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Okay. We don’t see… Okay. Thank you, President. Dear representatives, ladies, gentlemen, and friends, hello, everyone….
S59
Fireside Conversation: 01 — The speakers explored how AI’s risk-benefit equation differs across global regions. Amodei acknowledged that while AI pr…
S60
AI for Democracy_ Reimagining Governance in the Age of Intelligence — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S61
Strengthening bilateral technological cooperation: Indian Prime Minister discusses joint projects in US visit — Indian Prime Minister Narendra Modi is currently undertaking a significant state visit to the United States, where he ha…
S62
Keynote-Demis Hassabis — This discussion features a keynote address by Sir Demis Hassabis, co-founder and CEO of Google DeepMind and Nobel laurea…
S63
Google launches AI Mode Search in India — Googlehas launchedits advanced AI Mode search experience in India, allowing users to explore information through more na…
S64
AI push in India: Google tackles language and farming challenges — Google isintensifyingits AI initiatives in India, with a focus on addressing language barriers and improving agricultura…
S65
Google boosts AI in coding and cloud growth — More than 30% of all code at Googleis now writtenwith the help of AI, according to CEO Sundar Pichai during Alphabet’s Q…
S66
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managin…
S67
Google plans $15bn AI push in India — Google CEO Sundar Pichaisaidat the India AI Impact Summit 2026 in New Delhi that he never imagined Visakhapatnam would b…
S68
Parallel Session A5: Achieving Sustainable and Resilient Transport and Logistics including inSIDS — It acknowledges the importance of technology and infrastructure, the ethical necessity for transparency, and the strateg…
S69
Trade Deals or Disputes? / DAVOS 2025 — Simon Evenett: Thank you very much. Now I’d like to turn to Tak san. Japanese firms have always emphasised their stro…
S70
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — At the well-attended Global Supply Chain Forum, the paramount importance of partnerships and innovation was highlighted …
S71
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Talent development, education and future skills
S72
Major IBM training programme to boost India’s AI, cybersecurity and quantum skills — Technology giant IBM hasannounceda major education initiative to skill 5 million people in India by 2030 in frontier are…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sundar Pichai
3 arguments · 137 words per minute · 574 words · 250 seconds
Argument 1
AI product rollout for Indian consumers – Sundar Pichai
EXPLANATION
Pichai outlines Google’s effort to deliver AI‑driven products tailored for Indian users, emphasizing that these tools will improve everyday experiences across the country. He highlights specific applications that address local needs such as agriculture, healthcare, and multilingual access.
EVIDENCE
He states that Google is building AI products and solutions for Indian consumers and businesses, has contributed 22 Gemini models to AI Coach, and is collaborating with the government on applications like timely monsoon forecasts for farmers, disease screening for diabetic retinopathy, and multilingual information services [13-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote notes a $15 billion AI Hub in Vizag and new subsea cable routes that will support AI-driven services for Indian users, confirming the rollout focus [S4]. The broader vision of AI as a transformative utility for India is highlighted in the discussion of AI’s societal impact [S17].
MAJOR DISCUSSION POINT
US‑India AI Collaboration and Investment
DISAGREED WITH
Jacob Helberg
Argument 2
AI skill development for 10 million leaders – Sundar Pichai
EXPLANATION
Pichai announces a large‑scale skilling initiative aimed at equipping ten million Indian future leaders with AI competencies. The program includes certifications and partnerships with local educational providers.
EVIDENCE
He describes the AI Skill House that will equip 10 million future Indian leaders with AI tools, and a partnership with Badwani AI to offer a Google AI certificate to students and early-career professionals [20-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions on AI skilling and education stress the need for large-scale certification programmes and AI assistants for every Indian, aligning with the 10-million leader initiative [S19].
MAJOR DISCUSSION POINT
US‑India AI Collaboration and Investment
Argument 3
AI infrastructure investment (AI Hub, subsea cables) – Sundar Pichai
EXPLANATION
Pichai details a $15 billion investment in Indian AI infrastructure, including a gigawatt‑scale AI Hub in Vizag and new subsea cable routes linking the United States, India, and the Southern Hemisphere. These assets are intended to boost computing capacity and digital trade.
EVIDENCE
He notes the $15 billion AI Hub in Vizag that will house gigawatt-scale computing, and the India-America Connect Initiative that will add subsea cable routes to expand digital trade routes and act as a literal bridge between the two countries [22-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit keynote details a $15 billion investment in an AI Hub at Vizag and the India-America Connect Initiative that will add subsea cable routes, directly supporting the infrastructure claim [S4][S15].
MAJOR DISCUSSION POINT
US‑India AI Collaboration and Investment
Sanjay Mehrotra
1 argument · 129 words per minute · 496 words · 229 seconds
Argument 1
Semiconductor memory and storage as AI foundation – Sanjay Mehrotra
EXPLANATION
Mehrotra argues that memory and storage are essential components for scaling AI workloads, and that Micron’s leadership in DRAM and NAND technology underpins global AI progress. He also highlights Micron’s R&D and manufacturing presence in India.
EVIDENCE
He explains that memory and storage are critical to driving AI, especially as models grow and real-time performance demands increase, and that Micron is the only Western company developing and manufacturing these components, with successive generations of DRAM and NAND technology [57-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sanjay Mehrotra is identified as the CEO of Micron Technology, a global leader in DRAM and NAND memory, underscoring the argument that memory and storage are foundational for AI workloads [S5].
MAJOR DISCUSSION POINT
US‑India AI Collaboration and Investment
Jacob Helberg
2 arguments · 159 words per minute · 668 words · 250 seconds
Argument 1
Pax Silica as a roadmap for shared future and rejection of weaponised dependency – Jacob Helberg
EXPLANATION
Helberg frames the Pax Silica Declaration as a strategic roadmap that commits the United States and India to resist coercive economic practices and to build a resilient, secure technology ecosystem. He emphasizes the need to say “no” to weaponised dependency and to treat economic security as national security.
EVIDENCE
He describes the declaration as a roadmap for a shared future, stating that the signing signals a refusal of weaponised dependency and economic blackmail, that economic security is national security, and calling for a renewed, secure supply chain [82-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Helberg’s remarks describe Pax Silica as a strategic roadmap that rejects coercive economic practices and builds a resilient technology ecosystem [S4].
MAJOR DISCUSSION POINT
Pax Silica Declaration – Vision for a Secure, Resilient Tech Ecosystem
DISAGREED WITH
Sergio Gore
Argument 2
Need for diversified, trusted supply chains; economic security as national security – Jacob Helberg
EXPLANATION
Helberg warns that current global supply chains are overly concentrated and vulnerable to coercion, arguing that diversified, trusted supply chains are essential for both economic and national security. He cites examples of cyber‑enabled disruptions and mineral blackmail to illustrate the risk.
EVIDENCE
He warns that the foundation of democracy and economic security is eroding due to an over-concentrated supply chain, cites threats of economic coercion, a city’s lights extinguished by a keystroke, and denial of essential minerals as evidence of vulnerability, and calls for saying no to weaponised dependency [92-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Helberg highlights the risks of over-concentrated supply chains and calls for diversification, a theme echoed in EU digital diplomacy on resilient ICT supply chains [S20] and the World Economic Forum’s focus on economic security in supply chains [S21].
MAJOR DISCUSSION POINT
Supply‑Chain Security and Economic Sovereignty
DISAGREED WITH
Sundar Pichai
Sergio Gore
2 arguments · 133 words per minute · 715 words · 320 seconds
Argument 1
Diplomatic endorsement and coalition‑building for Pax Silica – Sergio Gore
EXPLANATION
Gore praises the diplomatic partnership that enabled Pax Silica, highlighting Ambassador Jacob Helberg’s role and the broader U.S.‑India collaboration. He positions the initiative as a testament to the deepening bilateral relationship.
EVIDENCE
He thanks Jacob Helberg for his friendship and leadership, calls the initiative possible because of Helberg, and acknowledges the ambassador’s bridge-building work that underscores U.S. commitment to the partnership [112-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gore, the U.S. Ambassador to India, is praised for his role in enabling the Pax Silica partnership, illustrating diplomatic endorsement of the initiative [S4].
MAJOR DISCUSSION POINT
Pax Silica Declaration – Vision for a Secure, Resilient Tech Ecosystem
Argument 2
Coalition to secure the full silicon stack and resist coercive dependencies – Sergio Gore
EXPLANATION
Gore describes the U.S.‑led coalition that will protect the entire silicon supply chain—from mineral extraction to chip fabrication and data‑center deployment—thereby replacing coercive dependencies with a trusted industrial base.
EVIDENCE
He outlines that the coalition will secure the full silicon stack, replacing coercive dependencies with a positive-sum alliance of trusted industrial bases, and emphasizes that the coalition will ensure technologies defining the next century are controlled by free nations [140-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The coalition’s mandate to protect the entire silicon supply chain, from mineral extraction to chip fabrication and data-center deployment, is outlined in the summit remarks [S4].
MAJOR DISCUSSION POINT
Supply‑Chain Security and Economic Sovereignty
DISAGREED WITH
Jacob Helberg
Ashwini Vaishnav
2 arguments · 0 words per minute · 0 words · 1 second
Argument 1
India’s trusted status, talent depth, and capability in critical tech – Ashwini Vaishnav
EXPLANATION
Vaishnav asserts that India is a trusted global partner with deep talent and extensive capabilities in critical technologies, reinforced by the country’s longstanding civilizational gravitas and supportive foreign policy.
EVIDENCE
He mentions India’s large pool of design and EDA tools, the country’s reputation as a trusted nation, and attributes this trust to Prime Minister Narendra Modi’s foreign policy that respects a 5,000-year-old civilization, thereby linking it to the Pax Silica initiative [174-184].
MAJOR DISCUSSION POINT
Pax Silica Declaration – Vision for a Secure, Resilient Tech Ecosystem
Argument 2
Emphasis on critical mineral processing capacity and self‑reliance – Ashwini Vaishnav
EXPLANATION
Vaishnav stresses the importance of developing India’s own critical mineral processing capabilities to achieve self‑reliance in strategic technologies.
MAJOR DISCUSSION POINT
Supply‑Chain Security and Economic Sovereignty
Participant
1 argument · 49 words per minute · 851 words · 1023 seconds
Argument 1
Framing and procedural facilitation of the Pax Silica signing – Participant
EXPLANATION
The participant provides the ceremonial framework for the event, welcoming dignitaries, announcing the signing, and coordinating the photo‑op and subsequent fireside conversation.
EVIDENCE
He thanks Sundar Pichai, introduces the Pax Silica partnership, invites dignitaries to the stage, announces the signing ceremony, requests the presence of Jacob Helberg and Sergio Gore, manages the timing for the signing, and later directs guests to the photo area and fireside conversation [32-55][190-210].
MAJOR DISCUSSION POINT
Pax Silica Declaration – Vision for a Secure, Resilient Tech Ecosystem
Ashwini Vaishnav
2 arguments · 93 words per minute · 185 words · 118 seconds
Argument 1
India’s international trust stems from its 5,000‑year‑old civilizational heritage and a foreign policy that emphasizes respect and stability – a trust that underpins its role in the Pax Silica partnership.
EXPLANATION
Vaishnav argues that the world views India as a reliable partner because its foreign policy, guided by the gravitas of a long‑standing civilization, has earned lasting confidence. This trust is presented as a foundational element for India’s participation in the new technology coalition.
EVIDENCE
He states that India is seen globally as a trusted country and attributes this trust to Prime Minister Narendra Modi’s foreign policy, which respects the heritage of a 5,000-year-old civilization, thereby giving India a reputation for stability and reliability that is now being incorporated into Pax Silica [178-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vaishnav links India’s longstanding civilizational heritage and respectful foreign policy to its trusted status in the Pax Silica declaration, as highlighted in the summit remarks [S4].
MAJOR DISCUSSION POINT
Pax Silica Declaration — Vision for a Secure, Resilient Tech Ecosystem
Argument 2
India possesses a vast pool of design and EDA tools – 315 in total – far exceeding the global average, demonstrating deep technical talent and capacity for advanced technology development.
EXPLANATION
Vaishnav highlights that India’s engineering community has access to an extensive set of electronic design automation (EDA) tools, far more than any other nation, which signals a strong foundation for innovation in critical technologies.
EVIDENCE
He notes that while the world has fewer than 20 such capabilities, India alone has 315 design and EDA tools, emphasizing the scale and need to further develop this capability [174-177].
MAJOR DISCUSSION POINT
Pax Silica Declaration — Vision for a Secure, Resilient Tech Ecosystem
Agreements
Agreement Points
Building resilient AI infrastructure and supply chains across the US‑India partnership
Speakers: Sundar Pichai, Jacob Helberg, Sergio Gore, Ashwini Vaishnav
AI infrastructure investment (AI Hub, subsea cables) — Sundar Pichai
Need for diversified, trusted supply chains; economic security as national security — Jacob Helberg
Coalition to secure the full silicon stack and resist coercive dependencies — Sergio Gore
Emphasis on critical mineral processing capacity and self‑reliance — Ashwini Vaishnav
All four speakers stress the necessity of large-scale, secure infrastructure, from the $15 billion AI Hub in Vizag and new subsea cable routes (Sundar) [22-27] to diversified, trusted supply chains that protect economic security (Jacob) [92-99] and a coalition that safeguards the entire silicon stack (Sergio) [140-144]. Ashwini adds that India’s own critical-mineral processing is essential for self-reliance [148-149]. Together they present a unified vision of a robust, sovereignty-friendly digital backbone for the US-India alliance.
India as a trusted, talent‑rich partner essential to the Pax Silica coalition
Speakers: Jacob Helberg, Sergio Gore, Ashwini Vaishnav
Pax Silica as a roadmap for shared future and rejection of weaponised dependency — Jacob Helberg
Diplomatic endorsement and coalition‑building for Pax Silica — Sergio Gore
India’s trusted status, talent depth, and capability in critical tech — Ashwini Vaishnav
Jacob frames Pax Silica as a strategic roadmap that depends on trusted partners (Jacob) [82-85][112-115]; Sergio highlights the diplomatic bridge that makes the coalition possible and praises India’s deep engineering talent (Sergio) [146-148]; Ashwini reinforces India’s reputation as a trusted nation with vast design and EDA capabilities (Ashwini) [174-180][182-184]. The three converge on the view that India’s credibility and talent are central to the initiative.
POLICY CONTEXT (KNOWLEDGE BASE)
India’s reputation as a trusted nation was explicitly noted in the AI Impact Summit 2026 keynote and press briefings, positioning it as a key partner in the Pax Silica semiconductor coalition [S34][S35]. The coalition’s expansion to include allies such as South Korea underscores the strategic trust placed in India’s talent pool and design capabilities [S36], while India’s domestic policy emphasizes leveraging its engineering talent for next-generation chip design [S41].
AI as a catalyst for economic and social development
Speakers: Sundar Pichai, Sanjay Mehrotra, Jacob Helberg
AI product rollout for Indian consumers — Sundar Pichai
Semiconductor memory and storage as AI foundation — Sanjay Mehrotra
AI as part of a secure, future‑building architecture — Jacob Helberg
Sundar outlines AI-driven products for agriculture, health and multilingual services (Sundar) [13-20]; Sanjay stresses that memory and storage are essential to scale AI workloads (Sanjay) [57-60]; Jacob describes AI as a transformative power that must be placed in people’s hands to unlock unprecedented possibilities (Jacob) [106-108]. All three agree that AI, supported by robust hardware and tailored applications, is a key driver of development.
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy documents articulate AI’s role in driving inclusive economic growth and social well-being, including the “Responsible AI for Shared Prosperity” framework and analyses of AI’s contribution to innovation and development [S30][S32].
Similar Viewpoints
Both officials argue that security of the technology supply chain—covering minerals, chips and data centres—is essential to protect national and economic security and to avoid coercive dependencies (Jacob) [92-99] and (Sergio) [140-144].
Speakers: Jacob Helberg, Sergio Gore
Need for diversified, trusted supply chains; economic security as national security — Jacob Helberg
Coalition to secure the full silicon stack and resist coercive dependencies — Sergio Gore
Both emphasize that AI must be broadly accessible and under sovereign control, with Sundar focusing on inclusive product deployment (Sundar) [13-20] and Jacob stressing that AI ecosystems should be free from weaponised dependency (Jacob) [82-99].
Speakers: Sundar Pichai, Jacob Helberg
AI product rollout for Indian consumers — Sundar Pichai
Pax Silica as a roadmap for shared future and rejection of weaponised dependency — Jacob Helberg
Both link India’s technical talent and trusted reputation to the success of a coalition that safeguards the silicon stack (Sergio) [146-148] and (Ashwini) [174-184].
Speakers: Sergio Gore, Ashwini Vaishnav
Coalition to secure the full silicon stack and resist coercive dependencies — Sergio Gore
India’s trusted status, talent depth, and capability in critical tech — Ashwini Vaishnav
Unexpected Consensus
Alignment between a corporate semiconductor leader and diplomatic officials on the strategic importance of the full silicon stack
Speakers: Sanjay Mehrotra, Jacob Helberg, Sergio Gore
Semiconductor memory and storage as AI foundation — Sanjay Mehrotra
Need for diversified, trusted supply chains; economic security as national security — Jacob Helberg
Coalition to secure the full silicon stack and resist coercive dependencies — Sergio Gore
While Sanjay’s remarks are technical, focusing on memory and storage as the hardware backbone for AI (Sanjay) [57-60], both Jacob and Sergio frame the same hardware ecosystem as a geopolitical security asset (Jacob) [92-99] and (Sergio) [140-144]. The convergence of a corporate CEO’s hardware emphasis with high-level diplomatic security narratives is not explicitly anticipated, revealing a cross-sector consensus on the strategic centrality of the silicon supply chain.
POLICY CONTEXT (KNOWLEDGE BASE)
The press briefing by HMIT Ashwani Vaishnav highlighted coordinated messaging between industry leaders and diplomatic officials on the full silicon stack’s strategic value within the Pax Silica agreement [S35]. This mirrors the broader narrative of semiconductor diplomacy, where corporate and governmental actors jointly shape policy to secure chip supply chains [S44], and is supported by consensus on workforce development for chip design in India [S43].
Overall Assessment

The speakers display a strong, multi‑layered consensus that the US‑India partnership must be underpinned by secure, diversified AI and semiconductor infrastructure, that India’s trusted talent pool is vital to the coalition, and that AI should be deployed inclusively while being protected from coercive dependencies.

There is high consensus across governmental, diplomatic, and industry actors, indicating a unified strategic direction that blends technological investment, supply‑chain security, and inclusive AI deployment, which bodes well for the implementation of Pax Silica and related initiatives.

Differences
Different Viewpoints
Collaborative AI rollout vs security‑focused supply‑chain priority
Speakers: Sundar Pichai, Jacob Helberg
AI product rollout for Indian consumers – Sundar Pichai
Need for diversified, trusted supply chains; economic security as national security – Jacob Helberg
Pichai emphasizes delivering AI-driven products, skills and infrastructure to Indian users as the primary path to shared prosperity, highlighting concrete initiatives such as Gemini models, monsoon forecasts and multilingual services [13-20]. Helberg, by contrast, warns that without diversified, trusted supply chains and a refusal of “weaponised dependency,” the benefits of AI cannot be secured, stressing that economic security is national security and calling for a renewed, secure supply chain [92-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions at IGF 2024 and AI policy roadmaps reveal tension between rapid collaborative AI deployment and the need for security-focused supply-chain safeguards [S37][S38][S39]. Panel debates on digital public infrastructure similarly note divergent views on prioritizing openness versus security in AI and chip supply chains [S40].
Framing of Pax Silica: rejection of coercion vs coalition‑building optimism
Speakers: Jacob Helberg, Sergio Gore
Pax Silica as a roadmap for shared future and rejection of weaponised dependency – Jacob Helberg
Coalition to secure the full silicon stack and resist coercive dependencies – Sergio Gore
Helberg presents Pax Silica as a strategic roadmap that explicitly says “no” to weaponised dependency, blackmail and coercive economic practices, positioning the declaration as a defensive stance [82-99]. Gore describes the same initiative as a positive-sum coalition that secures the full silicon stack, replacing coercive dependencies with trusted industrial bases and emphasizing collective strength rather than an explicit rejection narrative [140-144].
POLICY CONTEXT (KNOWLEDGE BASE)
The Pax Silica initiative is portrayed both as a coalition-building effort promoting shared standards and as a response to coercive pressures in the global chip arena, as reflected in diplomatic statements and analyses of semiconductor diplomacy [S36][S44]. This framing debate aligns with broader policy discussions on governance versus geopolitical leverage [S37].
Unexpected Differences
Quantitative claim about India’s design/EDA tool capacity
Speakers: Ashwini Vaishnav
India possesses a vast pool of design and EDA tools – 315 in total – far exceeding the global average – Ashwini Vaishnav
Vaishnav asserts that India has 315 design and EDA tools while “the world” has fewer than 20, a claim not corroborated or addressed by any other speaker, making it an unexpected and potentially contested statement within the discussion [174-177].
POLICY CONTEXT (KNOWLEDGE BASE)
A specific claim that India possesses 315 design/EDA tools was documented in briefing notes, providing quantitative backing for India’s emerging design ecosystem, though the figure remains subject to verification [S42]. The claim is consistent with broader assessments of India’s growing semiconductor design capabilities [S41].
Overall Assessment

The speakers are united on the overarching goal of deepening U.S.–India cooperation in AI and technology. Disagreements centre on emphasis: Pichai promotes collaborative AI product rollout, Helberg stresses security‑driven supply‑chain diversification, and Gore frames the effort as a coalition‑building exercise. These differences reflect varied priorities (innovation rollout vs. supply‑chain sovereignty vs. multilateral alliance) rather than fundamental conflict.

The level of disagreement is moderate: while there is clear consensus on partnership, the divergent framing and strategic focus could lead to differing policy prescriptions and implementation pathways, requiring careful coordination to align product‑centric, security‑centric, and coalition‑centric agendas.

Partial Agreements
All speakers concur that a strong U.S.–India partnership is essential for advancing AI and related technologies. However, they diverge on the primary mechanism: Pichai stresses AI products, skills and infrastructure; Mehrotra stresses memory and storage manufacturing; Helberg stresses supply‑chain security and rejecting coercion; Gore stresses a multilateral coalition securing the silicon stack; Vaishnav stresses India’s talent pool and trusted status as the foundation for the partnership [13-20][57-60][82-99][140-144][174-184].
Speakers: Sundar Pichai, Sanjay Mehrotra, Jacob Helberg, Sergio Gore, Ashwini Vaishnav
AI product rollout for Indian consumers – Sundar Pichai
Semiconductor memory and storage as AI foundation – Sanjay Mehrotra
Pax Silica as a roadmap for shared future and rejection of weaponised dependency – Jacob Helberg
Coalition to secure the full silicon stack and resist coercive dependencies – Sergio Gore
India’s trusted status, talent depth, and capability in critical tech – Ashwini Vaishnav
Both the Participant and Helberg agree on the importance of formally signing the Pax Silica Declaration as a milestone, but the Participant focuses on ceremony logistics while Helberg emphasizes the strategic content of the declaration [32-55][82-99].
Speakers: Participant, Jacob Helberg
Framing and procedural facilitation of the Pax Silica signing – Participant
Pax Silica as a roadmap for shared future and rejection of weaponised dependency – Jacob Helberg
Takeaways
Key takeaways
The United States and India are deepening AI collaboration, with Google committing to AI product roll‑outs, skill development for 10 million Indian leaders, and major infrastructure investments such as the AI Hub in Vizag and new subsea cable routes.
Micron highlighted the critical role of memory and storage for AI, announced a $2.75 billion advanced packaging and test facility in Sanand, Gujarat, and emphasized its R&D presence and patent contributions from Indian teams.
The Pax Silica Declaration was signed, framing a shared roadmap for a secure, resilient technology ecosystem and explicitly rejecting weaponised dependency and coercive supply‑chain practices.
U.S. officials (Jacob Helberg, Sergio Gore, Michael Kratios) and Indian leaders (Ashwini Vaishnav) stressed the strategic importance of diversified, trusted supply chains, critical mineral processing, and the full silicon stack as matters of economic and national security.
India’s trusted status, deep talent pool, and emerging capabilities in critical tech were highlighted as essential to the coalition’s success.
Resolutions and action items
Formal signing of the Pax Silica Declaration between the United States and India.
Google to deploy 22 Gemini models via AI Coach, expand AI products for Indian consumers, and advance the AI Hub in Vizag with gigawatt‑scale computing.
Google to launch new subsea cable routes linking the U.S., India, and the Southern Hemisphere to expand digital trade routes.
Micron to invest $2.75 billion in a 500,000‑sq‑ft advanced packaging, assembly, and test facility in Sanand, Gujarat.
Launch of the AI Skill House program targeting 10 million future Indian AI leaders and partnership with Badwani AI for Google AI certification.
Ongoing collaboration with the Indian government on AI applications for agriculture, healthcare, and multilingual services.
Unresolved issues
Specific timelines and milestones for the completion of the Vizag AI Hub and the subsea cable projects were not provided.
Details on how India will develop and scale critical mineral processing capacity remain vague.
Metrics and accountability mechanisms for measuring the impact of the AI Skill House and the 10 million‑leader target were not defined.
Potential regulatory, policy, or geopolitical hurdles that could affect supply‑chain diversification were not addressed.
The exact governance structure and decision‑making process for the Pax Silica coalition were not clarified.
Suggested compromises
A balanced approach that simultaneously pursues security‑focused supply‑chain diversification while maintaining open trade and collaboration (as expressed in the Pax Silica roadmap).
Emphasis on joint investment and shared infrastructure (e.g., subsea cables, AI Hub) as a middle ground between unilateral dependence and isolation.
Thought Provoking Comments
We are on the cusp of an era of hyper‑progress and new discoveries, but the best outcomes are not guaranteed. We must work together to ensure the benefits of AI are available to everyone and everywhere.
This statement frames AI development as a pivotal, uncertain moment that requires global collaboration, moving the conversation beyond corporate announcements to a shared responsibility narrative.
It set a collaborative tone for the summit, prompting subsequent speakers to frame their initiatives (e.g., Google’s AI Hub, subsea cables, Micron’s investments) as part of a broader, inclusive mission rather than isolated business projects.
Speaker: Sundar Pichai
Micron’s $2.75 billion investment in Gujarat will create a 500,000 sq ft facility – a clean‑room the size of ten cricket fields, with steel weighing three and a half times that of the Eiffel Tower and enough concrete to fill 100 Olympic‑size swimming pools.
Beyond the impressive numbers, the comment ties massive physical infrastructure directly to AI enablement, illustrating how memory and storage are the foundational hardware that will power the AI future.
It shifted the discussion from abstract policy to tangible, on‑the‑ground progress, reinforcing the narrative of a concrete “full‑stack” partnership and prompting listeners to consider the scale of investment required for AI leadership.
Speaker: Sanjay Mehrotra
Both of our nations were forged by that very word – ‘no.’ We rejected colonial rule, we said no to coercion, and today we say no to weaponised dependency and economic blackmail. Economic security is national security.
Helberg reframes the Pax Silica declaration as a moral and strategic stance against over‑concentrated supply chains, linking historical self‑determination to contemporary techno‑economic sovereignty.
This marked a turning point, moving the dialogue from partnership celebration to a geopolitical framing of the alliance. It prompted the audience (e.g., Sergio Gore) to echo themes of resilience, strength, and the need for a trusted industrial base, deepening the conversation into security and sovereignty dimensions.
Speaker: Jacob Helberg
Pax Silica is about whether free societies will control the commanding heights of the global economy… Peace comes through strength, not by hoping adversaries will play fair.
Gore expands the alliance’s purpose from economic cooperation to a contest of values, contrasting free‑market innovation with “surveillance states,” and emphasizing that strength is derived from interconnected, trusted supply chains.
His remarks reinforced Helberg’s geopolitical framing and introduced a value‑based narrative that elevated the discussion to a civilizational competition, influencing later remarks about India’s strategic depth and the coalition’s future direction.
Speaker: Sergio Gore
India is a trusted country because of a 5,000‑year‑old civilization’s gravitas; that trust is now becoming part of Pax Silica.
Vaishnav links cultural heritage and soft power to the technical alliance, suggesting that historical credibility can translate into modern strategic trust—a perspective not previously articulated.
This comment added a cultural‑soft‑power dimension to the conversation, prompting participants to view the partnership not only through economic or security lenses but also as a continuation of India’s longstanding global standing.
Speaker: Ashwini Vaishnav
Overall Assessment

The discussion began with a collaborative, technology‑focused narrative led by Sundar Pichai’s call for inclusive AI development. Sanjay Mehrotra’s concrete illustration of massive hardware investment grounded the conversation in tangible progress. Jacob Helberg’s historical analogy and warning about weaponised dependency pivoted the dialogue toward geopolitical stakes, a shift amplified by Sergio Gore’s framing of Pax Silica as a contest of free‑society versus surveillance‑state values. Finally, Ashwini Vaishnav’s cultural reference broadened the alliance’s identity, tying India’s ancient credibility to modern strategic trust. Collectively, these key comments transformed the summit from a series of announcements into a multidimensional discourse on technology, economics, security, and cultural legitimacy, shaping the overall direction and depth of the conversation.

Follow-up Questions
How will the 22 Gemini models contributed by Google be integrated and utilized by the Indian developer community?
Understanding integration details is crucial for maximizing the impact of AI tools on local innovation.
Speaker: Sundar Pichai
What are the specific AI applications planned for delivering timely monsoon forecasts to farmers, screening for diabetic retinopathy, and providing multilingual information services?
Clarifying these use‑cases will help assess real‑world impact and guide further development.
Speaker: Sundar Pichai
What is the curriculum, timeline, and partnership structure of the AI Skill House initiative aimed at equipping 10 million future Indian leaders?
Details are needed to evaluate scalability and effectiveness of the skilling program.
Speaker: Sundar Pichai
What is the current progress, expected completion date, and job‑creation outlook for the $15 billion AI Hub in Vizag?
The hub is a cornerstone of the partnership; its timeline and economic impact require monitoring.
Speaker: Sundar Pichai
What are the technical specifications, capacity gains, and expected economic effects of the India‑America Connect Initiative subsea cable routes?
Understanding the infrastructure rollout will inform expectations for digital trade and connectivity.
Speaker: Sundar Pichai
How will Pax Silica ensure the safety and security of supply chains for critical technology components across borders?
Supply‑chain resilience is a key risk area that needs concrete mitigation strategies.
Speaker: Sundar Pichai
What is the production capacity, technology roadmap, and anticipated impact of Micron’s advanced packaging, assembly, and test facility in Sanand, Gujarat?
Details will indicate how the facility strengthens India’s semiconductor ecosystem.
Speaker: Sanjay Mehrotra
How will Micron expand its R&D collaboration between India and the United States, and what future patent or innovation targets are set?
Tracking collaborative outputs will measure the partnership’s innovation momentum.
Speaker: Sanjay Mehrotra
What concrete actions under Pax Silica will address the over‑concentration of global supply chains and prevent economic coercion?
Specific measures are needed to translate the declaration into resilient supply‑chain structures.
Speaker: Jacob Helberg
What investments, partnerships, and timelines are planned to develop critical mineral processing capacity in India?
Critical minerals are essential for the full AI stack; their domestic processing is a strategic priority.
Speaker: Sergio Gore
How will the full‑stack AI infrastructure—from mineral extraction to wafer fabrication to model deployment—be secured and governed under Pax Silica?
A governance framework is required to protect the end‑to‑end AI supply chain.
Speaker: Jacob Helberg
What strategies are being pursued to expand India’s design and EDA tool capabilities beyond the current 315, and how will education pipelines be scaled?
Enhancing design tools and talent pipelines is vital for sustaining advanced semiconductor development.
Speaker: Ashwini Vaishnav
What metrics will be used to assess the trust, resilience, and effectiveness of the technology ecosystem after the Pax Silica signing?
Measurable indicators are needed to evaluate the success of the partnership over time.
Speaker: Ashwini Vaishnav
How has the Interim Trade Agreement influenced AI and semiconductor collaboration between the United States and India, and what further steps are planned?
Understanding the trade agreement’s impact will guide future policy and investment decisions.
Speaker: Sergio Gore

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Uday Shankar Vice Chairman_JioStar India


Session at a glance: summary, keypoints, and speakers overview

Summary

Speaker 1 opens by praising the Prime Minister’s AI-focused growth agenda and the India AI team for delivering the summit [1][2]. He says he will not debate AI’s readiness or moral issues, but affirms his belief that emerging technologies can transform societies and businesses [4-6][8]. Drawing on three decades in media, he notes how successive tech inflection points, from the first personal computers to digital news platforms, have increased speed, agility and audience reach [9-11]. He highlights India’s rapid media expansion, citing growth to a $30 billion industry, 900 channels, 210 million TV households and 800 million video viewers within about 25 years [15-19][22-23].


Despite this domestic success, he argues India remains a largely internal content producer, unlike smaller nations that have captured global imagination [31-34][36-38]. He identifies structural barriers: the distraction of a large domestic market, limited capital compared with Hollywood budgets, and difficulty attracting global talent [41-48][49-53]. While India possesses world-class creative and technical talent, these resources are often sold to Western productions because domestic monetisation is insufficient [57-61].


AI is presented as a “once-in-a-generation” chance to overcome cost and infrastructure limits, enabling faster, higher-quality production such as a 100-episode series created three to five times quicker than traditional pipelines [70-78][80-82]. He explains that AI will reshape the three pillars of the industry (content, consumer and commerce) by lowering production barriers, enabling personalized viewer experiences and dynamic pricing, thereby expanding the “orange economy” [73-78][90-97][98]. With the global media market projected at $3-3.5 trillion, raising India’s share from under 2% to 4-5% could generate tens of billions of dollars [99-102].


To seize this, he calls for three commitments: self-disruption, building AI-native creative talent through large-scale skilling, and crafting policy that accelerates rather than hinders innovation [105-112][124-130][131-138]. He warns against importing Western regulatory models wholesale and urges a uniquely Indian framework that leverages the country’s entrepreneurial and creative depth [135-138][149-150].


Concluding, he expresses confidence that India’s market scale, cultural richness and technology alignment position it to lead the AI-driven media era if it moves quickly [151-155][158-159].


Keypoints


India’s media and entertainment sector has achieved rapid domestic growth but remains limited in global reach due to structural constraints.


The speaker notes the industry’s rise to the world’s fifth-largest media market and its massive domestic audience, yet highlights that “India has not yet broken through as a global content powerhouse” and points to “capital constraints” and a “target audience largely confined to the domestic audience” as key barriers [31-34][41-46][47-53][56-62].


Artificial intelligence is presented as a once-in-a-generation catalyst that can dissolve these barriers across content, consumer, and commerce.


AI-driven production is said to “reduce costs” and “unlock an unprecedented capacity to produce more,” illustrated by the rapid creation of a 100-episode series [70-78][81-84]; it also enables “genuine consumer segmentation,” “dynamic pricing,” and new value categories [88-96][98-102].


Three concrete commitments are urged for the industry, talent pipeline, and policy environment.


1. Disrupt ourselves or be disrupted – citing past resistance to digital newsrooms and streaming [105-108][111-119];


2. Cultivate AI-native creative talent – a blend of storytelling and technical skill, requiring large-scale upskilling [124-130];


3. Make policy an accelerator – remove obstacles, avoid wholesale import of Western regulations, and craft frameworks that reflect India’s ambitions [131-138].


The overarching vision is for India to become the global media powerhouse of the AI age, leveraging its cultural depth, entrepreneurship, and now-aligned market scale.


The speaker asserts that India’s cultural depth and storytelling DNA constitute its “most powerful competitive asset” in the AI era and that “the race has just begun,” urging the nation to “shape and lead” this transformation [70-73][85-87][149-152][155-158].


Overall purpose:


The discussion aims to rally government, industry leaders, creators, and policymakers around a shared agenda: harness AI to overcome existing capital and talent constraints, modernize the Indian media ecosystem, and position India as the world’s leading source of AI-enhanced content and creative talent.


Overall tone:


The speaker begins with a celebratory and congratulatory tone, shifts to a sober analysis of structural limitations, moves into an enthusiastic and visionary tone about AI’s transformative potential, and concludes with an urgent, rally-calling tone that is hopeful and motivational. The progression moves from praise → problem-identification → opportunity framing → decisive call-to-action.


Speakers

Speaker 1


– Role/Title:


– Area of Expertise: Media & Entertainment, Artificial Intelligence


– Affiliation: JioStar (rendered as “Geostar” in the transcript)


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

Speaker 1 opened the address by congratulating the Honorable Prime Minister for placing artificial intelligence at the centre of the nation’s growth agenda, echoing the Prime Minister’s rallying cry “Create in India, create for the world.” [1-3] He clarified that he would not re-ignite the long-running debate over AI’s readiness or its moral dimensions, preferring instead to focus on the transformative potential of emerging technologies. [4-8]


Drawing on more than three decades of experience in media, he recounted how successive technological inflection points, from the first personal computer in newsrooms to the launch of India’s inaugural end-to-end digital news platform, Aaj Tak, have continually increased speed, agility and audience reach for the businesses he has served. [9-11] He argued that each wave of adoption positioned Indian media at the forefront of innovation, benefitting all stakeholders and helping India, despite being a late entrant, become one of the world’s most exciting media markets. [12-14][15-20][21-23]


He quantified this rapid expansion, noting that within roughly twenty-five years the sector has grown from a modest few-billion-dollar industry to the world’s fifth-largest media and entertainment market, now contributing over $30 billion to the economy. [15-17] He highlighted the proliferation of channels, from a single broadcaster at the turn of the century to about 900 channels in dozens of languages, alongside a jump in television households from 70 million to more than 210 million and an audience of over 800 million video viewers. [18-19][21-23] He also underscored the massive investment by his own company, JioStar, of more than $10 billion in content over the past three years, and the intense competition from global media giants seeking Indian viewers. [24-26]


Despite these achievements, the speaker warned that India remains largely a domestic content producer and has not yet broken through as a global content powerhouse. [31-34][36-38] He identified three inter-linked structural barriers. First, the sheer size of the home market creates complacency, diverting attention from global ambitions. [41-43] Second, capital constraints are stark: the average Hollywood studio budget of $65-100 million and tent-pole projects of up to $350 million dwarf the typical Indian film budget of $3-5 million, limiting the ability to afford high-end production services. [44-53][56-58] Third, a talent paradox persists: India possesses world-class creative and technical talent that often ends up supporting Western productions because domestic monetisation is insufficient, reinforcing a chicken-and-egg cycle of limited capital and limited global reach. [57-62][65-68]


Against this backdrop, the speaker presented artificial intelligence as a “once-in-a-generation” catalyst that can dissolve these constraints. AI-powered production, he argued, not only cuts costs but also unlocks unprecedented capacity to create more content, as demonstrated by JioStar’s 100-episode live-action series Mahabharata Ek Dharmayu, which achieved global-level visual quality three to five times faster than a traditional pipeline and delivered significant economic efficiencies. [70-78][79-82] He asserted that with AI the only remaining limits are imagination and creativity, positioning India’s deep cultural storytelling DNA as its most powerful competitive asset. [83-86]


He then mapped AI’s impact onto the three traditional pillars of the media industry: content, consumer and commerce. On the content side, AI removes long-standing infrastructure barriers, enabling rapid, cost-effective production. [73-78] For consumers, AI shatters the historic “produce-then-receive” monologue by enabling conversational discovery, interactive storytelling and hyper-regionalisation that goes beyond simple dubbing, capturing the authentic texture of India’s distinct markets. [88-92][93-96] On the commerce side, AI makes granular consumer segmentation and dynamic pricing a reality, allowing packaging that reflects the diverse economic realities of India’s 800 million viewers and opening entirely new value categories. [95-97][98-102]


Quantifying the opportunity, he noted that the global media market is currently valued at about $3 trillion and is projected to reach $3.5 trillion by 2029. [99-100] India’s share is presently under 2% [101], but even a modest rise to 4-5% would generate tens of billions of dollars in new value, a transformation that could benefit a large segment of the population. [102-103]


To seize this moment, the speaker called for three concrete commitments. First, the industry must “disrupt itself or be disrupted”, recalling past resistance to digital newsrooms and streaming and urging a proactive redesign of revenue models that fairly reward writers, actors, technicians and producers. [105-112][119-122] Second, India must become a global hotbed for AI-native creative talent, a hybrid of storyteller and technologist, through relentless, large-scale skilling and upskilling programmes that fuse the nation’s rich creative traditions with its sharp engineering expertise. [124-130] Third, policy must act as an accelerator, removing obstacles while avoiding the wholesale import of Western regulatory frameworks; instead, India should craft guardrails that reflect its unique ambitions, learning selectively from other jurisdictions such as China. [131-138]


He underscored the symbolic context of the address, noting that it was delivered from the Bharat Mandapam at the first global AI summit hosted in the Global South. [149-151] He reminded the audience that for too long the intersection of technology and media has been dominated by a handful of countries and companies: “The tools were always made elsewhere. The platforms were built elsewhere. The rules were written elsewhere.” [155-158] He declared that AI is the ultimate leveler, shifting advantage from deep pockets to deep wells of entrepreneurship, creativity and technology adoption, areas where India is uniquely positioned. [149-151][155-158]


The address concluded with a hopeful call to action: “Let us not just participate in this new era. Let us shape and lead this.” [158-159] He expressed confidence that the nation’s cultural depth, entrepreneurial spirit and now-aligned market scale will enable it to lead the AI-driven media era, provided it moves swiftly to claim the role that, in his words, “rightfully belongs to us”. [152-157][158-159]


Session transcript: Complete transcript of the session
Speaker 1

Let me begin by first of all congratulating our Honorable Prime Minister on his vision and leadership in centering this country’s growth agenda around artificial intelligence. I must also compliment the India AI team for executing so flawlessly on the Prime Minister’s vision and bringing us all together at this seminal forum. The summit could not have come a day too soon. As for myself, I am not here to talk about the technology of AI. Enough debate has happened on that, and I do not want to add to the debate on whether we are ready, or to that whole debate of good versus evil. We do a lot of that in our entertainment stories.

But I personally am a big believer in the power of harnessing emerging technologies to transform societies, businesses, and lives of people. Over three decades as a media professional, I have had a ringside view of technology’s transformative impact, starting with the introduction of the first personal computer in newsrooms and the launch of India’s first end-to-end digital news platform, Aaj Tak. At every stage since, technology has allowed the businesses I have been involved with to operate with speed, agility, and efficiency that fundamentally changed our relationship with audiences. At each of these inflection points, these businesses have been at the forefront of adopting and introducing innovations to Indian people. This has helped all stakeholders. It is exactly because of this adoption of cutting-edge technologies that India, despite being a late entrant to the world of technology, has rapidly become one of the most exciting media markets globally.

The transformation has truly been extraordinary. Within the span of just about a quarter century or so, we have gone from an industry valued at just a few billion dollars to the fifth largest media and entertainment market in the world, with our economic contribution now exceeding 30 billion dollars. We have transitioned from one sleepy broadcaster at the turn of the century to about 900 channels across dozens of languages. Our consumer universe has expanded from about 70 million households to more than 210 million television households and over 800 million video viewers. And the content itself has evolved beyond recognition, from a few tentative experiments in family drama to a vast, diverse, multilingual ecosystem serving the most heterogeneous consumer universe in the world.

In this process, we have built an ecosystem that has fired the aspirations and ambitions of the whole country. The aspirations of a generation of Indians, what they wanted to become and what they thought was possible, have been shaped as much by what they watched as by what they were taught. While the social impact gives me immense satisfaction, the economic and business impact is equally compelling. At JioStar alone, we have invested over $10 billion in content over the past three years, and that will continue to be the case going forward, if anything. Every major global media enterprise is competing fiercely for the Indian viewers’ attention. Those who are not here are not here simply because they could not crack this complex market.

So the key question is: what can AI do for the Indian media industry that we are already not doing? To answer that, we need to zoom out and look at the broader landscape a little bit. Despite our remarkable domestic progress, India has not yet broken through as a global content powerhouse. We still produce and consume domestically. Compare this to countries with far smaller populations, less cultural diversity, and less formidable technological capabilities that have nonetheless managed to capture the global imagination. A small country like South Korea gave the world Squid Game and Parasite. Puerto Rico, an island of 3 million people, just gave the world the most streamed artist on the planet, performing entirely in Spanish and headlining the Super Bowl halftime show, grabbing global attention.

These cultures dared to imagine that their stories and their languages could command a global stage, and they succeeded. This is precisely the mindset that the Honourable Prime Minister called for in his rallying cry at Waves last year: Create in India, create for the world. It’s a dream many of us in the media industry have always nurtured, but so far it’s just remained a dream. So why have we not been able to break out of the domestic bounds and achieve a larger mindshare and market share globally? In my view, first and foremost, our big domestic market itself has been a distraction. We can get easily satisfied as long as we are getting attention and business in India.

But our ability to translate our abundant ambition into reality has also been constrained by a few structural factors, chief among them being the capital constraints, an inability to attract global talent, and a target audience largely confined to the domestic market. The numbers make these constraints stark. The average Hollywood studio production commands a budget of 65 to 100 million dollars. A major tentpole runs anything up to 300 or 350 million dollars. The average Indian film, 3 to 5 million dollars. And this is equally true of television production. A single episode of a marquee series in Hollywood can cost up to 20 to 30 million dollars. We can only afford to spend a fraction of that, because, one, we have the constraint, but two, we are not able to get the capital, because our primary market of monetization still remains India.

And as a result, it’s become a spiral and we just cannot compete globally in that race; this financial ceiling has been set. And this has created a paradox of talent as well. India has some of the finest creative and technical talent anywhere in the world. We have created cutting-edge technology and production capabilities in areas such as VFX that power the world’s biggest productions. But these are all deployed to support Western productions. Our own producers and directors who have the quality and the ambition cannot afford these services because our monetization universe is much smaller and more limited. So when both capital and talent are constrained, the horizon of our content narrows with them. Our films, our television, our music have been made primarily for consumers within the country, or at best, for the diaspora overseas.

There have been some exceptions, but they have remained just that: exceptions, not a pattern. The result is a peculiar chicken-and-egg problem. Limited capital, much of which owes to our status as a developing economy, and a primarily domestic audience constrain our global competitiveness. That lack of competitiveness in turn hinders our ability to attract the capital that would close the gap. This is not to lament what we have achieved. We have done remarkably well with the limitations and challenges that we had, but the opportunity at hand is much larger, much bigger. AI provides India a once-in-a-generation opportunity to become the creative capital of the world. Not just the back office for the world’s content, but the front office, the producer and deliverer of content globally, the leader, the standard bearer.

Because our business is built on human creativity, the media and entertainment sector is said to be the biggest beneficiary of AI. This is a catalyst that fundamentally rewires three core pillars on which our entire industry is built: content, consumer and commerce. On content, for decades, the limitations of infrastructure have been a constraint on the business of media and entertainment. Today, that barrier is coming down rapidly. AI-powered production is not just reducing costs, it is unlocking an unprecedented capacity to produce more and offer more. At JioStar, we recently produced Mahabharata Ek Dharmayu, the 100-episode live-action series, which is exhibited right here at the Jio Pavilion. We achieved the visual scale and emotional depth of a global production three to five times faster than a traditional pipeline.

The economic efficiencies were significant, too. What this tells me is that the old barriers are vanishing. The only binding constraints that are left are imagination and creativity. And in a landscape where imagination determines the winner, India’s formidable cultural depth and inherent DNA for storytelling and entrepreneurship have become our most powerful competitive assets. Our agenda at JioStar is clear: to harness these attributes and position ourselves as the world’s leading foundry for stories and creativity. For consumers, we have an opportunity to retire a model that has been one-directional for a century. We produce, they receive. AI shatters that monologue. It allows us to create experiences that audiences have never had before.

We are opening a new frontier in the viewer relationship: conversational discovery, interactive storytelling, and regionalization that goes beyond simple dubbing to capture the authentic texture of India’s distinct markets. And finally, commerce. Since the first newspapers, this industry has operated with exactly two monetization models, advertising and subscription. These are two incredibly blunt levers for a market of 800 million viewers with wildly different economic realities. AI makes genuine consumer segmentation a reality. It enables dynamic pricing and packaging that actually reflect how people live, how they consume, what they consume, and what they can afford. It unlocks entirely new categories of value we haven’t even begun to imagine in the media and entertainment sector.

Taken together, the disruption across the three pillars of content, consumer, and commerce forms the very engine of the orange economy that the Honorable Prime Minister talks about. The global media market is nearly $3 trillion today, heading to $3.5 trillion by 2029. India’s share is currently less than 2%. AI offers us the potential to expand our share in this pie. Even a modest shift in our share of global revenue from 2% to 4% or 5% would represent tens of billions of dollars in new value creation and can be transformational for a large segment of our people. But opportunity and outcome are not the same thing. We need all stakeholders pulling in the same direction. To seize the moment, we need three commitments from everyone in this country and in this room.

First, disrupt ourselves or be disrupted. I’ve seen this movie before. When we introduced digital newsrooms, senior editors resisted. When streaming arrived, traditional broadcasters looked the other way. The pattern is almost always the same. Incumbents defend the fortress until the walls come down and they are buried under it. We cannot afford the same mistake. Right now, we have an advantage the West does not. The freedom to move. The lack of baggage. Hollywood is approaching AI defensively, paralyzed by legal battles and locked in protectionist reflexes. The incumbents are conflicted and held back by the legacy value that they have accumulated. Luckily, we don’t have such liabilities. We can design the revenue models that actually work for everyone.

The writers, the actors, the technicians, and the producers. This does not have to be a zero-sum game. It is a larger pie and everybody must share it fairly and squarely. We can set the global precedent, but only if we lead with ambition rather than anxiety. Secondly, India must become the global hotbed for AI-native creative talent. The most valuable person in tomorrow’s media industry is not a pure technologist, not a traditional artist. It is a blend of both. Someone who can conceive a world-class story and command the AI tools to bring it to life. We have the deepest creative traditions and the sharpest engineering minds. The task now is to fuse them seamlessly through a relentless focus on skilling and upskilling at scale so that the world looks to India for this exact kind of talent.

And finally, policy must be an accelerator. In this early stage of our growth and ambition, it should not become a brake. Our creators do not need a roadmap handed to them. They simply need the obstacles removed, because these are early days. The guardrails we set now will have a massive multiplier effect on our competitiveness in future. As we shape these frameworks, we must resist the temptation to import Western regulatory constructs wholesale. Look at China. It’s been very clear-eyed about this. They identified exactly what they needed to outpace the West and built their regulatory approach around that goal. Our frameworks must also reflect our unique ambitions and opportunities. We are sitting in Bharat Mandapam at the first global AI summit hosted in the Global South.

This is significant in a way that goes far beyond symbolism. For too long, the intersection of technology and media has been dominated by a handful of countries and companies. The tools were always made elsewhere. The platforms were built elsewhere. The rules were written elsewhere. AI changes that equation forever. Everybody is starting at the same place as far as application to this sector is concerned. When the barriers across the entire value chain collapse, the advantage may shift decisively. It moves away from those with the deepest pockets and towards those with the deepest wells of entrepreneurship, creativity, and adoption of technology. And no country on earth is better positioned for that shift than India. The question before us today is not whether India can become the global media powerhouse of the AI age.

It is whether we will move fast enough to claim the position that rightfully belongs to us. I believe we will. The energy and the ambition of this country always gives me hope. The stories have always been here. Now the scale of our market and the power of our technology have finally aligned, and the race has just begun. This technology is the ultimate leveler. Let us not just participate in this new era. Let us shape and lead this. Thank you very much. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (12)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The Prime Minister’s rallying cry is “Create in India, create for the world.””

The knowledge base records the Prime Minister’s rallying cry as “Create in India, create for the world” in the keynote by Uday Shankar [S7].

Confirmed (high)

“The speaker has more than three decades of experience in media and witnessed the introduction of the first personal computer in newsrooms.”

A source notes the speaker’s three-decade media career and reference to the first personal computer’s impact on newsrooms [S10].

Confirmed (medium)

“India now has about 900 television channels and produces roughly 1,500 films compared with Hollywood’s 250 films.”

The summit keynote cites 900 TV channels and the production figures of 1,500 Indian films versus 250 Hollywood films [S8].

Additional Context (medium)

“India remains largely a domestic content producer and must think beyond domestic concerns to become a global content powerhouse.”

A separate address stresses that India should embrace a global telecom super-power role rather than focus only on domestic markets, adding nuance to the domestic-versus-global discussion [S56].

External Sources (59)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Artificial Intelligence Strategy for the U.S. Department of Justice — | Message from the CIO ……………………………………………………………………………………….
S5
Impact the Future – Compassion AI | IGF 2023 Town Hall #63 — Historically, transformation or shift from one age to another has always involved some form of technology. The analysis…
S6
What Is Sci-Fi, What Is High-Tech? / Davos 2025 — – Building public trust is critical for the successful adoption of these transformative technologies
S7
Keynote by Uday Shankar Vice Chairman_JioStar India — Drawing from his extensive experience witnessing technology’s transformative impact—from the introduction of personal co…
S8
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — And the thought is we’re also moving from a world of finite. If I look at content today, in whichever platform it is, ri…
S9
High-level ministerial roundtable on digital trade: Do regional trade agreements indicate the way forward for the multilateral trading system? — Creating an enabling environment for e-commerce within a country is crucial, but must take into account the local contex…
S10
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-uday-shankar-vice-chairman_jiostar-india — Because, one, we have the constraint, but two, we are not able to get the capital because our primary market of monetiza…
S11
Sticking with Start-ups / DAVOS 2025 — Bhatnagar explains how AI is transforming content creation and enabling new business models. He highlights the reduced c…
S12
Embracing the future of e-commerce and AI now (WEF) — The system improves productivity and speed.
S13
AI to boost India’s media and entertainment sector — AI could boost revenues by 10% and reduce costs by 15% for media and entertainment firms, according to a report by EY, unv…
S14
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S15
Keynotes — Oleksandr Bornyakov: Dear ladies and gentlemen, I’m honored to represent Ukraine today here in Strasbourg in the heart o…
S16
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Melanie MacNeil:Hi, everyone. Good morning, good afternoon, depending on where you are. If you just bear with me for one…
S17
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Paola Galvez: Thank you, Ananda. Hello, everyone. Thank you so much for joining us to this very, very critical conver…
S18
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — This comment was particularly insightful because it revealed the existential nature of the talent shortage for industry …
S19
TradeTech’s Trillion-Dollar Promise — Creating an innovative and flexible environment for policy-making and regulation is crucial to ensure the quick adaptati…
S20
China’s top regulator to boost industrial internet with new policies — China’s top industry regulatorannounced plans to create new policies to boost the advancement of the industrial internet…
S21
Foreword — Governments of all four countries acknowledge the importance of ICT for the economy. However, the scope and implementati…
S22
Keynote by Uday Shankar Vice Chairman_JioStar India — But I personally am a big believer in the power of harnessing emerging technologies to transform societies, businesses, …
S23
A Decade Later-Content creation, access to open information | IGF 2023 WS #108 — In conclusion, over the past decade, there have been significant advancements in internet video, IP rights management, a…
S24
How AI Drives Innovation and Economic Growth — Michael, your answer should be read the book. Okay. We’ve spoken about the use cases of India, but setting up digital ID…
S25
The Global Power Shift India’s Rise in AI & Semiconductors — Both speakers acknowledge that while India has become excellent at fast-following, true leadership requires scaling ambi…
S26
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — “The first is the fact that we have demographic energy.”[27]”This is certainly a category where India can lead and show …
S27
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S28
Rethinking trade and IP: prospects and challenges for development in the knowledge economy (WTO) — Moreover, the analysis brings attention to the existence of a significant value gap in the creative industry, primarily …
S29
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — However, concerns have been growing over the increase of non-tariff barriers and subsidies that favour domestic markets,…
S30
BOOK LAUNCH: The law and politics of Global Competition — Attention must be given to the imbalance of power and finances between big business and consumer organizations. Understa…
S31
Sticking with Start-ups / DAVOS 2025 — Bhatnagar explains how AI is transforming content creation and enabling new business models. He highlights the reduced c…
S32
Building fair markets in the algorithmic age (The Dialogue) — Algorithms have become prevalent in daily life, from communication through emails and chat apps to online trading and pa…
S33
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S34
Keynote by Uday Shankar Vice Chairman_JioStar India — Current structural barriers including capital constraints ($3-5M Indian film budgets vs $65-100M Hollywood budgets), dom…
S35
AI to boost India’s media and entertainment sector — AI could boost revenues by 10% and reduce costs by 15% for media and entertainment firms, according to a report by EY, unv…
S36
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-uday-shankar-vice-chairman_jiostar-india — AI provides India a once-in-a-generation opportunity to become the creative capital of the world. Not just the back o…
S37
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Melanie MacNeil: Hi, everyone. Good morning, good afternoon, depending on where you are. If you just bear with me for one…
S38
Keynotes — Oleksandr Bornyakov: Dear ladies and gentlemen, I’m honored to represent Ukraine today here in Strasbourg in the heart o…
S39
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S40
Artificial intelligence: a catalyst for scientific discovery and advancement — While concerns about AI’s dangers abound, experts believe that it can greatly accelerate scientific progress and lead to…
S41
China’s top regulator to boost industrial internet with new policies — China’s top industry regulator announced plans to create new policies to boost the advancement of the industrial internet…
S42
Closing Ceremony — Juan Fernandez: I’m going to speak in Spanish, so put your . . . Dear colleagues, I would like to start congratulating…
S43
Policy Guidelines — needed, however, and much work remains to be done to get the full foundations in place.
S44
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — This discussion focused on India’s semiconductor industry development, workforce challenges, and the collaboration betwe…
S45
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — “The first is the fact that we have demographic energy.”[27]”This is certainly a category where India can lead and show …
S46
Welcome Address — Artificial intelligence
S47
Keynote Address_Revanth Reddy_Chief Minister Telangana — Good afternoon, friends. My pleasure to address this event because of some of the best of minds from all over the world …
S48
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — Thank you. Thank you. across CPUs, GPUs, SoCs, and AI engines that power cutting-edge compute systems worldwide. She br…
S49
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Awesome. Great question, Midu. And, you know, we as a nation have proven ourselves to be phenomenal adopters of technolo…
S50
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-hemant-taneja-general-catalyst — Ladies and gentlemen, moving on. Our next speaker is from one of Silicon Valley’s most influential venture capital firms…
S51
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — A significant portion of Tewari’s presentation focused on India’s unique opportunity to lead in AI-driven commerce trans…
S52
AI transforms global healthcare with major growth ahead — The healthcare sector is poised for significant growth as AI continues to revolutionise the industry. A new report from Av…
S53
29, filed Jan. 22, 2010, at 9-10. — Each of the past three decades has seen a new tranche of mobile spectrum create successive waves of innovation and inves…
S54
Microsoft CEO Nadella raises concerns over tech giants’ content battles — During the US antitrust trial against Google, Microsoft’s CEO, Satya Nadella, testified, underscoring the fierce competit…
S55
Widening Lens: A New Narrative for Media Coverage of Cyberspace — Dr. Miqat Zuhairi Bin Miqat, Chief Executive of Malaysia’s National Cybersecurity Agency, highlighted the significance o…
S56
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — India must embrace its role as a global telecom superpower and work as part of a global community, not just focus domest…
S57
WS #305 Financing Self Sustaining Community Connectivity Solutions — Brian Vo: Thanks, Nathalia. And thank you all for having us. I think it’s really been a joy to work with APC and also co…
S58
 Network Evolution: Challenges and Solutions  — Miguel González-Sancho from the European Commission provided insights into the EU White Paper, which outlines the challe…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
17 arguments · 151 words per minute · 2358 words · 935 seconds
Argument 1
Transformative tech adoption history
EXPLANATION
The speaker outlines a three‑decade career in media during which he witnessed successive technological breakthroughs, from the first personal computers in newsrooms to India’s inaugural end‑to‑end digital news platform. Each wave of technology has accelerated speed, agility and efficiency for the businesses he has been part of.
EVIDENCE
He recounts his ringside view of the first personal computer entering newsrooms and the launch of Aaj Tak, India’s first end-to-end digital news platform, noting how technology enabled faster, more agile operations and fundamentally changed audience relationships [9-12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shankar’s keynote recounts witnessing the first personal computers in newsrooms and the launch of Aaj Tak, illustrating successive technology waves in Indian media [S7]; broader analysis of technology-driven historical shifts is discussed in [S5].
MAJOR DISCUSSION POINT
Tech adoption history
Argument 2
Market now 5th largest globally, $30 bn contribution
EXPLANATION
The speaker states that India’s media and entertainment sector has grown into the world’s fifth‑largest market, contributing over thirty billion dollars to the economy. This rapid expansion reflects both scale and economic significance.
EVIDENCE
He cites that within about a quarter of a century the industry moved from a few-billion-dollar size to the fifth largest globally, with an economic contribution exceeding $30 billion [15-17].
MAJOR DISCUSSION POINT
Market size and contribution
Argument 3
Expansion to 900 channels, 210 m TV households, 800 m viewers
EXPLANATION
The speaker highlights the quantitative growth of India’s media ecosystem, noting the proliferation of channels, the rise in television households, and the massive viewer base that now spans the country.
EVIDENCE
He provides figures showing the transition from a single broadcaster to about 900 channels, an increase in TV households from 70 million to over 210 million, and a viewership exceeding 800 million people [17-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote notes that India now has around 900 TV channels and massive content production volumes, confirming the scale of the ecosystem [S8].
MAJOR DISCUSSION POINT
Scale of media ecosystem
Argument 4
Domestic market focus limits global reach
EXPLANATION
The speaker argues that India’s large domestic market has become a distraction, causing the industry to be content‑centric for India rather than aiming for global audiences. This inward focus hampers the ability to compete internationally.
EVIDENCE
He contrasts India’s domestic-only production and consumption with smaller countries that have captured global imagination, and points out that the sheer size of the home market can lead to complacency [31-33][41-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shankar explicitly states that India’s large domestic market has become a distraction, curbing ambitions for global audiences [S7].
MAJOR DISCUSSION POINT
Domestic focus vs global ambition
Argument 5
Capital constraints: Indian budgets far below Hollywood standards
EXPLANATION
The speaker highlights the stark budget gap between Indian productions and Hollywood, emphasizing that limited capital restricts the ability to create globally competitive content.
EVIDENCE
He lists average Hollywood studio budgets of $65-100 million and tent-pole projects up to $350 million, versus Indian films typically made for $3-5 million, and notes that television episodes in Hollywood can cost $20-30 million, a scale Indian producers cannot match [44-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker highlights the stark budget gap between Indian productions and Hollywood, describing capital constraints as a key barrier [S7].
MAJOR DISCUSSION POINT
Budget disparity
Argument 6
Talent paradox: world‑class talent unable to afford local services
EXPLANATION
Despite possessing world‑class creative and technical talent, Indian creators cannot afford high‑end services because the domestic market’s limited monetisation keeps budgets low, creating a paradox where talent is under‑utilised.
EVIDENCE
He points out that India has top-tier talent and cutting-edge VFX capabilities that are largely exported to Western productions, while local creators cannot afford these services due to a smaller monetisation universe [56-61].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He points out that despite world-class creative and technical talent, Indian creators cannot afford high-end services due to limited domestic monetisation [S7].
MAJOR DISCUSSION POINT
Talent‑budget mismatch
Argument 7
Chicken‑and‑egg cycle of limited capital and audience
EXPLANATION
The speaker describes a self‑reinforcing loop: limited capital shrinks audience reach, which in turn deters investment, perpetuating the inability to compete globally.
EVIDENCE
He explains that limited capital, stemming from a developing-economy status and a primarily domestic audience, constrains competitiveness, which then hinders the attraction of further capital, creating a chicken-and-egg problem [65-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote describes a “peculiar chicken-and-egg problem” where limited capital shrinks audience reach, which in turn deters further investment [S7].
MAJOR DISCUSSION POINT
Capital‑audience feedback loop
Argument 8
AI cuts production costs and accelerates timelines
EXPLANATION
The speaker asserts that AI‑driven production reduces expenses and dramatically speeds up content creation, removing traditional infrastructure constraints.
EVIDENCE
He notes that AI-powered production is lowering costs and unlocking unprecedented capacity to produce more, thereby removing long-standing infrastructure barriers [76-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven production is said to lower costs and boost speed, unlocking unprecedented capacity; this is corroborated by observations on cost reduction and productivity gains in AI-enabled media workflows [S11][S12][S8].
MAJOR DISCUSSION POINT
Cost and speed benefits of AI
Argument 9
Case study: 100‑episode Mahabharata produced 3‑5× faster
EXPLANATION
A concrete example is given where an AI‑enhanced workflow enabled a 100‑episode live‑action series to be delivered three to five times faster than traditional pipelines, also delivering economic efficiencies.
EVIDENCE
He cites the production of the Mahabharata Ek Dharmayu series at JioStar, achieving visual scale and emotional depth comparable to global productions while completing it 3-5 times faster and with significant cost savings [78-80].
MAJOR DISCUSSION POINT
AI‑enabled production example
Argument 10
New consumer experiences: interactive, conversational, hyper‑regionalized
EXPLANATION
AI is portrayed as breaking the one‑way broadcast model, enabling interactive storytelling, conversational discovery, and deep regionalisation that goes beyond simple dubbing.
EVIDENCE
He describes moving from a monologue model to AI-driven experiences such as conversational discovery, interactive storytelling, and hyper-regionalisation that captures the authentic texture of India’s diverse markets [88-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker describes AI-enabled experiences such as conversational discovery, interactive storytelling and deep regionalisation that go beyond simple dubbing [S7].
MAJOR DISCUSSION POINT
AI‑driven consumer engagement
Argument 11
Granular segmentation, dynamic pricing, novel monetisation models
EXPLANATION
AI allows precise consumer segmentation, dynamic pricing, and the creation of new value categories, moving beyond the blunt levers of advertising and subscription.
EVIDENCE
He explains that AI makes genuine consumer segmentation possible, enables dynamic pricing and packaging reflecting varied consumer realities, and unlocks entirely new categories of value in the media sector [93-97].
MAJOR DISCUSSION POINT
AI‑enabled monetisation innovation
Argument 12
Potential to raise India’s global media share from <2 % to 4‑5 %
EXPLANATION
The speaker quantifies the opportunity: with the global media market projected at $3‑3.5 trillion, raising India’s share from under 2 % to 4‑5 % could generate tens of billions of dollars in new value.
EVIDENCE
He cites the global media market size of nearly $3 trillion (rising to $3.5 trillion by 2029), India’s current share of less than 2 %, and the transformational potential of shifting to 4-5 % share [99-103].
MAJOR DISCUSSION POINT
Economic upside of AI‑driven expansion
Argument 13
Disrupt self or be disrupted – proactive industry change
EXPLANATION
The speaker calls for the industry to self‑disrupt rather than wait for external forces, citing past resistance to digital newsrooms and streaming as cautionary examples.
EVIDENCE
He recounts previous resistance when digital newsrooms and streaming arrived, noting a pattern where incumbents defend fortresses until they are overtaken, and urges pre-emptive disruption [106-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He urges the industry to self-disrupt rather than wait for external forces, citing past resistance to digital newsrooms and streaming as cautionary tales [S7].
MAJOR DISCUSSION POINT
Need for proactive disruption
Argument 14
Cultivate AI‑native creative talent blending art and technology
EXPLANATION
The speaker stresses the importance of developing talent that combines creative storytelling with AI technical skills, through large‑scale skilling and upskilling initiatives.
EVIDENCE
He argues that tomorrow’s most valuable media professionals will blend artistry and technology, and calls for relentless focus on scaling skill development to fuse India’s creative traditions with its engineering strengths [124-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shankar emphasizes the need to develop talent that fuses storytelling with AI technical skills through large-scale skilling initiatives [S7].
MAJOR DISCUSSION POINT
Talent development for AI‑native media
Argument 15
Policy as accelerator: craft India‑specific guardrails, avoid wholesale Western models
EXPLANATION
The speaker urges policymakers to create enabling frameworks tailored to India’s ambitions, learning from other jurisdictions but not copying Western regulations wholesale.
EVIDENCE
He recommends that policy should accelerate growth, remove obstacles, and be shaped to India’s unique goals, citing China’s tailored regulatory approach as an example while warning against importing Western constructs [130-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He calls for India-tailored policy frameworks that accelerate growth while learning from other jurisdictions, echoing broader recommendations on creating enabling environments for digital trade [S9][S7].
MAJOR DISCUSSION POINT
India‑centric policy framework
Argument 16
India uniquely positioned as the ultimate leveler in AI media
EXPLANATION
The speaker claims that India’s combination of cultural depth, entrepreneurial spirit, and emerging AI capabilities makes it the ideal country to lead the AI‑driven media transformation.
EVIDENCE
He asserts that no country is better positioned than India for the shift, describing it as the ultimate leveler where advantage moves from deep pockets to entrepreneurship, creativity, and technology adoption [149-157].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote asserts that India’s cultural depth, entrepreneurial spirit and emerging AI capabilities make it the ideal country to lead the AI-driven media transformation [S7].
MAJOR DISCUSSION POINT
Strategic positioning of India
Argument 17
Move fast, shape and lead the AI‑driven global media era
EXPLANATION
The speaker concludes with a call to action, urging rapid execution to claim the global media leadership role that he believes rightfully belongs to India.
EVIDENCE
He frames the question as whether India will move fast enough to claim the position, expresses confidence in the country’s energy and ambition, and calls for shaping and leading the new era [150-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker concludes with a call to rapid execution to claim global media leadership in the AI era, reinforcing the urgency of swift action [S7].
MAJOR DISCUSSION POINT
Call to rapid action and leadership
Agreements
Agreement Points
India’s media sector has rapidly transformed into a large, globally significant market driven by successive technology adoption.
Speakers: Speaker 1
Transformative tech adoption history; Market now 5th largest globally, $30 bn contribution; Expansion to 900 channels, 210 m TV households, 800 m viewers
The speaker recounts a three-decade career witnessing technology waves, from personal computers to digital news platforms, that have accelerated speed, agility and efficiency, and cites the sector’s growth to the fifth-largest global market with over $30 bn in economic contribution, 900 channels, 210 m TV households and 800 m viewers [9-12][15-19].
POLICY CONTEXT (KNOWLEDGE BASE)
This observation echoes the transformation narrative highlighted in Uday Shankar’s keynote on technology’s impact on media [S22] and the IGF 2023 discussion on internet video, IP rights and copyright evolution [S23].
India’s large domestic market and capital constraints limit its ability to compete globally.
Speakers: Speaker 1
Domestic market focus limits global reach; Capital constraints: Indian budgets far below Hollywood standards; Talent paradox: world‑class talent unable to afford local services; Chicken‑and‑egg cycle of limited capital and audience
The speaker argues that the sheer size of the home market creates complacency, while low production budgets (US$3-5 m vs. US$65-100 m in Hollywood) and limited capital prevent Indian creators from accessing high-end services despite world-class talent, producing a self-reinforcing loop of limited capital and audience reach [31-33][41-43][44-53][56-61][65-68].
POLICY CONTEXT (KNOWLEDGE BASE)
Analysts note that while India’s domestic scale is a strength, it also creates capital coordination challenges for global competition, as described in the Global Power Shift briefing on AI and semiconductors [S25] and the WTO-focused session on non-tariff barriers affecting emerging economies [S29].
AI can dramatically reduce production costs, accelerate timelines, and enable new consumer experiences and monetisation models.
Speakers: Speaker 1
AI cuts production costs and accelerates timelines; Case study: 100‑episode Mahabharata produced 3‑5× faster; New consumer experiences: interactive, conversational, hyper‑regionalized; Granular segmentation, dynamic pricing, novel monetisation models; Potential to raise India’s global media share from <2 % to 4‑5 %
AI-powered production lowers expenses and speeds delivery, illustrated by a 100-episode series completed 3-5 times faster with economic efficiencies; it also creates interactive, hyper-regional experiences, precise segmentation, dynamic pricing, and could double India’s share of the $3-3.5 trillion global media market, adding tens of billions of dollars [76-80][88-92][93-97][99-103].
POLICY CONTEXT (KNOWLEDGE BASE)
The cost-saving and efficiency benefits of AI for content creation were underscored at DAVOS 2025, where AI’s impact on production was highlighted [S31], and the broader economic boost from AI was discussed in the ‘Digital Future for All’ session [S33].
The industry must proactively disrupt itself, develop AI‑native creative talent, and adopt India‑specific policy frameworks.
Speakers: Speaker 1
Disrupt self or be disrupted – proactive industry change; Cultivate AI‑native creative talent blending art and technology; Policy as accelerator: craft India‑specific guardrails, avoid wholesale Western models
The speaker calls for self-disruption, citing past resistance to digital newsrooms and streaming, urges large-scale skilling to fuse storytelling with AI tools, and recommends policy that removes obstacles while reflecting India’s unique ambitions rather than copying Western regulations [106-112][124-130][130-138].
POLICY CONTEXT (KNOWLEDGE BASE)
India-centric AI governance and talent development were emphasized in the responsible AI frameworks briefing, which stresses the need for tailored policies for multilingual, low-resource contexts [S27], and in the algorithmic-age market fairness dialogue calling for specific regulatory measures [S32].
India is uniquely positioned to become the global AI‑driven media leader and must move quickly to claim that role.
Speakers: Speaker 1
India uniquely positioned as the ultimate leveler in AI media; Move fast, shape and lead the AI‑driven global media era
The speaker asserts that India’s cultural depth, entrepreneurial spirit and emerging AI capabilities make it the ideal country to lead the AI transformation, and urges rapid action to secure the leadership position that “rightfully belongs” to India [149-157][150-158].
POLICY CONTEXT (KNOWLEDGE BASE)
The strategic positioning of India in AI-driven media is highlighted in the AI Storytelling Civilization keynote, citing demographic energy and cultural depth as competitive advantages [S26], and reinforced by the Global Power Shift analysis on scaling ambition for global leadership [S25].
Similar Viewpoints
Both arguments emphasize that AI dramatically reduces costs and shortens production cycles, as shown by the Mahabharata series achieving visual quality comparable to global productions in a fraction of the time [76-80].
Speakers: Speaker 1
AI cuts production costs and accelerates timelines; Case study: 100‑episode Mahabharata produced 3‑5× faster
These points converge on the need for an ecosystem‑wide, proactive transformation—through self‑disruption, talent development, and enabling policy—to harness AI’s potential [106-112][124-130][130-138].
Speakers: Speaker 1
Disrupt self or be disrupted – proactive industry change; Cultivate AI‑native creative talent blending art and technology; Policy as accelerator: craft India‑specific guardrails
Unexpected Consensus
Recognition that a massive domestic market can be a hindrance rather than an advantage for global competitiveness.
Speakers: Speaker 1
Domestic market focus limits global reach; Capital constraints: Indian budgets far below Hollywood standards; Talent paradox: world‑class talent unable to afford local services
While large domestic audiences are often viewed as a strength, the speaker highlights them as a distraction that breeds complacency and limits capital, creating a paradox where world-class talent cannot be fully utilized, a perspective that may be unexpected given the usual narrative of size as an asset [31-33][41-43][44-53][56-61].
POLICY CONTEXT (KNOWLEDGE BASE)
The paradox of a large domestic market was discussed in WTO-related sessions on trade barriers [S29] and in the law and politics of global competition briefing, which points to power imbalances that can limit outward competitiveness [S30].
Overall Assessment

Speaker 1 consistently argues that India’s media sector has achieved remarkable scale through technology adoption, but its domestic‑centric focus, capital shortfalls, and talent‑budget mismatch impede global leadership. AI is presented as a transformative lever that can cut costs, unlock new consumer experiences, and double global market share, provided the industry self‑disrupts, cultivates AI‑native talent, and receives India‑specific policy support. The speaker concludes with a strong call for rapid action, positioning India as uniquely suited to lead the AI‑driven media era.

Since only a single speaker is present, internal consensus is very high—arguments are coherent and mutually reinforcing—but there is no cross‑speaker agreement to gauge broader stakeholder alignment. The implications are that the vision rests on one authoritative voice; achieving the outlined goals will require translating this internal consensus into multi‑stakeholder buy‑in across industry, government and talent pools.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only statements from Speaker 1; no other speakers are present, so there are no points of disagreement or partial agreement. The discussion is a unilateral presentation of a vision rather than a debate.

None – the absence of multiple viewpoints means no disagreement, implying consensus or at least no contested issues within this session.

Takeaways
Key takeaways
India’s media & entertainment sector has grown rapidly to become the 5th largest globally, with a $30 bn economic contribution, 900 channels, 210 m TV households, and 800 m video viewers. Structural barriers (domestic market focus, limited capital, and a talent‑capital paradox) prevent Indian content from achieving global competitiveness. Artificial intelligence can dramatically lower production costs, accelerate timelines, enable new consumer experiences (interactive, hyper‑regionalized), and allow granular segmentation and dynamic pricing. AI offers the potential to more than double India’s share of the global media market (from <2 % to 4‑5 %), unlocking tens of billions in value. Three strategic commitments are essential: (1) the industry must proactively disrupt itself, (2) develop AI‑native creative talent that blends artistry and technology, and (3) craft India‑specific policy guardrails that accelerate rather than hinder innovation.
Resolutions and action items
Industry stakeholders to adopt a self‑disruption mindset and redesign revenue models to be inclusive of writers, actors, technicians, and producers. Launch large‑scale skilling and upskilling programmes to create AI‑native creative talent capable of conceiving stories and operating AI tools. Policymakers to formulate AI‑focused regulatory frameworks tailored to India’s ambitions, avoiding wholesale adoption of Western models and ensuring they act as accelerators. Invest in AI‑driven production pipelines (e.g., replicating the Mahabharata case) to achieve cost efficiencies and faster content creation.
Unresolved issues
Specific mechanisms for attracting the large capital required to compete with Hollywood budgets remain undefined. How to effectively bridge the chicken‑and‑egg cycle between limited domestic monetisation and global market entry is not addressed. Detailed design of new monetisation models (dynamic pricing, segmentation) and their implementation pathways are not fleshed out. Concrete policy measures, timelines, and responsible agencies for creating the proposed regulatory guardrails are not specified. Strategies for scaling AI talent development to meet industry demand and for retaining that talent within India are not fully resolved.
Suggested compromises
Adopt a balanced revenue‑sharing model that fairly compensates creators, talent, and production teams, positioning the shift as a larger pie rather than a zero‑sum game. Blend international best‑practice regulatory concepts with India‑specific objectives, avoiding wholesale import of Western frameworks while learning from other jurisdictions.
Thought Provoking Comments
Our big domestic market itself has been a distraction. We can get easily satisfied as long as we are getting attention and business in India, which limits our ability to translate ambition into global reality.
This reframes the commonly held belief that India’s large internal audience is an advantage, instead presenting it as a structural barrier to global competitiveness.
It shifts the conversation from celebrating domestic growth to diagnosing a core limitation, prompting the audience to consider how to overcome complacency and look beyond the Indian market.
Speaker: Speaker 1
Capital constraints and a talent paradox create a chicken‑and‑egg problem: we have world‑class creative and technical talent, but we cannot afford the high‑cost services that our own creators need because our monetisation universe is limited to India.
By linking finance and talent, the speaker highlights a systemic issue that explains why Indian content rarely breaks globally, adding depth to the earlier point about market size.
This insight leads to a deeper analysis of why Indian productions lag internationally and sets up the later argument that AI can break this loop.
Speaker: Speaker 1
AI provides India a once‑in‑a‑generation opportunity to become the creative capital of the world – not just the back‑office for the world’s content, but the front‑office, the producer and deliverer of content globally.
It introduces a bold, forward‑looking vision that positions AI as a strategic equaliser rather than a mere tool, challenging the audience to think about India’s role on the global stage.
The comment pivots the discussion from problem‑identification to solution‑orientation, opening space for talking about AI‑driven production, consumer experiences, and new commerce models.
Speaker: Speaker 1
AI‑powered production is not just reducing costs, it is unlocking an unprecedented capacity to produce more and offer more – the only binding constraints left are imagination and creativity.
This statement reframes AI as a catalyst that removes traditional barriers, shifting the limiting factor from resources to creative imagination, which is a powerful conceptual shift.
It encourages participants to focus on nurturing creativity and imagination, leading to the later emphasis on AI‑native talent and skilling.
Speaker: Speaker 1
For consumers, AI shatters the monologue of ‘produce‑then‑receive’ and enables conversational discovery, interactive storytelling, and regionalisation that goes beyond simple dubbing.
The comment expands the conversation from production economics to the consumer experience, highlighting how AI can transform audience engagement.
It broadens the discussion to include new business models and user‑centric innovation, setting the stage for the later point about dynamic pricing and segmentation.
Speaker: Speaker 1
AI makes genuine consumer segmentation a reality. It enables dynamic pricing and packaging that actually reflect how people live, how they consume, and what they can afford.
This introduces a concrete commercial implication of AI, moving beyond abstract benefits to tangible revenue‑generation strategies.
The audience is prompted to think about monetisation innovations, linking the earlier discussion of limited revenue models to actionable AI‑driven solutions.
Speaker: Speaker 1
First, disrupt ourselves or be disrupted. Incumbents defend the fortress until the walls come down and they are buried under it. We have the advantage of freedom to move and lack of legacy baggage.
A clear call to action that challenges complacency and frames the current moment as a strategic inflection point.
It creates a turning point in tone—from analytical to urgent—and rallies stakeholders to consider proactive change rather than passive observation.
Speaker: Speaker 1
India must become the global hot‑bed for AI‑native creative talent – a blend of storyteller and technologist – through relentless skilling and upskilling at scale.
This identifies the future workforce archetype needed to realise the AI vision, adding a human‑capital dimension to the technological narrative.
It steers the conversation toward education, talent pipelines, and policy, preparing the ground for the subsequent point about regulatory frameworks.
Speaker: Speaker 1
Policy must be an accelerator, not a brake. We should resist importing Western regulatory constructs wholesale and instead craft frameworks that reflect India’s unique ambitions.
It challenges the default assumption that existing global regulations are suitable, urging a tailored policy approach that aligns with national goals.
This comment shifts the discussion toward governance, prompting participants to consider how regulation can either enable or hinder the AI‑driven media revolution.
Speaker: Speaker 1
AI changes the equation forever: everyone starts at the same place, and when barriers collapse the advantage moves from deepest pockets to deepest wells of entrepreneurship, creativity, and adoption – and no country is better positioned than India.
A sweeping, optimistic framing that positions India uniquely in the emerging AI landscape, reinforcing the earlier arguments with a unifying narrative.
It serves as a concluding rallying point, consolidating earlier themes and leaving the audience with a sense of urgency and possibility.
Speaker: Speaker 1
Overall Assessment

Speaker 1’s monologue weaves a narrative that moves from celebrating India’s media growth to diagnosing structural constraints, then pivots to AI as a transformative lever across content, consumer, and commerce. Each of the highlighted comments acts as a catalyst—first exposing the paradox of domestic size, then linking capital and talent, and finally offering AI‑driven solutions and a strategic call to action. These moments redirect the conversation from description to prescription, prompting listeners to rethink market orientation, embrace new creative‑technical talent models, and advocate for supportive policy. Collectively, the thought‑provoking remarks shape the discussion into a forward‑looking roadmap, turning a celebratory speech into a strategic blueprint for positioning India as a global AI‑powered media powerhouse.

Follow-up Questions
What can AI do for the Indian media industry that we are already not doing?
Identifies the core opportunity gap that AI could fill, guiding strategic focus for the sector.
Speaker: Speaker 1
How can India overcome the capital constraints that limit high‑budget global‑level productions?
Addressing financing is crucial to enable Indian creators to compete with Hollywood‑scale projects and attract investment.
Speaker: Speaker 1
What strategies can be employed to attract global creative and technical talent to India’s media ecosystem?
Talent acquisition is essential to bridge the gap between domestic expertise and the needs of world‑class content creation.
Speaker: Speaker 1
How can India develop AI‑native creative talent that blends storytelling with advanced technology skills?
Building a workforce proficient in both creative and AI tools is vital for producing globally competitive content.
Speaker: Speaker 1
What policy frameworks should be created to accelerate AI adoption in media while avoiding over‑regulation?
Effective, India‑specific regulations can remove obstacles and provide guardrails that enhance competitiveness.
Speaker: Speaker 1
What new monetization models beyond advertising and subscription can AI enable for the Indian media market?
Exploring innovative revenue streams is important to better capture value from a highly diverse audience.
Speaker: Speaker 1
How can AI‑driven consumer segmentation and dynamic pricing be implemented to reflect India’s varied economic realities?
Precise segmentation can increase relevance and affordability, driving higher engagement and revenue.
Speaker: Speaker 1
What concrete steps are needed to increase India’s share of the global media market from the current ~2% to 4‑5%?
A clear roadmap is required to translate AI advantages into measurable growth in global market share.
Speaker: Speaker 1
What best practices should be adopted for AI‑powered production pipelines to achieve cost reductions and faster delivery?
Understanding efficient AI workflows can help scale content creation while maintaining quality.
Speaker: Speaker 1
How should ethical guardrails be designed for AI use in media content creation and distribution?
Ensuring responsible AI deployment protects cultural values and maintains public trust.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Mathias Cormann OECD Secretary-General India AI Impact


Session at a glance: Summary, keypoints, and speakers overview

Summary

At the India AI Impact Summit, the OECD thanked India and pledged evidence-based AI policy support worldwide [1-4]. It noted that AI could raise productivity by up to one percentage point annually, backed by nearly three-quarters of a trillion dollars in planned infrastructure spending [8-10]. Effective public policy, which previously enabled the internet and semiconductors, remains essential for realising AI's benefits [11-12]. The OECD tracks AI compute capacity and venture capital, finding that 61% of venture-capital investment now goes to AI firms, with U.S. firms attracting 75% of deal value [15-17]. A report on agentic AI shows half of developers plan to use AI agents, urging better security and privacy [19]. AI incident reports rose from 92 to 324 per month between 2022 and 2025, prompting a common reporting framework [21-22]. The OECD released an AI Index for benchmarking and will launch a toolkit sharing global best practices [24-25]. Its Global Partnership on AI added Malta and Saudi Arabia, bringing membership to 46 countries [27-28]. For firms, it offers the Hiroshima AI Process Code of Conduct, SME updates, and new due-diligence guidance [30-32]. About 27% of jobs face high automation risk; flexible training is needed, especially as only 23% of low-literacy adults receive AI training [35-38]. In partnership with the ILO, the OECD issued an Equitable AI Transitions Playbook [39-40], and the session closed with the introduction of the next panel, on data sovereignty [41-47].


Keypoints


AI’s transformative economic potential – The OECD highlights that widespread AI adoption could raise labor productivity by up to one percentage point annually across OECD and G20 nations, driving greater efficiency, lower costs, and higher living standards, while noting massive private-sector investment of “almost three quarters of a trillion dollars” in AI infrastructure this year [9-10].


OECD’s data-driven support for policymakers – The organization provides evidence-based analysis on AI trends, including tracking global AI compute capacity, venture-capital flows (61 % of VC dollars now go to AI firms, with the U.S. receiving 75 % of deal value), and publishing reports on AI-agent usage and associated security, privacy, and accuracy needs [14-19][16-18].


Monitoring and managing AI-related risks – The OECD collects and classifies AI incident data, showing a rise from 92 to 324 reported incidents per month between 2022-2025, and promotes a common reporting framework to ensure consistency and interoperability [20-22].


Benchmarking and sharing best practices – New tools such as the OECD AI Index and an upcoming interactive toolkit will enable countries to assess progress against OECD recommendations and learn from international good-practice repositories [23-25].


Addressing workforce impacts and equitable transition – The speech acknowledges that about 27 % of jobs are at high automation risk and that low-literacy adults have markedly lower AI-training participation (23 % vs. 61 %). It calls for flexible, modular training and introduces the “Equitable AI Transitions Playbook” developed with the ILO to guide up-skilling and reskilling policies [34-40].


Overall purpose/goal


The discussion serves to showcase the OECD’s role in guiding global AI policy-providing data, risk-monitoring, benchmarking tools, and international coordination-to harness AI’s economic benefits while mitigating associated risks, especially for workers and societies.


Overall tone


The tone is initially celebratory and forward-looking, emphasizing AI’s promise and the scale of investment. It then shifts to an analytical, evidence-based stance as Cormann presents data and policy tools. Mid-speech, the tone becomes cautiously concerned when addressing job displacement and training gaps, before concluding with a collaborative, solution-oriented call to action for governments, industry, and labor groups. The progression moves from optimism to measured caution and ends on a constructive, cooperative note.


Speakers

Mathias Cormann


– Role/Title: Secretary-General, Organisation for Economic Co-operation and Development (OECD)


– Area of Expertise: AI policy, economic impact of artificial intelligence, public policy


[S1]


Speaker 2


– Role/Title: Moderator/Chair for the data-sovereignty panel at the India AI Impact Summit


– Area of Expertise: Event moderation, AI governance (implied)


[S2]


Additional speakers:


Sunil Gupta – Managing Director and Chief Executive Officer, Yota Data Services


Nisubo Ongama – Chief Operating Officer, Kala


Sonia Vaigando – Founders Associate, Kala Limited


Ms. Seema Ambasta – Chief Executive Officer, L & T, Vioma


Mr. Orgo Sengupta – Founder and Research Director, WIDI Center for Legal Policy


Full session report: Comprehensive analysis and detailed insights

The OECD Secretary-General opened the India AI Impact Summit by thanking India for its leadership in convening the global AI community after successful meetings in the United Kingdom, Korea and France, and reaffirmed the Organisation’s commitment to provide evidence-based analysis and policy guidance that supports responsible AI innovation worldwide [1-4].


He then highlighted the transformative economic promise of artificial intelligence. The OECD estimates that, with strong adoption, AI could raise labour productivity by up to one percentage point each year across OECD and G20 economies over the next decade – a boost that would translate into greater efficiency, lower costs and higher living standards. This optimism is underpinned by massive private-sector investment: big-tech firms alone plan to spend almost three-quarters of a trillion dollars on AI infrastructure in the current year [9-10].


Cormann stressed that such benefits will not materialise without effective public policy. He reminded the audience that the foundational technologies enabling today’s AI revolution – from internet connectivity to semiconductor supply chains – were themselves the result of deliberate policy interventions, and that pro-innovation, pro-adoption and pro-safety policies are now equally essential for AI [11-13].


To help governments design those policies, the OECD offers a data-driven service that maps the evolving AI ecosystem. It monitors the global distribution of public AI compute capacity to inform industrial-strategy decisions and supply-chain security, and tracks venture-capital flows, noting that 61 % of worldwide VC dollars (about US$259 billion) now target AI firms, up from 30 % three years ago, with the United States receiving 75 % of that deal value. A new report on the agentic AI landscape, published last week, highlights that half of surveyed developers intend to use AI agents in their work, while flagging the need for improvements in security, privacy and accuracy [14-19][16-18][S46].


Risk management is another pillar of the OECD’s work. Its AI incident database records a sharp rise in reported hazards – from an average of 92 incidents per month in 2022 to 324 per month in 2025 – prompting the creation of a common framework for AI-incident reporting that promotes global consistency and interoperability [20-22][S46].


For benchmarking, the Organisation released the OECD AI Index, an evidence-based tool that lets countries assess progress against the OECD AI Recommendations, and announced an interactive toolkit to be launched later in the year, which will host a repository of good-practice case studies to facilitate peer learning [23-25][S51].


International coordination is pursued through the Global Partnership on AI (GPAI). The partnership, designed to promote the responsible development and use of artificial intelligence grounded in the OECD’s landmarks, welcomed Malta and Saudi Arabia as its newest members, bringing total membership to 46 countries across six continents. The GPAI Council will meet later this morning to discuss next steps in the partnership’s work [26-28].


Beyond governments, the OECD supports businesses. It maintains the reporting framework for the Hiroshima AI Process Code of Conduct, introduced at the AI Action Summit in Paris last year, to foster transparency and accountability, and is now updating the framework to make it accessible to small and medium-sized enterprises. In addition, a new due-diligence guidance for responsible AI was published to help firms navigate the expanding landscape of regulations and voluntary standards [30-32].


Addressing the human dimension, Cormann noted that roughly 27 % of employment is in occupations at the highest risk of automation, underscoring the urgency of upskilling and reskilling programmes. His analysis revealed a stark disparity in AI-training participation: only 23 % of adults with low literacy engage in relevant training compared with 61 % of higher-literacy adults. He advocated for learning models that are flexible, modular and tailored to individual job contexts [34-38][S53].


In partnership with the International Labour Organisation, the OECD has produced the “Equitable AI Transitions Playbook”, which offers concrete policy examples for updating skills frameworks and launching up-skilling and reskilling initiatives that aim to ensure an inclusive AI transition while maximising the technology’s benefits and mitigating its disruptions [39-40][S54].


He then handed the floor to the next speaker. The second speaker thanked the Secretary-General for his insights, introduced the forthcoming data-sovereignty panel, and listed the distinguished panelists – Sunil Gupta (Yota Data Services), Nisubo Ongama (COO, Kala), Sonia Vaigando (Founders Associate, Kala Limited) and Ms. Seema Ambasta (Chief Executive Officer, L & T, Vioma) – before inviting the dignitaries to the stage, signalling a shift from high-level policy framing to a focused discussion on cross-border data governance [41-47].


Overall, the summit’s opening underscored the OECD’s role as a central, evidence-based hub that supplies data, risk-monitoring tools and benchmarking resources, while championing multistakeholder cooperation and inclusive policy design. By coupling quantitative forecasts of AI-driven productivity gains with concrete mechanisms for risk mitigation, corporate guidance and workforce upskilling, the Organisation aims to harness AI’s transformative potential responsibly and equitably, setting the stage for the detailed deliberations that follow.


Session transcript: Complete transcript of the session
Mathias Cormann

India AI Impact Summit. And thank you to India for your leadership in bringing together the global AI community following the successful summits in the United Kingdom, Korea, and France. The OECD is proud to work with you and support policymakers, people, and businesses all around the world in harnessing the benefits of AI. And we do so with our unique data, evidence-based analysis, and policy guidance, aiming to promote responsible innovation and adoption while managing the potential risks along the way. In yesterday’s discussions, we heard about the wide-reaching potential impacts of AI development on our economies and societies. And of course, they continue to evolve as adoption accelerates and new applications are introduced. But one thing is clear.

These impacts are already transformative and will become more transformative going forward. At the OECD, we estimate that with a strong level of adoption, AI could boost labor productivity by up to one percentage point every year across OECD and G20 countries over the next decade. Greater efficiency, lower costs, higher living standards. And the opportunities are also reflected in the scale of investment in AI infrastructure, with almost three quarters of a trillion dollars in investment planned by big tech companies this year alone. Amid the rapid technological change and the massive investment flows, effective public policy is essential to allow AI to reach its full potential. Indeed, the foundational technologies that made this technological revolution possible were very much shaped and supported by public policy, from internet connectivity to semiconductor supply chains, and everything in between.

Today, the OECD helps policymakers develop pro-innovation, pro-adoption, and pro-safety AI policies, drawing on the lessons of these previous interventions, sharing experiences at the cutting edge of AI policy, and identifying policy best practice. First, the OECD helps policymakers understand how AI technologies and business models are evolving and who the key players are in the AI ecosystem. We are tracking the global distribution of public AI compute capacity to help countries design their industrial strategies and assess opportunities to enhance AI supply chain security. We are also tracking global AI investment, with our analysis released earlier this week showing that 61% of all venture capital investment worldwide, or $259 billion US, now goes to AI firms, which is up from just 30% three years ago.

Firms in the United States attract the largest share of venture capital by a wide margin, comprising 75% of global AI venture capital deal value. Our analysis is also helping policymakers keep up with the latest technological developments. Our new report on the agentic AI landscape, published last week, highlights that half of developers in recent surveys plan to use AI agents in their work, while identifying the need for progress on security, privacy and accuracy of AI agents to support further adoption. Second, we help policymakers track and classify AI-related risks. Our data on AI incidents shows that between 2022 and 2025, in just three years, the number of AI

incidents and hazards reported by the media increased dramatically, from 92 to 324 per month on average. The OECD common framework for reporting AI incidents helps promote global consistency and interoperability in AI incident reporting. And thirdly, we help policymakers benchmark their AI policies relative to their peers and international standards. Just yesterday, we released the OECD AI Index, which provides policymakers with an evidence-based tool to assess their progress in implementing the OECD recommendation on AI. We will also launch an interactive toolkit this year, which will feature a repository of good practices from around the world to support evidence-based peer learning. Fourth, we help governments coordinate their efforts internationally. Our Global Partnership on AI was designed to promote the responsible development and use of artificial intelligence grounded in the OECD’s landmarks.

The GPAI Council, which meets later this morning, will officially welcome our two newest members, Malta and Saudi Arabia, bringing GPAI’s membership to 46 countries across six continents. Beyond governments, we also provide analysis and recommendations to support AI adoption by companies. The reporting framework for the Hiroshima AI Process Code of Conduct, launched at the AI Action Summit in Paris last year, promotes transparency and accountability for responsible AI innovation. We’re now updating that framework to support adoption by small and medium-sized enterprises. And yesterday, we published the OECD due diligence guidance for responsible AI, which supports companies around the world in navigating a growing landscape of rules, regulations, and voluntary frameworks.

And we support people by providing recommendations for governments, business, labor, and other stakeholders to work together and to ensure everyone has the best possible opportunity to participate in and benefit from AI technologies. While AI adoption offers many exciting opportunities, it also carries the risk of job displacement for some. We estimate that, taking the effects of AI into account, about 27% of employment is in occupations that are at the highest risk of automation. It will be particularly important to ensure access to training opportunities for those who need them the most. And on that front, our analysis shows that among adults with low literacy skills, only 23% participate in relevant AI training, compared with 61% of adults with higher literacy skills.

To improve participation in AI training among adults, learning needs to be more flexible, modular, and targeted to individual circumstances and job experiences. For this summit, together with the International Labour Organization, we have developed the Equitable AI Transitions Playbook, which provides examples of policies to update skills frameworks as well as initiatives to upskill and reskill workers for an equitable AI transition. In closing, to fully harness the enormous benefits and opportunities flowing from AI while mitigating and managing some of the associated risks and disruptions, we need to ensure governments, industry, labor and experts work together to support responsible adoption. The OECD will continue to support this cooperation, guided by our AI principles, so that AI

Speaker 2

Thank you so much, Secretary-General of the OECD. We’re very grateful for your remarks. For the next panel on data sovereignty, we have Mr. Sunil Gupta, Managing Director and Chief Executive Officer, Yota Data Services; Nisubo Ongama, COO, Kala; Sonia Vaigando, Founders Associate, Kala Limited; and Ms. Seema Ambasta, Chief Executive Officer, L&T, Vioma. And this session is being moderated by Mr. Orgo Sengupta, Founder and Research Director, WIDI Center for Legal Policy. May I request all the dignitaries to come up on stage, please.

Related Resources: Knowledge base sources related to the discussion topics (18)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Additional Context (high)

“The OECD estimates that, with strong adoption, AI could raise labour productivity by up to one percentage point each year across OECD and G20 economies over the next decade.”

While the report gives a specific 1 % annual productivity boost, the knowledge base cites a PwC study showing AI-intensive industries experiencing productivity growth nearly five times faster than average, and an IDC forecast that AI could add $19.9 trillion to the global economy by 2030, providing broader context on AI’s economic impact [S42] and [S62].

Additional Context (medium)

“Big‑tech firms alone plan to spend almost three‑quarters of a trillion dollars on AI infrastructure in the current year.”

The knowledge base notes that major technology companies are significantly increasing capital expenditures for AI data centres and that overall AI-related spend could approach $2 trillion over the next 5-10 years, but it does not give the exact $0.75 trillion figure cited in the report [S63] and [S66].

Additional Context (medium)

“61 % of worldwide VC dollars (about US$259 billion) now target AI firms, up from 30 % three years ago, with the United States receiving 75 % of that deal value.”

Industry observations in the knowledge base describe a shift toward leaner AI start-ups and heightened venture-capital activity, but they do not provide the precise 61 % share or the $259 billion amount; the trend is corroborated in a discussion of VC dynamics in AI startups [S78] and big-tech investment patterns [S63].

Additional Context (low)

“The Global Partnership on AI (GPAI) welcomed Malta and Saudi Arabia as its newest members, bringing total membership …”

The knowledge base mentions GPAI’s co-chairmanship by Korea and Singapore and the existence of the OECD AI Policy Observatory, confirming the partnership’s structure, but it does not list Malta or Saudi Arabia as recent members [S69].

Additional Context (medium)

“A new report on the agentic AI landscape highlights that half of surveyed developers intend to use AI agents in their work.”

Separate analysis of AI agents shows rapid growth in interest and adoption, indicating that a sizable proportion of developers are exploring agent-based tools, which aligns with the report’s finding though the exact 50 % figure is not specified in the knowledge base [S65].

External Sources (78)
S1
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — -Mathias Cormann- Secretary General, OECD (Organisation for Economic Co-operation and Development) -Moderator- Role: Ev…
S2
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S3
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S4
S5
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S6
AI will have a significant impact on jobs, the OECD said — According to the Organisation for Economic Co-operation and Development (OECD), more than a quarter of jobs in their mem…
S7
AI startups defy tech downturn with record-breaking investments and growth — For the past two years, many unprofitable tech startups have faced significantchallenges, leading to cost-cutting, merge…
S8
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S9
Deutsche Bank warns on scale of AI spending — Deutsche Bank haswarnedthat surging AI investment is helping to prop up US economic growth. Analysts say that broader sp…
S10
Policymaker’s Guide to International AI Safety Coordination — OECD Secretary General Mathias Cormann emphasized that trust is built through inclusion and objective evidence. He ident…
S11
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Nobuhisa Nishigata:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ….
S12
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatoryreleaseda beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S13
Building Sovereign and Responsible AI Beyond Proof of Concepts — It could be trust in terms of the impacts that it will have on people and people’s lives. It could be trust in terms of …
S14
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — During a session focused on the impact of digitalisation on employment, experts from the International Labour Organisati…
S15
UK schools lag in providing access to AI learning tools — A newstudy conducted by GoStudenthasuncovereda significant technological gap in European classrooms,including the UK, wh…
S16
Judiciary engagement — Development | Capacity development International Initiatives and Guidelines There is a significant gap between AI impl…
S17
S18
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S19
Comprehensive Report: Preventing Jobless Growth in the Age of AI — AI democratizes access to expertise and disproportionately benefits lower-skilled workers by providing them with capabil…
S20
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — AI is driving exceptional economic growth in the United States, with economists predicting 3-4% growth and techno-econom…
S22
Open Forum #30 High Level Review of AI Governance Including the Discussion — Lucia Russo: Thank you, Yoichi. Good morning and thank you my fellow panelists for this interesting discussion. As Yoich…
S23
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Cedric Sabbah:Okay. I think there’s a lot to unpack in everything. But we’ll have the opportunity to continue to delve i…
S24
Shadow AI and poor governance fuel growing cyber risks, IBM warns — Many organisations racing to adopt AI arefailing to implement adequate security and governance controls, according to IB…
S25
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — ### Country Experiences and Perspectives ### OECD Research Findings – A repository of best practices from comparable c…
S26
Empowering Workers in the Age of AI — The magnitude of the skills challenge is substantial. According to UNESCO research cited by Lataix, 9 out of 10 jobs wil…
S27
The impact of AI on jobs and workforce — The ILO’s webinar was triggered by the recent impact of ChatGPT on our society and jobs. OpenAI’s ChatGPT, in particular…
S28
WS #100 Integrating the Global South in Global AI Governance — Salma Alkhoudi: So this slide is probably well before I get to the slide, just really loud. Is this good? Okay. I’m Se…
S29
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S30
Global AI Policy Framework: International Cooperation and Historical Perspectives — The discussion revealed both shared concerns and different approaches to addressing them. Speakers generally agreed on t…
S31
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — The discussion revealed relatively low levels of fundamental disagreement, with most differences centered on implementat…
S32
Why science metters in global AI governance — The discussion maintained a consistently serious, collaborative, and optimistic tone throughout. Speakers emphasized urg…
S33
What policy levers can bridge the AI divide? — Hubert Vargas Picado: and we go to His Excellency, your title has innovation, can you tell us more about what are the be…
S34
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — ### OECD and UNESCO Frameworks – **Audrey Plonk** – Deputy Director, STI OECD (joined virtually) 2. Ensure local data …
S36
The Challenges of Data Governance in a Multilateral World — During the discussion on data governance, several speakers highlighted the importance of adopting a multistakeholder app…
S37
Artificial Intelligence &amp; Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S38
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Fink acknowledged that while some jobs may be displaced, new opportunities are simultaneously created. Both speakers agr…
S39
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — AI is driving exceptional economic growth in the United States, with economists predicting 3-4% growth and techno-econom…
S40
From Innovation to Impact_ Bringing AI to the Public — Sharma’s central thesis positions AI not as a threat to employment but as a productivity multiplier that will enable Ind…
S41
S42
AI drives productivity surge in certain industries, report shows — A recent PwC (PricewaterhouseCoopers International Limited) report highlights that sectors of the global economy with high…
S43
Open Forum #30 High Level Review of AI Governance Including the Discussion — Lucia Russo: Thank you, Yoichi. Good morning and thank you my fellow panelists for this interesting discussion. As Yoich…
S44
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — The OECD is providing comprehensive support through data tracking, policy benchmarking tools, incident reporting framewo…
S45
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Cedric Sabbah: Okay. I think there’s a lot to unpack in everything. But we’ll have the opportunity to continue to delve i…
S46
OECD releases AI Incidents Monitor to address AI challenges with evidence-based policies — The OECD.AI Observatory released a beta version of the AI Incidents Monitor (AIM). Designed by the OECD.AI Observatory, th…
S47
AI Safety at the Global Level Insights from Digital Ministers Of — The report uses OECD scenarios for evidence-informed forecasting, providing policymakers with scientifically grounded pr…
S48
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the Security Council, the discussions centered around the concept of accidental risks associa…
S49
Building Sovereign and Responsible AI Beyond Proof of Concepts — It could be trust in terms of the impacts that it will have on people and people’s lives. It could be trust in terms of …
S50
Shadow AI and poor governance fuel growing cyber risks, IBM warns — Many organisations racing to adopt AI are failing to implement adequate security and governance controls, according to IB…
S51
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — – **OECD AI Principles Implementation Toolkit Development**: A collaborative initiative led by Costa Rica to create prac…
S52
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Robert Opp: Thanks so much, Galia. And the time is racing by. I can’t believe it. We have about 15 minutes left in this s…
S53
Empowering Workers in the Age of AI — The magnitude of the skills challenge is substantial. According to UNESCO research cited by Lataix, 9 out of 10 jobs wil…
S54
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — During a session focused on the impact of digitalisation on employment, experts from the International Labour Organisati…
S55
The impact of AI on jobs and workforce — The ILO’s webinar was triggered by the recent impact of ChatGPT on our society and jobs. OpenAI’s ChatGPT, in particular…
S56
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S57
The Global Economic Outlook — Panelists emphasized the need to rebuild optimism and trust among populations feeling economically insecure. They discus…
S58
Multi-stakeholder Discussion on issues about Generative AI — He believes these applications have the potential to improve society and drive economic development. Maruyama himself i…
S59
Keynote-HE Emmanuel Macron — Artificial intelligence | Social and economic development
S60
World Economic Forum Panel: Sovereignty and Interconnectedness in the Modern Economy — Economic Growth and Market Confidence Economic | Infrastructure Tooze suggests that the convergence of artificial inte…
S61
Who Benefits from Augmentation? / DAVOS 2025 — Kumar argues that AI can lead to increased productivity and the creation of new job opportunities. He suggests that this…
S62
AI set to drive trillion-dollar growth by 2030 — AI is forecast to add a cumulative $19.9 trillion to the global economy by 2030, according to a recent IDC study. This gr…
S63
Big Tech boosts AI investments amid Wall Street pressure — Big technology firms, including Microsoft and Meta, are significantly increasing their investments in AI data centres to…
S64
WS #462 Bridging the Compute Divide a Global Alliance for AI — Elena Estavillo Flores: Yes, thank you. I was also reflecting on the question because when we ask if we have enough acce…
S65
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S66
Waves of infrastructure Open Systems Open Source Open Cloud — “AI is going to impact 95 % of work”[1]. “…in the next 5 to 10 years will be almost $2 trillion of spend”[2].
S67
Advocacy to Action: Engaging Policymakers on Digital Rights | IGF 2023 — Public accountability of policymakers is stressed as a crucial component of policy-making
S69
https://dig.watch/event/india-ai-impact-summit-2026/policymakers-guide-to-international-ai-safety-coordination — And we’ve had just earlier the meeting of the Global Partnership on AI co -chaired by Korea and Singapore. We’ve got the…
S70
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-democracy_-reimagining-governance-in-the-age-of-intelligence — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S71
WSIS Action Line C7 E-learning — Tawfik Jelassi, UNESCO’s Assistant Director General for Communication and Information, delivered the keynote address est…
S72
Artificial intelligence (AI) and cyber diplomacy — A key point raised was the need for clarity in defining and discussing AI governance. This encompasses various elements,…
S73
AI Policy Summit Opening Remarks: Discussion Report — “The only way you could see that he was communicating with us is that there was a little bit of a tear coming out of his…
S74
Creating Eco-friendly Policy System for Emerging Technology — Ingrid Volkmer: Could you bring up my slides, please? Okay. Hi, everyone. My name is Ingrid Volkmar. I’m a professor at t…
S75
The Global Power Shift India’s Rise in AI & Semiconductors — Joining us is Professor Vivek Kumar Singh, Senior… advisor on science and technology at NITI IO. Professor Singh plays…
S76
Global Perspectives on Openness and Trust in AI — And then exclusive partnerships and the systems being opaque. So those were the things identified in the market study. A…
S77
Multistakeholder Partnerships for Thriving AI Ecosystems — Robert Opp opened the discussion by highlighting UNDP’s concern that without responsible deployment, AI could exacerbate…
S78
AI startups in Silicon Valley rethink VC funding with leaner teams and strategic growth — In Silicon Valley, a notable trend is emerging as AI startups achieve significant revenue with leaner teams, challenging t…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Mathias Cormann
10 arguments, 136 words per minute, 1030 words, 452 seconds
Argument 1
AI can boost labor productivity by up to 1 % per year across OECD and G20 countries (Mathias Cormann)
EXPLANATION
The OECD estimates that strong adoption of artificial intelligence could raise labour productivity by roughly one percentage point each year. This boost is projected to occur across both OECD and G20 economies over the next decade.
EVIDENCE
Mathias cited the OECD’s projection that, with a strong level of AI adoption, productivity could increase by up to one percent annually across OECD and G20 countries over the next ten years [9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The OECD projection of a 1 % annual productivity gain from AI adoption is cited in Cormann’s keynote and aligns with OECD estimates [S1].
MAJOR DISCUSSION POINT
AI productivity boost
Argument 2
Nearly three‑quarters of a trillion dollars in AI infrastructure investment planned by big‑tech firms this year (Mathias Cormann)
EXPLANATION
Big‑technology companies are slated to invest an enormous sum in AI infrastructure during the current year. The scale of this investment underscores the rapid commercialization of AI technologies.
EVIDENCE
He noted that almost three-quarters of a trillion dollars in AI infrastructure investment is planned by big-tech firms for the year alone, highlighting the magnitude of private-sector spending [10].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI spending highlight massive private-sector outlays, with forecasts of up to $4 trillion in AI data-centre investment by 2030, underscoring the scale of current investment plans [S9].
MAJOR DISCUSSION POINT
AI investment scale
Argument 3
OECD provides evidence‑based analysis, policy guidance, and pro‑innovation, pro‑adoption, pro‑safety AI policies (Mathias Cormann)
EXPLANATION
The Organisation works with governments to supply data‑driven insights and policy recommendations that encourage responsible AI development. Its approach balances fostering innovation with ensuring safety and societal benefits.
EVIDENCE
He described the OECD’s use of unique data and evidence-based analysis to promote responsible innovation and adoption while managing risks, and its role in helping policymakers develop pro-innovation, pro-adoption, and pro-safety AI policies [4][13].
MAJOR DISCUSSION POINT
OECD policy support for AI
AGREED WITH
Speaker 2
Argument 4
OECD tracks global AI ecosystem data – compute capacity, venture‑capital flows, and developer intentions – to inform industrial strategies (Mathias Cormann)
EXPLANATION
By monitoring where AI compute resources are located, how venture capital is allocated, and what developers plan to build, the OECD equips countries with intelligence for industrial planning and supply‑chain security. This data‑driven approach helps shape national AI strategies.
EVIDENCE
He listed several tracking activities: mapping public AI compute capacity for industrial strategy design, monitoring global AI investment with 61 % of venture capital now flowing to AI firms (up from 30 % three years earlier), and reporting on developers’ intentions to use AI agents, all drawn from recent OECD analyses [14-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The OECD’s AI Observatory monitors compute resources, VC investment (now 61 % to AI firms) and developer plans, as described in the keynote briefing [S1].
MAJOR DISCUSSION POINT
AI ecosystem monitoring
Argument 5
OECD monitors AI incidents, noting a rise from 92 to 324 reported incidents per month (2022‑2025) and offers a common reporting framework (Mathias Cormann)
EXPLANATION
The organisation collects data on AI‑related accidents and hazards, showing a sharp increase in reported cases over a three‑year span. It also provides a standardized framework to improve consistency in incident reporting worldwide.
EVIDENCE
He presented data indicating that monthly AI incidents reported by the media grew from an average of 92 in 2022 to 324 in 2025, and explained that the OECD’s common framework promotes global consistency in AI incident reporting [21-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The launch of the AI Incidents Monitor, which tracks media-reported AI-related incidents worldwide, provides the data underpinning the reported increase and the common framework [S12]; further discussion of the Observatory’s role appears in [S13].
MAJOR DISCUSSION POINT
AI risk monitoring
Argument 6
OECD offers benchmarking tools such as the AI Index and an upcoming interactive toolkit for peer learning (Mathias Cormann)
EXPLANATION
Policymakers can compare their national AI policies against peers using the newly released AI Index, while a forthcoming interactive toolkit will provide a repository of best practices for collaborative learning.
EVIDENCE
He announced the release of the OECD AI Index as an evidence-based tool for assessing policy progress, and previewed an interactive toolkit that will host good-practice examples to support peer learning [24-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The OECD AI Index and a forthcoming interactive toolkit for sharing best practices were announced in Cormann’s keynote as new benchmarking resources for policymakers [S1].
MAJOR DISCUSSION POINT
Policy benchmarking tools
Argument 7
OECD facilitates international coordination through the Global Partnership on AI (GPAI) and related partnerships (Mathias Cormann)
EXPLANATION
The OECD helps align national AI strategies by operating an integrated global partnership that brings together governments and other stakeholders. Recent expansion of the GPA underscores growing multilateral cooperation.
EVIDENCE
He described the integrated global partnership on AI designed to promote responsible development, and noted the recent welcome of Malta and Saudi Arabia, raising GPAI membership to 46 countries across six continents [26-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The expansion of the GPAI to 46 member countries and its role in aligning AI strategies is highlighted in the keynote address [S1].
MAJOR DISCUSSION POINT
International AI coordination
AGREED WITH
Speaker 2
Argument 8
Approximately 27 % of employment is in occupations at highest risk of automation, highlighting the need for upskilling (Mathias Cormann)
EXPLANATION
A sizable share of jobs could be displaced as AI automates tasks, making reskilling and training essential to mitigate labour market disruption. Targeted policies are required to support those most vulnerable.
EVIDENCE
He warned that AI adoption carries a risk of job displacement and quantified that about 27 % of employment lies in occupations with the highest automation risk [34-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
OECD estimates that more than a quarter of jobs are at high risk of automation, reinforcing the 27 % figure and the urgency for reskilling [S6].
MAJOR DISCUSSION POINT
Automation risk to jobs
Argument 9
Only 23 % of adults with low literacy engage in AI training versus 61 % of higher‑literacy adults; training must be flexible and modular (Mathias Cormann)
EXPLANATION
Training participation is uneven, with lower‑literacy adults far less likely to take AI‑related courses. To broaden inclusion, learning programmes need to be adaptable to diverse circumstances and prior experience.
EVIDENCE
He presented data showing that just 23 % of low-literacy adults participate in AI training compared with 61 % of higher-literacy adults, and argued that training should become more flexible, modular, and tailored to individual needs [37-38].
MAJOR DISCUSSION POINT
Inclusive AI skills development
Argument 10
The OECD, together with the ILO, released the Equitable AI Transitions Playbook to guide policies for reskilling and inclusive AI adoption (Mathias Cormann)
EXPLANATION
The joint OECD‑ILO playbook offers concrete policy examples to update skill frameworks and support upskilling and reskilling initiatives. It aims to ensure a fair AI transition that benefits all workers.
EVIDENCE
He explained that, in collaboration with the International Labour Organization, the OECD produced the Equitable AI Transitions Playbook, which provides policy examples for updating skills frameworks and up-/reskilling workers for an equitable AI transition [39-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The joint OECD-ILO Equitable AI Transitions Playbook is referenced in discussions on AI’s impact on employment quality and policy responses [S14].
MAJOR DISCUSSION POINT
Equitable AI transition guidance
Speaker 2
2 arguments, 111 words per minute, 91 words, 48 seconds
Argument 1
Expression of gratitude to the OECD Secretary‑General and introduction of the upcoming data‑sovereignty panel (Speaker 2)
EXPLANATION
The speaker thanks the OECD Secretary‑General for the opening remarks and signals the transition to the next session focused on data sovereignty. This sets the agenda for the forthcoming discussion.
EVIDENCE
He thanked the OECD Secretary-General and expressed appreciation for the remarks that had just been delivered [41-42].
MAJOR DISCUSSION POINT
Thank‑you and session handover
AGREED WITH
Mathias Cormann
Argument 2
Announcement of panel participants and invitation for dignitaries to join the stage (Speaker 2)
EXPLANATION
The moderator lists the experts who will take part in the data‑sovereignty panel and calls on the dignitaries to come forward. This provides the audience with the panel composition and logistical direction.
EVIDENCE
He introduced the panelists, Sunil Gupta, Nisubo Ongama, and Seema Ambasta, along with the moderator Orgo Sengupta, then requested the dignitaries to come up on stage [43-47].
MAJOR DISCUSSION POINT
Panel introduction
AGREED WITH
Mathias Cormann
Agreements
Agreement Points
Both speakers express appreciation for the OECD’s role in shaping AI policy and fostering international dialogue.
Speakers: Mathias Cormann, Speaker 2
OECD provides evidence‑based analysis, policy guidance, and pro‑innovation, pro‑adoption, pro‑safety AI policies (Mathias Cormann)
Expression of gratitude to the OECD Secretary‑General and introduction of the upcoming data‑sovereignty panel (Speaker 2)
Mathias highlights the OECD’s evidence-based support for AI policy development [4][13], while Speaker 2 thanks the OECD Secretary-General for the opening remarks [41-42], showing a shared recognition of the Organisation’s contribution.
POLICY CONTEXT (KNOWLEDGE BASE)
The OECD is recognized as a key architect of international AI policy, with its AI Principles and the OECD AI Policy Observatory providing a framework for cross-border dialogue; this aligns with the OECD-UNESCO collaborative efforts highlighted in the Open Forum discussion on AI governance [S34].
Both speakers stress the importance of multilateral/​multistakeholder collaboration in AI and data governance.
Speakers: Mathias Cormann, Speaker 2
OECD facilitates international coordination through the Global Partnership on AI (GPAI) and related partnerships (Mathias Cormann)
Announcement of panel participants and invitation for dignitaries to join the stage (Speaker 2)
Mathias points to GPAI’s expansion to 46 countries as evidence of global AI coordination [26-28], while Speaker 2 introduces a diverse panel of experts and calls dignitaries to the stage, underscoring a collaborative approach to the next discussion [43-47].
POLICY CONTEXT (KNOWLEDGE BASE)
Multilateral and multistakeholder approaches are repeatedly emphasized in global AI governance, reflecting UN-led calls for inclusive, evidence-based decision-making [S32] and the importance of diverse stakeholder input in policy formulation noted in AI governance forums [S35]; similar emphasis appears in discussions on data governance collaboration [S36].
Similar Viewpoints
Both recognise the OECD as a central, trusted hub that enables policy makers and stakeholders to engage constructively on AI and data issues [4][13][41-42].
Speakers: Mathias Cormann, Speaker 2
OECD provides evidence‑based analysis, policy guidance, and pro‑innovation, pro‑adoption, pro‑safety AI policies (Mathias Cormann)
Expression of gratitude to the OECD Secretary‑General and introduction of the upcoming data‑sovereignty panel (Speaker 2)
Both emphasize that effective AI governance requires broad, cross‑border participation and dialogue among governments, industry and civil society [26-28][43-47].
Speakers: Mathias Cormann, Speaker 2
OECD facilitates international coordination through the Global Partnership on AI (GPAI) and related partnerships (Mathias Cormann)
Announcement of panel participants and invitation for dignitaries to join the stage (Speaker 2)
Unexpected Consensus
Inclusive participation in AI development and training versus inclusive representation in the data‑sovereignty panel.
Speakers: Mathias Cormann, Speaker 2
Only 23 % of adults with low literacy engage in AI training versus 61 % of higher‑literacy adults; training must be flexible and modular (Mathias Cormann)
Announcement of panel participants and invitation for dignitaries to join the stage (Speaker 2)
While Mathias focuses on inclusive AI skills development for low-literacy adults [34-38], Speaker 2’s introduction of a multi-expert panel reflects an unexpected alignment on the broader principle that AI-related initiatives should be inclusive of diverse stakeholders, even though the two speakers address different domains (training vs. governance).
POLICY CONTEXT (KNOWLEDGE BASE)
Recent AI governance dialogues have foregrounded inclusivity, with the AI Governance Dialogue reporting 76% participation from developing countries and stressing broad representation, while other forums call for inclusive representation in data-sovereignty discussions, echoing the broader push for equitable AI policy frameworks [S29][S30].
Overall Assessment

The two speakers show clear convergence on two fronts: (1) mutual appreciation of the OECD’s evidence‑based, policy‑support role in AI, and (2) a shared belief that international, multistakeholder collaboration is essential for responsible AI and data governance. Beyond these, an unanticipated overlap emerges around the theme of inclusivity—both in AI skills development and in the composition of forthcoming discussion panels.

Moderate consensus. The agreement is limited to high‑level acknowledgments of the OECD’s role and the need for cooperation, which bodes well for coordinated policy action but leaves substantive policy details (e.g., specific AI risk mitigation measures or training program designs) unaddressed in the short exchange.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only a keynote by Mathias Cormann and a brief hand‑over by Speaker 2. No opposing viewpoints or substantive debates are presented; the two speakers are aligned in acknowledging the OECD’s contributions and in moving the agenda forward.

Minimal – the interaction is essentially cooperative, implying that any policy disagreements are not evident in this segment and are unlikely to impede the broader discussion on AI and data governance.

Partial Agreements
Both speakers acknowledge the relevance of the OECD’s work on AI and data governance, with Mathias highlighting AI’s productivity potential and Speaker 2 thanking the OECD Secretary‑General for those remarks and moving to the next session on data sovereignty, indicating shared recognition of the OECD’s role [1-4][41-42].
Speakers: Mathias Cormann, Speaker 2
AI can boost labor productivity by up to 1 % per year across OECD and G20 countries (Mathias Cormann)
Expression of gratitude to the OECD Secretary‑General and introduction of the upcoming data‑sovereignty panel (Speaker 2)
Takeaways
Key takeaways
AI could increase labor productivity by up to 1 % per year across OECD and G20 economies.
Big‑tech firms plan to invest nearly $750 billion in AI infrastructure this year.
OECD provides evidence‑based analysis, policy guidance, and tools to support pro‑innovation, pro‑adoption, and pro‑safety AI policies.
OECD tracks global AI ecosystem data (compute capacity, venture‑capital flows, developer intentions) to inform industrial strategies and supply‑chain security.
AI incident reporting has risen sharply, from 92 to 324 incidents per month between 2022 and 2025; OECD offers a common reporting framework.
OECD has released the AI Index and will launch an interactive toolkit for peer learning and benchmarking.
International coordination is facilitated through the Global Partnership on AI (GPAI) and related partnerships.
Around 27 % of jobs are in occupations at highest risk of automation; upskilling and reskilling are essential.
Only 23 % of adults with low literacy engage in AI training versus 61 % of higher‑literacy adults; training must be flexible, modular, and targeted.
OECD and ILO released the Equitable AI Transitions Playbook to guide inclusive reskilling policies.
Resolutions and action items
Launch an interactive AI policy toolkit later this year.
Update the OECD AI reporting framework to support adoption by small and medium‑sized enterprises.
Publish OECD due‑diligence guidance for responsible AI to help companies navigate regulations.
Continue supporting the Global Partnership on AI to promote responsible AI development.
Promote the Equitable AI Transitions Playbook with governments, industry, and labor groups for inclusive upskilling.
Unresolved issues
Specific mechanisms for increasing AI training participation among low‑literacy adults were not detailed.
Implementation pathways for the AI Index and interactive toolkit across diverse national contexts remain unspecified.
Details of how the upcoming data‑sovereignty panel will address cross‑border data governance were not covered.
Suggested compromises
None identified
Thought Provoking Comments
AI could boost labor productivity by up to one percentage point every year across OECD and G20 countries over the next decade.
Quantifies the macro‑economic benefit of AI in a concrete, forward‑looking metric, moving the conversation from abstract optimism to measurable impact.
Set the tone for the discussion of AI’s transformative potential and prompted subsequent references to investment flows and the need for supportive public policy.
Speaker: Mathias Cormann
61 % of all venture capital investment worldwide – about $259 billion US – now goes to AI firms, up from just 30 % three years ago.
Highlights the rapid reallocation of capital toward AI, underscoring the speed of market change and the urgency for policymakers to keep pace.
Shifted the conversation from potential benefits to the scale of financial commitment, leading to the later emphasis on tracking AI compute capacity and investment trends.
Speaker: Mathias Cormann
Between 2022 and 2025 the number of AI incidents and hazards reported by the media increased dramatically, from 92 to 324 per month on average.
Introduces a stark counter‑balance to the earlier optimism, drawing attention to emerging safety and governance challenges.
Created a turning point where the dialogue moved from growth narratives to risk management, paving the way for discussion of the OECD common framework for incident reporting.
Speaker: Mathias Cormann
About 27 % of employment is in occupations that are at the highest risk of automation.
Puts a human‑centric perspective on the technology debate, quantifying the scale of potential job displacement.
Prompted a deeper dive into equity concerns, leading to the mention of training gaps and the Equitable AI Transitions Playbook.
Speaker: Mathias Cormann
Among adults with low literacy skills, only 23 % participate in relevant AI training, compared with 61 % of adults with higher literacy skills.
Exposes a concrete inequality in access to AI upskilling, linking socioeconomic status to future labor market outcomes.
Steered the conversation toward policy solutions—flexible, modular learning and the OECD‑ILO Playbook—highlighting the need for inclusive policy design.
Speaker: Mathias Cormann
Learning needs to be more flexible, modular, and targeted to individual circumstances and job experiences.
Offers a specific, actionable recommendation that moves beyond problem identification to a potential remedy.
Provided a constructive bridge between the risk narrative and policy response, influencing the subsequent emphasis on collaborative frameworks and best‑practice toolkits.
Speaker: Mathias Cormann
We have developed the Equitable AI Transitions Playbook with the International Labour Organization, which provides examples of policies to update skills frameworks and initiatives to up‑skill and reskill workers for an equitable AI transition.
Demonstrates concrete international cooperation and a tangible resource, showing how the OECD is translating analysis into actionable guidance.
Reinforced the theme of coordinated, multi‑stakeholder action and set the stage for the upcoming panel on data sovereignty, signalling a shift from high‑level analysis to implementation focus.
Speaker: Mathias Cormann
Effective public policy is essential to allow AI to reach its full potential. Indeed, the foundational technologies that made this technological revolution possible were very much shaped and supported by public policy, from internet connectivity to semiconductor supply chains.
Frames AI development within a historical policy context, reminding participants that technology breakthroughs are rarely market‑only phenomena.
Anchored the entire speech in the premise that policy design, not just technology, drives outcomes, influencing the audience to view subsequent data (investment, incidents, training) through a policy‑lens.
Speaker: Mathias Cormann
Overall Assessment

The discussion was driven by a series of strategically placed insights from Mathias Cormann that alternated between highlighting AI’s massive economic promise and exposing its emerging risks and inequities. Each pivot—productivity gains, capital flows, incident spikes, job‑displacement figures, and training gaps—served as a turning point that broadened the conversation from pure optimism to a nuanced, policy‑centric dialogue. By coupling hard data with concrete policy tools (the OECD AI Index, incident‑reporting framework, and the Equitable AI Transitions Playbook), the speaker transformed abstract trends into actionable agendas, setting the stage for the subsequent panel on data sovereignty and signaling a shift from analysis to implementation.

Follow-up Questions
How can detailed data on the global distribution of public AI compute capacity be gathered and utilized to help countries design industrial strategies and enhance AI supply chain security?
Understanding compute capacity distribution is crucial for national AI strategies and for mitigating supply chain risks.
Speaker: Mathias Cormann
What are the drivers behind the sharp rise in AI incidents and hazards (from 92 to 324 per month) and how effective is the OECD common framework for reporting AI incidents in improving safety?
Identifying causes and evaluating reporting mechanisms are essential to manage emerging AI risks.
Speaker: Mathias Cormann
How will the OECD AI Index and the upcoming interactive toolkit influence policy benchmarking and peer learning among member countries?
Assessing the impact of these tools will determine their usefulness for aligning AI policies internationally.
Speaker: Mathias Cormann
What specific challenges do small and medium‑sized enterprises (SMEs) face in adopting AI, and how can the updated Hiroshima AI Process Code of Conduct support their responsible innovation?
SMEs represent a large part of the economy; tailored guidance is needed to ensure safe AI uptake.
Speaker: Mathias Cormann
How effective is the OECD due‑diligence guidance for responsible AI across different regulatory environments and industry sectors?
Evaluating guidance uptake will reveal gaps and inform future refinements of responsible‑AI frameworks.
Speaker: Mathias Cormann
What policies and measures can most effectively mitigate the risk that 27 % of occupations face highest automation risk, and how can governments ensure equitable outcomes?
Targeted interventions are required to prevent large‑scale job displacement and promote inclusive growth.
Speaker: Mathias Cormann
Why is AI training participation markedly lower among adults with low literacy (23 % vs 61 % for higher‑literacy adults), and what barriers need to be addressed?
Identifying barriers is key to designing inclusive training programs that reach vulnerable groups.
Speaker: Mathias Cormann
What design features (flexibility, modularity, personalization) are most effective in AI training programmes to increase uptake among diverse adult learners?
Evidence‑based training models can improve skill acquisition and support AI transition.
Speaker: Mathias Cormann
How is the Equitable AI Transitions Playbook being implemented by governments, industry, and labour groups, and what measurable outcomes are emerging?
Monitoring implementation will show whether the playbook achieves its goal of equitable AI transitions.
Speaker: Mathias Cormann
What are the implications of the concentration of AI venture‑capital investment in the United States (75 % of global deal value) for global AI competitiveness and policy coordination?
Understanding investment patterns helps policymakers address potential imbalances and foster a balanced AI ecosystem.
Speaker: Mathias Cormann
What progress is needed on security, privacy, and accuracy of AI agents to support broader adoption, as highlighted by the recent report on the generative AI landscape?
Addressing these technical challenges is critical for trust and widespread deployment of AI agents.
Speaker: Mathias Cormann

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Inclusive AI_ Why Linguistic Diversity Matters


Session at a glance: Summary, keypoints, and speakers overview

Summary

Sushant Kumar opened the session by framing the discussion around “personal, local and multilingual AI” and announcing a joint effort between Bhashini and Current AI to showcase an open-source, multilingual, handheld device that preserves privacy and works without connectivity [1-4]. After a short video, Ayah Bdeir introduced the demo team, noting that the prototype was built in about five weeks through a close partnership between Current AI engineers and Bhashini’s model team [34-38]. Andrew Tergis explained that the device can run a full pipeline (automatic speech recognition, neural machine translation, a large language model, and text-to-speech) entirely on-device, demonstrated with a Hindi query answered in the user’s language [53-62]. Shalindra Pal Singh added that the models were heavily quantized to fit on the hardware without sacrificing accuracy, and that four to five models currently run offline on a Jetson platform [70-72][88-90][98-101].


Amitabh Nag described Bhashini’s origin in 2023, motivated by the difficulty of using non-native languages in school and the need to preserve linguistic nuance, leading to the creation of a corpus for many Indian languages [108-110][118-119]. He noted that the team now serves about 15 million inferences daily on a 200-GPU system and is expanding language coverage to 36 languages, including recently digitised tribal languages such as Bheeli [121-128][170-176]. Looking ahead, Nag highlighted four pillars for inclusive AI: a small offline form factor, broader language breadth, deeper model enrichment (e.g., adding place-name glossaries), and continuous contextualisation of data [163-176][180-186].


Aya Bdeir expressed concern that embodied AI devices from large tech firms are opaque, often trained on Western languages, and could lock users into proprietary stacks, whereas open-source hardware can democratise innovation similar to Linux [194-210]. She outlined hopeful trajectories, including cheaper, smaller devices, mesh networking for distributed inference, and specialised applications such as agricultural assistants or privacy-preserving toys [215-224][229-232]. The panel also debated cultural data sovereignty, with participants stressing the need for community involvement, consent, and reciprocity when data is used for AI, especially for health or tribal knowledge [311-318][326-332].


Anne Bouverot suggested that policy mechanisms, like France’s quotas for French content in media, could fund and protect cultural creation while supporting AI development [263-270]. The discussion concluded with the announcement of the India AI Innovation Challenge, an open-source competition to build on the prototype, offering prize funding from both Bhashini and Current AI and encouraging global collaboration, including with France [409-422][391-398]. Sushant’s closing remarks underscored that open, offline, multilingual AI hardware, coupled with collaborative governance of data and cultural assets, can empower diverse communities and drive inclusive AI innovation [233-236].


Keypoints


Major discussion points


Launch of a personal, local, multilingual AI hardware prototype – The session introduced an open-source, handheld device that runs AI offline, preserves privacy and supports many languages, followed by a live demo showing speech-to-text, translation, LLM inference and text-to-speech all on-device [4-9][52-58][88-91].


Collaboration model between Bhashini and Current AI – The project was built in a five-to-six-week partnership that emphasizes co-creation, open-source release as a public good, and a repeatable “identify-gap → co-develop → open-source” workflow [34-42][135-142].


Multilingual and cultural inclusion as a core goal – Speakers highlighted the need to serve mother-tongues and tribal languages (e.g., Bheeli), to avoid linguistic bias, and to preserve cultural nuance; the device currently covers 22 spoken languages and aims to expand to 36 [108-112][163-176]. Concerns were raised about “embodied AI” that is centrally controlled and trained on Western languages, while hope was expressed that open hardware can democratise access and enable diverse use-cases [194-206][214-222].


Future visions and application scenarios – The prototype can power vision-impaired assistants, agricultural tools, tourism guides, and can be networked in a mesh or scaled to a micro-data-center; the hardware is platform-agnostic (Jetson-based now but portable to other chips) and is intended to spark endless community-driven innovations [58-62][215-232][89-90].


Data sovereignty, reciprocity and governance – The panel debated who owns the data used to train models, the need for community-level standards, artist compensation, and the broader concept of AI sovereignty at individual, community and national levels, arguing for privacy-preserving, trusted third-party mechanisms and a “complete sovereign AI stack” [299-307][314-324][368-387].


Overall purpose / goal


The discussion aimed to showcase a tangible open-source AI device that makes advanced, multilingual AI accessible offline, to illustrate how cross-sector collaboration (Bhashini, Current AI, Kalpa Impact) can accelerate such public-good technology, and to launch the India AI Innovation Challenge that invites developers to build on the prototype for real-world, culturally relevant solutions.


Overall tone


The conversation began with enthusiastic optimism about “making AI work for everyone” and celebrating the prototype’s capabilities. It shifted to reflective, personal anecdotes about language loss, then to cautious concern over centralized, embodied AI and data ownership. Throughout, the tone remained collaborative and hopeful, ending on a forward-looking, rally-the-community call-to-action as the speakers announced the innovation challenge and future partnerships.


Speakers

Sushant Kumar


– Areas of expertise: AI moderation, multilingual AI, public-interest AI


– Role: Session moderator / host


– Title: –


– Sources: [S1]


Announcer


– Areas of expertise: –


– Role: Event announcer / moderator


– Title: –


– Sources: [S3], [S4], [S5]


Anne Bouverot


– Areas of expertise: AI policy, digital diplomacy, telecommunications


– Role: Special Envoy for Artificial Intelligence, France; Chair of the board of École Normale Supérieure


– Title: Former Director General of the GSMA


– Sources: [S6]


Ayah Bdeir


– Areas of expertise: Open-source hardware, multilingual AI, entrepreneurship


– Role: CEO of Current AI; Engineer & entrepreneur with 20 years experience building open-source tech infrastructure


– Title: –


– Sources: [S8]


Amitabh Nag


– Areas of expertise: Linguistic AI, multilingual language models, large-scale inference systems


– Role: CEO of Bhashini


– Title: –


– Sources: [S9]


Martin Tisne


– Areas of expertise: AI governance, democratic AI values, collaborative AI development


– Role: Chair of Current AI; Lead of the AI Collaborative organization


– Title: –


– Sources: [S11], [S12]


Shalindra Pal Singh


– Areas of expertise: Integration of multilingual models, AI hardware-software co-design


– Role: General Manager at Bhashini; collaborator on device integration


– Title: –


– Sources: [S15]


Abhishek Singh


– Areas of expertise: Public-interest AI policy, digital sovereignty, government-industry collaboration


– Role: Under-Secretary, Ministry of Electronics and Information Technology (India)


– Title: –


– Sources: [S16]


Device


– Areas of expertise: On-device AI inference, multimodal processing (ASR, MMT, LLM, TTS)


– Role: AI hardware prototype that responded to queries


– Title: –


– Sources: – (information from transcript)


Andrew Tergis


– Areas of expertise: Embedded AI engineering, hardware prototyping, model quantization


– Role: Lead engineer on the Current AI side of the project


– Title: –


– Sources: [S22]




Full session report: Comprehensive analysis and detailed insights

The session opened with Sushant Kumar asking how a paradigm can be built so that artificial intelligence works for everyone and stating that this was the purpose of the gathering [1-2]. He introduced the theme “The case for personal, local and multilingual AI” and announced a joint effort between Bhashini and Current AI, coordinated by Kalpa Impact, to showcase a “seminal open-source AI hardware device” that is multilingual, handheld, privacy-preserving and capable of operating without connectivity [3-5]. After a brief outline of the agenda, he promised a video that would “capture our imagination of what this product would look like” and a live demonstration by the makers [6-12]. The video underscored that India’s AI journey has moved beyond pilots to “populations’ reach, clear use cases, last-mile delivery” and a vision of AI that is not governed by any single country or corporation [13-20].


Following the video, Sushant invited Ayah Bdeir, CEO of Current AI, to lead the product demonstration [22-24]. Ayah briefly paused the session to organise a group photo and then introduced the demo team: Andrew Tergis, the lead engineer from Current AI, and Shalindra Pal Singh, a general manager at Bhashini who had worked closely on integrating Bhashini’s models [30-33]. She highlighted that the prototype had been built in a remarkably short sprint of five to six weeks, a timeline made possible by pre-existing partnership discussions, and noted her admiration for Bhashini’s work on linguistic diversity and its 250 models [34-38]. Ayah framed the collaboration as a model of “identify-gap → co-develop → open-source” that results in a public-good stack for open AI, emphasizing that Current AI seeks to learn partners’ priorities and release jointly built technology openly [70-78].


Andrew then described the prototype as an “open AI inference device” that differs from other conference products by being deliberately general-purpose: any user can connect, upload models and run inference locally on a handheld unit [53-56]. He demonstrated a flagship application, co-created with Bhashini for vision-impaired users, where a button press triggers a spoken question in the user’s native language; the device captures audio, transcribes it via automatic speech recognition (ASR), translates it to English, feeds the text and an image to a large language model (LLM), translates the answer back, and finally synthesises speech in the original language [58-66]. The entire pipeline (ASR, neural machine translation, LLM inference and text-to-speech) runs on-device, illustrating the feasibility of full-stack offline AI [55-62].
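The stage-by-stage flow described above can be sketched in a few lines of code. This is an illustrative sketch only: the function names and placeholder outputs below are hypothetical stand-ins, not Bhashini or Current AI APIs, and a real build would load quantized models for each stage and run them on the device.

```python
# Sketch of the "ask in your language, answer in your language" loop:
# speech -> text (ASR) -> English (MT) -> LLM with image -> back-translation
# (MT) -> speech (TTS). All stage functions are stubs for illustration.

def asr(audio: bytes, lang: str) -> str:
    """Stub: speech recognition in the user's language."""
    return "मेज़ पर क्या है?"  # placeholder transcript ("What is on the table?")

def translate(text: str, src: str, dst: str) -> str:
    """Stub: machine translation between languages."""
    return "What is on the table?" if dst == "en" else "मेज़ पर कैंडी है।"

def llm_answer(question: str, image: bytes) -> str:
    """Stub: multimodal LLM answering a question about the captured image."""
    return "The table has candy."

def tts(text: str, lang: str) -> bytes:
    """Stub: speech synthesis in the user's language."""
    return text.encode("utf-8")

def answer_spoken_question(audio: bytes, image: bytes, lang: str = "hi") -> bytes:
    """Run the full offline loop: spoken question in, spoken answer out."""
    question = asr(audio, lang)                    # speech -> native-language text
    question_en = translate(question, lang, "en")  # native text -> English
    answer_en = llm_answer(question_en, image)     # English question + image -> answer
    answer = translate(answer_en, "en", lang)      # English answer -> native text
    return tts(answer, lang)                       # native text -> speech
```

The point of the sketch is the chaining: each stage's output is the next stage's input, and nothing leaves the device.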


During the live test, a Hindi query was processed: the ASR model converted the spoken input to text, the MMT module translated it, the embedded LLM generated a response, and the TTS module vocalised the answer, all without any cloud connection [70-72][79-82]. In a second demonstration, an English query “What is on this table?” returned the answer “The table has candy wrappers of Twix, Milky Way, and KitKat,” showing the device’s ability to recognise objects and produce brand-level detail [84-86]. Shalindra explained that the models had been heavily quantised to fit the limited hardware, yet the optimisation “reached a point where there is no hit on the accuracy fronts” [71-72]. The prototype currently runs on an NVIDIA Jetson platform but is designed to be processor-agnostic, allowing future deployments on alternative chips while supporting the deployment of any model the community wishes to use [88-91].
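The quantisation trade-off Shalindra describes can be illustrated with a toy example. The sketch below shows plain symmetric int8 post-training quantisation of a weight vector in pure Python; it illustrates the general technique only, as an assumption on my part, since the session does not specify Bhashini's actual quantisation scheme.

```python
# Symmetric per-tensor int8 quantization: store each float32 weight as one
# int8 value plus a single shared scale, cutting storage roughly 4x.

def quantize_int8(weights):
    """Map float weights to int8 codes with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:
        scale = 1.0  # avoid division by zero for an all-zero tensor
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.0, 0.25, 0.0, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding error per weight is bounded by half the quantization step (scale/2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Production toolchains refine this with per-channel scales and calibration data, but the size-versus-accuracy trade-off the speakers mention is the same one visible here.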


Sushant praised the demonstration, noting the logistical challenge of clearing the device through customs and the significance of its offline operation, which meant that “all those queries, all the AI processing was happening on the device” [92-100]. He highlighted that four or five models are already operational on that particular device, a notable achievement for edge hardware [100-102].


Amitabh Nag then provided the backstory of Bhashini, explaining that it was founded in 2023 after he experienced the difficulty of learning in non-native languages at school, which motivated a drive to preserve linguistic nuance and create a corpus for Indian languages [108-110][118-120]. He described the early technical hurdles of building models without existing digital data, the reliance on “brute-force” data collection and collaboration with translators to create a digital corpus, and the subsequent scaling to a system that now handles roughly 15 million inferences per day on a 200-GPU cluster, monitored through real-time dashboards [121-131].
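A quick back-of-envelope puts the reported load in perspective. The even split across GPUs is my assumption for illustration; the session gives only the aggregate figures.

```python
# Reported in the session: ~15 million inferences/day on a 200-GPU cluster.
inferences_per_day = 15_000_000
gpus = 200

per_gpu_per_day = inferences_per_day / gpus        # 75,000 inferences/GPU/day
per_gpu_per_second = per_gpu_per_day / 86_400      # ~0.87 inferences/GPU/sec
```

Under that assumption each GPU sustains under one inference per second on average, though real traffic would be bursty rather than uniform.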


Regarding language coverage, Amitabh reported that the current system supports 22 spoken languages and aims to expand to 36, already having digitised the tribal Bheeli language, which previously lacked a script [170-176]. He outlined four pillars for inclusive AI: (1) a small, offline-first form factor that can reach the last mile; (2) expanding the breadth of language coverage so that no language is left behind; (3) deepening model enrichment, for example by adding place-name glossaries from the Survey of India; and (4) continuous contextualisation of data to improve relevance [163-168][180-186].


Ayah shifted the discussion to broader concerns, warning that the emerging wave of “embodied AI” – glasses, robots, voice assistants – often records continuously, sends data to the cloud, and is trained predominantly on Western languages, thereby creating a hardware lock-in similar to the iPhone’s ecosystem [194-206][210-213]. She argued that open-source hardware can break this lock-in, likening its potential impact to that of Linux, which provides a neutral foundation for community-driven innovation [210-213].


She then outlined hopeful trajectories for the platform: reducing cost, improving battery life, shrinking size, enabling mesh networking of multiple units, scaling to stationary micro-data-centres powered by solar panels, and developing specialised applications such as agricultural assistants, privacy-preserving toys for children, or tourism guides [215-232]. These possibilities are “infinite” once the hardware platform is open and modular [215-226][227-232].


Fireside chat – panel discussion


Martin Tisne opened the panel by introducing Abhishek Singh as “the master and orchestrator of this entire summit” and announcing the launch of Panteo, a framework for culturally-aware data sharing [300-310]. He then set the stage for a conversation on data sovereignty and reciprocity.


Abhishek Singh argued that communities must retain rights over data derived from them and should receive tangible benefits, especially in sectors like agriculture where aggregated data can improve advice, while recognising that health data may require stricter privacy controls [299-307][306-310]. He illustrated the point with a Netflix documentary about tribal women annotating pest data, highlighting how local knowledge can dramatically improve AI outcomes [260-270]. Anne Bouverot echoed the need for trusted third-party institutions that can manage privacy-preserving data sharing, ensuring that cultural creators can opt-out or be compensated, and that data use balances public-interest research with protection against misuse [318-324][330-342]. Martin Tisne highlighted the tension between open-source AI development and the need for controlled governance of cultural datasets, prompting a nuanced debate on reconciling openness with cultural rights [329-332][318-324][330-342].


On AI sovereignty, Abhishek defined it as complete national control over the five layers of the AI stack (energy, data centres, chips, models and applications), asserting that no country should be dependent on external providers for any of these layers [368-373][374-382]. He noted that India already possesses energy sufficiency, data centres, models and applications, and is progressing toward domestic chip design and eventual fabrication, aiming for full-stack independence within the next 5-10 years [383-387].


The conversation then turned to Indo-French cooperation. Anne highlighted existing joint research on resilient, multilingual AI and suggested that France’s policy mechanisms, such as cultural quotas that fund local creators, could be adapted to AI to ensure cultural representation and funding [263-270][271-276]. Amitabh added that the partnership between India and France, reinforced by recent high-level engagements, offers complementary strengths for building alternative, sovereign AI solutions and shaping global norms [391-398][401-403].


India AI Innovation Challenge


Abhishek announced the India AI Innovation Challenge, an open-source competition that invites researchers, developers and entrepreneurs to hack the Bhashini–Current AI prototype. Submissions open on 25 February, with prize funding from both organisations; Bhashini will provide quantisation expertise and technical support, while Current AI will continue to release the hardware and software as public-good resources [409-424]. Ayah mentioned a possible prize pool of about $110,000, though the exact amount was not confirmed [420-422].


Sushant concluded by reaffirming that the combination of offline, multilingual, open-source hardware and collaborative governance of data and cultural assets can empower diverse communities, drive inclusive AI innovation and, ultimately, make AI work for everyone [233-236]. The session thus paired a concrete technical achievement with a shared vision for culturally aware AI and a call-to-action through the innovation challenge, directly aligning with the overarching theme of personal, local, multilingual AI for everyone.


Session transcript: Complete transcript of the session
Sushant Kumar

And therefore, how do we develop and support a paradigm that can make AI work for everyone? And that’s why we are here today. The session today is very aptly called: The case for personal, local and multilingual AI. Through a collaboration between Bhashini and Current AI, orchestrated by Kalpa Impact, we are proud to present to you today a seminal open source AI hardware device, one that is multilingual, handheld, privacy preserving and works in zero connectivity settings. So what we are going to do today is we are going to talk about the concept of AI. What we are going to show you after this will be a video that presents the imagination of what such a device could lead to.

in terms of making AI work for everyone. And once we have done that, there’s a special treat for all of you. The makers of the device and the collaborators at Bhashini are there in the room and they will demonstrate the product to you. So why don’t I begin with playing this video, which takes some creative liberties and captures our imagination of what this product would look like. Audio, please. Thank you. Thank you. India’s AI journey is no longer about pilots or promises. It’s about populations’ reach, clear use cases, last mile delivery. This is real world impact. And a connected vision for AI, not one that’s governed by any one country or one company.

I think all countries have a huge amount to bring to the table and a big belief in the power of collaboration. I was ready, the cup is open, now we need you. Come innovate AI for your own language, for your own community. We want to work with as diverse a group as possible. We can’t wait to see what you do. Yes, we’re back on. And for the next segment, I would like to invite Ayah Bdeir, the CEO of Current AI, to take us through the product demonstration. Ayah is an engineer and an entrepreneur with 20 years of experience building open source technology infrastructure that works at global scale. Ayah, over to you.

Ayah Bdeir

I have a quick interruption. I have to ask everybody to come here to take a picture so that the picture can be ready by the end of the panel. You have 90 seconds free to speak amongst yourselves. Thank you. All right. All right. Thank you so much for coming, everyone. I’d like to introduce Andrew Tergis, who was the lead engineer on this project from the Current AI team, who’s going to take us through a demo. Oh, there you are. And also Shalindra Pal Singh, who is a general manager at Bhashini, who was Andrew’s collaborator and worked very closely to integrate Bhashini models into the device. And I just want to say a couple of things. This project was undertaken in a six-week period, I think maybe closer to five weeks, actually.

So I just joined Current AI in January of this year. When I came in, the partnership with Bhashini had already been in discussion, and I was very inspired by Bhashini’s work on linguistic diversity and the 250 models. And we thought this was an opportunity for us to go all the way, say, to the user and create something where really people can create AI that works for themselves, for their communities, and for their languages. So this prototype is the beginning of a journey and also a platform to imagine infinite things that are possible. And so you’ll see how it works. But as it’s working, I also would like you to imagine what you could do with it and where you could take it.

And from my perspective, I’ll just say for Current AI, this is an example of how we’d like to work with partners, where we learn more about their interests and their focus areas and their priorities, and we zero in on a collaboration that we can develop together. We build it together, and then we release it as a public good. So in this case, it’s a piece of hardware and a development platform. In another case, it could be something else. But we’re really proud that this collaboration with Bhashini is our first collaborative build, and you get to see it kind of firsthand as you’re sitting here. So, Andrew, Shalindra, please join me on stage, and I’ll let you take us away for the demo.

Andrew Tergis

All right. Perfect. Hello. I’m so pleased to be able to show you this prototype that we’ve created. Yes. Oh, thank you. In front of the table. Wonderful. So this is our prototype open AI inference device. You know, unlike some other products you might have seen at this conference, which might be designed for one very specific user or one very specific use case, this device is designed to be used by any number of users for any number of use cases. The hope is that anyone could feel empowered to connect up to this device, write their own application, pull any number of models onto the device and run inference locally in their hand. We have one flagship application that we’ve developed in concert with Bhashini, which demonstrates the models that they’ve been developing over so much time.

This sample application, which we call “Here the World,” is an application where a vision-impaired user can press a button, ask a question in their native language about their surroundings, and have the device read back the response, again in their native language, leveraging Bhashini’s 22-plus languages. In particular, we’re leveraging an ASR, an automatic speech recognition module, to convert the audio into text in their native language. We’ll be leveraging an MMT, a neural machine translation module, to convert that text into English. We’re running it through a large language model with the image data to answer the question, and then we’ll be converting it back into their native language using, again, the MMT model, and finally a TTS module to convert it back into audio.

So this device is able to run all of those modules in concert. So without further ado, let’s try and give it a test query. Shalindra, do you think you can help me out here? I guess you’ll take the photo, and then I’ll spin it around quickly so the audience can see what’s happening. We’ll ask in Hindi. Let me just triple check. Yep, you’re all good. All right.

Shalindra Pal Singh

What it has done is: it has taken the image, then the automatic speech recognition model kicks in, then neural machine translation happens, then the response comes from the LLM that we have embedded, the translation happens again, and the text-to-speech is spoken out. We have quantized the models in such a way that they fit in. Usually when we do quantization there is always a trade-off, a hit on accuracy, but we have reached a point where there is no hit on the accuracy front.

Andrew Tergis

This is thanks to a truly huge effort from your team, and we wouldn’t have been able to fit such a high-fidelity LLM on this if you hadn’t done that great optimization work. So let’s see. Let’s ask another question. We have a couple of candy bars on this desk here, which we can show you. Let’s see. Let’s try it. I’m going to put this in English. What is on this table?

Device

The table has candy wrappers of Twix, Milky Way, and KitKat.

Andrew Tergis

All right. All right. It actually got the brands. And we have one more question of grave importance. But I’ll ask it in Hindi. That’s right. I got it. This is the best candy bar in the world. There we go. Would anyone like a candy bar? Anyone? Anyone? There you go. So just very briefly while we’re handing this out: this is currently based on the NVIDIA Jetson processing platform, but we’ve built it to support other platforms as well, because the processing that we’re doing does not depend on that. That just happens to be the platform we’ve chosen at the moment. And, yeah, we’re working on the ability to deploy any model that you could dream of onto this device.

Thank you.

Sushant Kumar

Thank you very much. How did everyone feel about that demonstration and the things that can be done? Thank you. Thank you. And kudos to the Bhashini team, which worked tirelessly, and, of course, Andrew and the Current AI team, which worked tirelessly to make sure the hardware, software, all of that was integrated. We had to get a device through customs as well. So that took some time, but eventually it’s here, and it’s working, which is amazing. And the best part is that the device is offline. All those queries, all the AI processing was happening on the device. And there are four or five models operational on that particular device, no mean feat. I salute the engineers who have worked on this, and there’s more to come.

And we know we have to get in a lot in a short period of time. So I will invite Ayah Bdeir, the CEO of Current AI, and Shri Amitabh Nag, the CEO of Bhashini, to join me for a fireside chat. And we’ll try and understand what it is about personal, local, multilingual AI that they are passionate about. So this is also about their motivations. So why don’t we start with you, Amitabhji. We all know a lot about Bhashini, we have heard about it, and you know it’s a superstar at this point in time in terms of what you have achieved. Tell us about the origins, tell us about how this all started and why this is personal to you.

Amitabh Nag

Thank you. See, we are all born with our mother tongue, right? We learn our mother tongues for a good 4-5 years before we land up in a school, and when we land up in a school it’s a three-language formula. So I am a Bengali, and in Bengali, you know, everything is eaten, so “chol khawe” is the right word. So when you go to the school and you have to do Hindi and English, you know how it could be: for the first 6 months people will be laughing at you when you are translating and speaking, because that’s the first way of speaking. You are not a native language speaker, so you will be translating and speaking.

That’s the linguistic nuance that you went after. So, you know, over a period of time, of course, we grew up. We were told that you have to learn English to succeed in life. So that’s another given which was there. And obviously, this opportunity came up. You know, there was already a concept which was there. And obviously, we started with, you know, one room office, first employee.

Sushant Kumar

When was this? Which year?

Amitabh Nag

This was in 2023.

Sushant Kumar

Okay. That’s recent. That’s recent.

Amitabh Nag

And then obviously, we started growing as a team, looking at various use cases. Initially, the first question which used to come up was accuracy. But then, you know, our models were built up in difficult conditions, because we didn’t have digital data to build the AI models, and so we collected the data through brute force. We built the models by going across to multiple places with translators who actually created the corpus, the digital corpus. We still had deficient data, but we went ahead to build the models and deploy them. And during deployment, we had challenges which obviously came up from all aspects.

And today, when we have actually deployed the use cases, learned from it, improved it, we are now in a situation where we are running about 15 million inferences a day with a 200 GPU system and all having dashboards which actually give you every inference timeliness, how much time it takes, et cetera, et cetera. So we are able to real -time monitor what is happening in our system, who are our customers, how they are using it.

Sushant Kumar

Fantastic. It’s wonderful to hear about your personal motivations. And I’ll move to you. How many languages do you speak?

Ayah Bdeir

My native tongue is Arabic and then I speak French, English and I’m learning Spanish

Sushant Kumar

So very apt to move this personal and multilingual I have two questions for you One, tell us a little bit about Current AI and why this interest in this open hardware and partnership with Bhashini, how does this tie back to Current AI’s strategy and second, why is this personal to you?

Ayah Bdeir

So, Current AI was actually born out of the AI Action Summit last year in Paris. It’s a public-private partnership with a mission to create AI for the public interest. And so it’s a partnership between philanthropy, government and the private sector to really say, we’re going to tackle public interest AI at scale. And the reason we’re going to do that is because the dominant companies that are governing our lives in AI operate at a scale, a financial scale, operate at an ambition level, that if we don’t match it, we don’t really have a chance to be a real alternative. And so Current AI was born out of that desire. The goal is to rally a global community, collaboratively and collectively, to build a public stack for open AI that’s completely vertically integrated.

And so the way we work is we work with partners, because the core premise is collaboration. We work with partners where we’ll identify an area of common interest and a priority and a gap in technology, and then we’ll zero in on that gap, work on it together, and then develop a piece of tech and release it as a public good. And so we encourage this collaboration, this creation of technology that is put back into the public good, as well as have grant-making under our fund pillar in order to encourage people already doing this work. And this topic is important to me, has been important to me for many years. I’m from Lebanon, from Beirut, and like I said, my native tongue is Arabic.

For the past many years, you know, with our use of WhatsApp and mobile and social and everything, a lot of us in the Arab world lost the use of Arabic. My mom and my sisters and I speak in English to each other online all day. We speak on WhatsApp in English. The voice recognition was never good enough in Arabic; you’d spend more time correcting it than doing anything else. It’s improved a little bit now. But really, technology has had an effect on the way we communicate with each other. And so for many years, it’s been a real concern for me that, you know, if technology is not made by us, it’s not for us.

And so when I joined Current AI early this year, multilingual diversity was already a topic, and I was very happy about that. And I really wanted to expand it into this idea of not just language diversity, but cultural diversity and cultural preservation as a whole. And so this idea came about, and you can tell more about it.

Sushant Kumar

Fantastic. What a story of genesis. And of course, rather than Silicon Valley making the devices, AI devices made for local use cases are going to be as effective as putting power into the hands of people. So, on inclusivity: Amitabhji, one of the visions of Bhashini is to expand access. When you think of this partnership with Current AI, what is the future you envision in terms of expanding access and creating inclusion, with Bhashini as the linchpin?

Amitabh Nag

So, a few things. When you look at the size of the device, we have almost reached a form factor which is quite significant: it’s small, right? And it can be carried to the last mile. And since it works offline, you are in a position to actually use it anywhere. So that’s the first part of inclusivity. We obviously have plans to look at smaller form factors as we go forward. The second thing is to look at the language coverage. We currently cover 22 languages. In our system we already have 16 languages, and 14 more languages on text, a total of 36 languages. And we would like to increase that breadth.

And recently we have digitized one of the tribal languages, Bhili, which doesn’t have a script. So that also gets added. So that is about the breadth of languages, which will be continuously added to. So first we are talking about form factor; second, we are talking about offline; third, we are talking about creating a breadth of languages so that no language is left behind, and hence no person is left behind, including the tribal languages. The fourth factor is about how we enrich the models, which is a continuous activity which Bhashini takes on. There are multiple areas where the models still have to be enriched.

For instance, we were talking to the Survey of India, and they have about 16 lakh place names which are still to be digitized and put into the system. So those are glossaries which we are building. There are contextualization efforts which are happening. So over a period of time, language enrichment as far as depth is concerned is another thing which we are looking at. So we’re looking at breadth, depth, offline and form factor as the four things which will move forward in this.

Sushant Kumar

Fantastic. I can certainly see the open hardware playing a big role in that as well. I have a question for you on how you look at the future. What gives you the most hope, and the most concern, about the future of language? You started talking about how you feel that Arabic and its nuances are getting lost. So what gives you the most hope or most concern about the future of language in an AI-driven world? Could you talk about that?

Ayah Bdeir

So I’ll start with the concern. I’m concerned about this new frontier of embodied AI. Over the past year or so, every big tech company has released their version of an embodied AI device that wants to enter your home, wants to be close to your body, wants to enter your personal space: whether it’s Meta’s glasses, or robots, or Amazon’s Alexa. And we’re not in control of these devices; we don’t know how they’re developed, and we don’t know how they’re trained. You know, last week or the week before, Meta announced that the glasses are going to start doing facial recognition on every person you encounter in the street.

So now, unknowingly, you’re walking down the street, and if somebody is wearing Meta glasses, you are being recorded and facially recognized. So we have these devices, we don’t know how they work, they’re continuously recording our data and sending it out to the cloud. We also don’t know how they’re trained, and oftentimes they’re trained on Western languages. And so hardware is where the lock-in first starts. It’s how the iPhone locked in a lot of technology innovation: what happens is these companies will give us APIs into their devices, startups will start forming and building on top of these devices, the startups build a dependency on the device, and you start to build a whole stack on

a core piece of hardware that you do not control. So it’s really a core building block that we have to crack before we let them own the entire stack or the supply chain. I spent 15 years before Current AI in open-source hardware, and I’ve seen how powerful it is when you develop on an open platform and people do what they want with it. It’s the same power that you get from something like Linux. So that’s a big area of concern. The area of hope for me is that there are many trajectories for us to improve from here. On one side, you can improve the device itself.

You lower its cost. You improve its battery life. You shrink its size. You make it more beautiful. So that’s one axis. Then there’s another axis you can develop along: you can have multiple of these devices together, connect them in a mesh network, and now you have distributed inference that you can run something larger on. You can have a larger version of this device that’s stationary; it can be like a micro data center. You can put a solar panel on it, and suddenly it doesn’t need a battery. So you can infinitely innovate on the possibilities of this core building block. And then the third track is what you do with it.

You make a device for a farmer to identify how to deal with their crops. You make a device for a parent who wants to give their kid a toy but doesn’t want the toy communicating their private data back to the cloud. You create some sort of, I don’t know, tourism device that you can put around your neck and that helps you move around; various sorts of things. The opportunities are infinite.

Sushant Kumar

Fantastic. And I wish we had more time to just continue going; we’re just scratching the surface. But we’re at time, and I thank you, Amitabhji, for the great work that you and your team are doing, and I wish you all the best and all the luck in making that vision into a reality. Thank you very much. And we move into our next segment, which is another fireside chat. For that, I would now hand the floor to a long-time friend and colleague, Martin Tisne. Martin leads the AI Collaborative, an organization working on building AI grounded in democratic values and principles, and he’s also the chair of Current AI. Martin, over to you.

Martin Tisne

Thanks very much. My first task is going to be to welcome Abhishek Singh, who everyone knows, who is the master and orchestrator of this entire summit. Congratulations, Abhishek, and I’m amazed you’re still standing. Welcome. And to welcome Anne, who was the orchestrator of the Paris summit and special envoy to the President. Thank you very much. So, as Sushant was saying, I’m extraordinarily excited by Ayah’s leadership when it comes to Current AI, and the work in really turning this work around linguistic diversity into the question of cultural preservation. It seems to me that ensuring that AI isn’t squashing all of these incredible cultures that make up the beauty of the world into a monoculture, or into a small number of monocultures, is one of the most important questions that we have today. So my first question to both of you, maybe starting with you, Abhishek, and then to Anne, it’s the same question: what is your vision?

What is the world that you would like us to live in when it comes to this intersection of AI and culture? If we get it right, whether it’s five years or ten years from now, what does it look like?

Abhishek Singh

…languages. He knows only his local tongue. He does not even know how to key things in, or how to navigate a captcha, or he gets lost with the hashtags and the at signs. So for such people, if they are able to talk to the device, put their query in without internet bandwidth or connectivity, and get a reply back, that will be empowering. And that, I think, is the ultimate objective of this summit also: democratizing the use of AI and ultimately making AI work for all. Thanks.

Martin Tisne

Thank you very much, Abhishek. Anne, what is your vision?

Anne Bouverot

So, of course, I share a lot of what Abhishek said. I also think about using AI through our phones, and one way to say this is that when I get online on my phone, I mean, I love San Francisco, I love Shanghai, but I’d like to have a wider choice. I don’t necessarily want to be transported to Silicon Valley or to Shanghai when I get into AI. And that’s a little bit of a joke, but if all the cultural representation, if all the legal background, if all the customs that are taken as just the de facto way you interact with people, if that’s the choice, well, that’s just such a reduction of cultural diversity.

And I think it’s just not okay. It’s not just about being able to have access to a French AI or an Indian AI. It’s even more than that. If I’m interested in music and if I come from a particular area in France, well, I’d like to be able to have that community and its culture represented there. So I think that’s part of my vision.

Martin Tisne

Thank you. And if I can stay with you just a second, Anne: from a French perspective, from France’s point of view, how do you see culture and AI playing together? What does it look like? When I was a kid growing up in France, from a cultural perspective, it was at a time where, and I actually think it was a good idea in retrospect, you’ll tell us what you think, there was a law that mandated a certain percentage of music on radio to be sung in French. There was a law that mandated a certain amount of movie productions to be in French. And that’s ended up, it seems to me, with a certain amount of, you know, cultural patrimoine, as we say, continuing to exist.

So from a policy perspective in France, when it comes to artificial intelligence and culture, do you think that at some point there needs to be a sort of a set norm, like we did in sort of in movies and radio? What do you think?

Anne Bouverot

That’s a good question. I don’t know whether we need a set norm, but yes, there are mechanisms to encourage creation in France and in Europe, and that’s quite important. With every movie that you go and see, which can be from any country, a certain tax on the ticket, a certain amount of money, goes to a fund that then helps French creators to go and prepare whatever they want as their next film. And I think that’s a good thing. That mechanism doesn’t make it hegemonic. I mean, of course, we love culture from all over the world, but it helps ensure that there’s an element of French cultural creation.

And that’s what we definitely want to continue to have. And we want people to have the ability to see that in France, but also all over the world, just like we love to see Indian movies or listen to Indian music, some symphony or some movement. So that diversity needs to be maintained, needs to be ensured, including through some mechanisms to fund it. Yes.

Martin Tisne

Thank you very much, Anne. Abhishek, a similar question to you.

Abhishek Singh

I think if AI has to cover all aspects, then it has to be rooted in data sets that are diverse, and data sets in any cultural context will include not only languages but also the culture, the heritage, the music, the movies, the songs and lots of folklore. In fact, if you look across India, if you go to the rural areas, there are lots of traditions which are not even documented well. So those things are not even available in a digital format; they are known to people. In fact, recently I was watching a documentary on Netflix called Humans in the Loop.

It’s set in Jharkhand, a state of India with a large tribal population, and there are these tribal women who are doing data annotation for an American firm. It shows that they are seeing leaves and pests and they have to mark whether it’s a pest or not. So this young girl is there, and what she does is that she sees an image of a pest and marks it as not a pest. Her manager comes down heavily on her and says that this is obviously a pest: how are you saying it’s not a pest? She says, this tree grows in my local forest, around where I live, and I know that this worm eats only leaves which are dying.

In a way, it helps the plants. It’s not a pest. So again, having this traditional knowledge built into the corpus of data sets on which we train AI models will be very, very vital if we have to ensure that AI doesn’t hallucinate, if AI is to come near to what a human is. So it becomes very important to capture this cultural context from all across the world, from all communities, all cultures, all traditions; only then will we be able to build something which is truly near to human. A purely technological pursuit of AGI will not solve the problems that we are living with.

Martin Tisne

That’s a great example, thank you. But then, maybe staying with you, Abhishek, for a second, and I’ll come back to you, Anne, on the question of reciprocity. So you talked about the data sets. Communities and cultures in all their diversity are sharing their data; we want them to be sharing their data with different AI models. What does it look like from the community perspective, do you think? Should they be involved in it? Should they have rights over the data? How do you think about it?

Abhishek Singh

It is a very interesting question, because when it is about sharing of data across companies, across industry, we have to create frameworks which allow data to be used for public purposes. That means data used in a way which does not violate the privacy or the personal identity of the person who owns the data, the person the data belongs to, the data principal per se. So when data is being shared, the community will need to be involved. If you don’t do that, in the interest of business and in the interest of commercial requirements, the possibility of misusing the data goes up. So it’s very important to have standards, not only technical standards but community standards which are rooted in the culture and the belief systems of the place the data is coming from, in order to ensure that the models and the applications…

Martin Tisne

Thank you. If I can go a little bit further on that question: there’s the question about the rights of the individual and the rights of the communities towards the data. Do you think, in the way that you’re working, there is also a reciprocity, in terms of, if data about them is used for a particular purpose, the community should then benefit from it, whether it’s a translation or another device? How do you think about that?

Abhishek Singh

You need to think about it like this: different use cases may have different applications. For example, say it’s data about agriculture, and I have aggregate data about a particular area, and that is used to generally advise farmers with regard to what they should sow for maximum benefit, and at what time they should sow it. Then that data should be shareable, and that’s to the benefit of everyone. But if we take, for example, health data, then the individual might not want to share that data with the larger ecosystem. So I think it will be context-specific, and we cannot have general rules about the sharing of data and the reciprocity principles across different sectors.

Martin Tisne

Thank you very much. Anne I have a similar question for you on this question of reciprocity. What’s your take?

Anne Bouverot

I think that’s a very profound question. Part of the reason why you want to share cultural data is so that cultures are preserved and you don’t end up with one or two or three cultures in the world, but something that is more diverse. So it is in the interest of a cultural group, of a civilization, that in the world of AI this culture is represented. And from that perspective, you have a very natural reciprocity loop. But at the same time, creators are saying, I don’t want my data to be used if I don’t have a mode of being compensated or recognized or a way to oppose. And so you have this tension between artists, for example, who say, well, I want my rights to be maintained and I want some type of compensation.

If this is being used to feed AI models, and then for people to earn money out of it. But then on a collective basis, you do want that culture to be represented. So I’m not sure I have a solution, but I very clearly see the tension. One of the ways we can navigate that is to have a right of opposition for specific artists, so that they can say, no, my data, my creations are not going to be used. And at the same time, you can certainly have historical information, things that are not so subject to remuneration for living artists, be part of the general cultural data that you use to train AI. But beyond these two obvious things, I’m not really sure.

So we need to continue to work on this.

Martin Tisne

Thank you. And again, just to go a bit deeper on the question: it really is a fascinating question, because from the perspective of the communities whose data it is, it’s data about them. As you say, you want people to know about your culture, you want the culture to be preserved, and at the same time you want a certain degree of agency over how the data is used. In an earlier panel we were talking about indigenous data sovereignty, and about the Maori community in New Zealand, and the degree to which, as I understand it, in Maori culture any data, any information that pertains to Maori culture is effectively part of Maori culture. So there’s a real question of agency.

My question is this. In the run-up to the Paris summit, when we were working together, we talked quite a bit about the relation between open-source AI on the one hand, and the governance of the data on the other, the data then being controlled in different ways. So how do you think about this balance? Because it strikes me that getting the balance right between the open-source components on one hand, and I’ll come to the same question to you in a second, and on the other hand a more controlled approach around data governance, that’s the special sauce. What do you think?

Anne Bouverot

Yeah, I completely agree. And maybe that’s, as Abhishek was saying, maybe the example of health data is a good one there because for cultural data, you want the general benefit and you want to preserve artists’ rights. I think those are the two dynamics. For health data, you do want, as an individual, as a patient, if you’re being asked the question, do you want to protect your personal data, the answer is yes. If you’re being asked the question, are you willing to share your data with other people who have or are at risk of a similar illness so that it can help them, the answer is yes. And then how do you balance the two? And so you need to find some ways to share data in a platform or in a way that you have trust into.

And so it needs to be privacy-preserving. It needs to be held by an actor you trust; even if you don’t go and look at all the terms and conditions, you need to understand that it’s an institution or a third party that you can trust. And then you want to be able to rely on that third party to make the right decisions: yes to sharing the data to enable research and find new cures, but maybe not to sharing it with insurance companies so that you can be charged a different rate depending on what your personal situation is. And then when you get into sovereignty, maybe you’re happy for this to be shared with innovative startups in your country or your region that will develop cures and new treatments, but maybe not with some other actors.

So you get to a number of different levels and questions. And for that, having trusted third parties that can, on your behalf, make the right decisions is, I think, very important.

Martin Tisne

Thank you. Let me take the same question to you. How do you see that balance between, on the one hand, we’ve talked a lot about open source, open source AI over the course of the week. How do you see the balance between, on the one hand, open source, on the other hand, the question of the cultural data that we’ve been talking about?

Abhishek Singh

Again, ultimately, I’ll go back to the end objectives. What is the purpose for which we are sharing the data? Is it serving public interest or is it serving private interest? Is there a benefit for the user to whom the data belongs? So, for example, take health data. We are past COVID, but we keep on hearing about outbreaks of flu and other ailments. If aggregate data about the incidence of such diseases, and its linkages with other factors, environmental factors, weather factors, rain factors, is shared, people can think of devising AI-enabled solutions, integrating various data sets and trying to see why, in a particular geography, in a particular locality, something of this sort is happening.

That is the public interest. So we will have to define, on a case-by-case basis, whenever data is being shared, whether open source or in a proprietary solution, what the end objective is, what problem I am going to solve, and whether it is serving the larger interest of the community. Is it serving the larger public interest or is it being done to benefit a few corporations? Take, for example, the example she gave about insurance companies. If the data about health consumption or something leads to an increase in my insurance premiums, that is not fair, because they are linking that data back to the individuals to whom the data belongs. So we will need to think of privacy-preservation techniques, we will need to think of anonymization techniques, so that in no way is the data principal, to whom the data belongs, harmed in an adverse manner.

So we will have to do this in a very nuanced manner; there is no one-size-fits-all solution. If we do that, we will avoid the risks dominating the narrative, and we will move towards the positives that we all hope for.

Martin Tisne

Thank you very much. Then there’s a question I can’t resist asking you, which is: what’s your definition of sovereignty? You mentioned the term, we’ve talked a lot about this this week, and in the context of this conversation it’s really interesting. There’s a question of sovereignty from a nation’s perspective; there’s a question of sovereignty, I mentioned that Maori example, indigenous data sovereignty, from a community’s perspective; and then we’ve both been talking about health, so there’s a question, at an individual level, of the sovereignty I have over data about me. So with your experience, coming towards the end of the summit, and the experience that India has, when you think about sovereignty and AI, what do you think of?

Abhishek Singh

I feel that sovereignty, of course, is traditionally a political science concept, wherein nations which are sovereign need to have complete control over what they do and how they do it, with entire control over their decisions. So when you apply it to technology, and when you apply it to AI specifically, the same concepts will apply with regard to what I want to do, with whom I want to do it, and how I want to do it. Nobody else should make decisions on my behalf. So maybe, ideally, a complete sovereign AI stack will mean that we should have complete control over all the five layers of AI: whether it’s the energy layer, the data center infrastructure, chips, models, applications, use cases.

We should have complete control over it. But the technology is evolving right now, and I don’t think any country, in fact barring none, has complete control over the entire AI stack. No country has complete control. In the context of India, we are there on energy sufficiency. We have the data centers, we have our models, our applications, but we don’t yet have the complete chip capability. We have the capability to design; we hope that in three to five years we will design our own chip, and in five to ten years we’ll be able to have a fab which we can take to scale. In the short term, if I can decide which chip I want to use, how I want to use it and how I procure it, rather than being subject to conditionalities where others force something on us, that will be sovereignty.

So sovereignty in AI will follow the same concept of sovereignty that we apply in political science, wherein complete control of the decisions lies with the sovereign government. That should be the way we look at sovereignty in AI as well.

Martin Tisne

Thank you very much. So, just as we’re ending, and we’re now at time, feel free to weave in the questions of sovereignty. I’m curious, in the wake of President Macron’s state visit and the bilateral relationship between France and India, what do you both see, starting with Anne and then finishing with you, Abhishek, as opportunities for France and India to jointly work on these global norms, these global approaches, for a more contextual approach to artificial intelligence, for a culturally inclusive approach to AI?

Anne Bouverot

Well, I’ll try to be short, but this is the year of joint innovation between India and France. There are many areas where we’re collaborating and will continue to collaborate. Clearly, Current AI and this work on multilingual AI is one. Working on AI that is resilient and sustainable by design, as we were just discussing earlier with Abhishek, is clearly a priority, as is joint research. And then I can’t resist weaving in the work on sovereignty. On sovereignty, no one actually, not even the U.S., has everything; they don’t have all the chips. So nobody can do everything alone. I believe sovereignty means having a choice and building alternative solutions. And I really think we can, and we will, jointly build alternative solutions between France and India.

Abhishek Singh

I echo her, and in fact the partnership between India and France has been there for quite some time. Last year we co-chaired the AI Action Summit with France, and the partnership has continued this year. This year, of course, as you know, we have launched a year of innovation, and many more activities were announced by President Macron and our Prime Minister in the last week. We are looking forward to joining you at the World Tech in the next few months, and there are many more activities: partnership at the university level, at the research level, at the business level, at the government level. So I strongly believe that, working jointly, especially as trusted partners, France and India have complementary strengths, and we can try to present an approach to building solutions that can become an example for the whole world.

Martin Tisne

Thank you very much. It was an honour to launch this in Paris, and a pleasure to launch this partnership in India. Thank you. Thank you.

Announcer

Hello. Thanks, Martin. Abhishek Singh sir, I request you to stay on stage. And Ayah, we’d love to have you on to launch the Global Innovation Challenge, in the spirit of what Anne said. And Amitabh Nag sir as well. Please.

Abhishek Singh

Am I going first? Okay, great. So, great session, great thoughts, great demo. All of us have seen the demo of the reference device, the device which has been built in partnership between Bhashini and Current AI. And in fact, I must mention that it was just a few weeks back that we had this discussion, because I had been discussing with Martin, after the discussions and announcement on Public Interest AI, what Current AI will do with the 400 million dollars, euros, that they have raised. And I was saying, let’s do something which can really make an impact, and if we can do something at the Impact Summit, it will be worthwhile. But kudos to the teams: they have built this collaborative design, built by engineers from both Bhashini and Current AI, in such a way that it’s a platform, a prototype on which we can innovate.

It’s completely open source. It’s hackable, it’s privacy-preserving, it’s multilingual. And with on-device AI, this prototype is capable of functioning in remote locations, not only in India but anywhere else in the world where connectivity is a challenge for any reason: if there’s an earthquake or a natural calamity and we can’t have connectivity, it can still work. So that can be really transformational for people to access services. And in partnership with Current AI and Bhashini, it is in fact my honor and privilege to announce the India AI Innovation Challenge, which will give an opportunity to researchers, engineers, developers and entrepreneurs to build on this prototype. The prototype will be available in an open-source manner for everyone to hack: you can make it smaller, you can make it sleeker, you can solve individual use cases for different sectors. It’s based on an open-source software and hardware design, and the kind of use cases one can think of will be limitless.

So there will not be one but multiple solutions that can be built on it. We are opening it today; the date here says that submissions will open on 25th February. On 25th February we will launch the challenge on our website, to which applications can be submitted, and there is some time to build the actual device. Those who win will get a very handsome reward, funded both by Bhashini and Current AI, and together we will try to ensure that we are able to build a product that the whole world can use.

Amitabh Nag

So we will continue to support this effort through our quantization mechanism, and technical support will also be available with respect to model enrichment and so on. So this will be a joint effort: people are supposed to put in the effort and come back to us on the challenges, and we will work on that together.

Ayah Bdeir

Let me just add, for Amitabh: Bhashini is offering, I think, a $110,000 prize to the winners. Or should people make a demand? How does the number increase? On your way out, please make a request, everyone, for the number to go up. There is also a support page so that participants have help while they are developing their hardware and software, and can showcase their work online to inspire many other people. Really, the point of it is to expand imagination and start this conversation about making your own AI, about AI being personal and multilingual and solving communities' and individuals' own problems. Today it's a piece of hardware; tomorrow it could be something in the software; the day after, it could be in data. So really, this is the beginning of the journey. Thank you so much for coming, everyone. Thank you for being such good partners. Thank you, Amitabh, and the Bhashini team; thank you to the Current AI team; thank you, Martin, for bringing us together. Have a great rest of the week, and hopefully some rest for you. Bye.

Related Resources: Knowledge base sources related to the discussion topics (19)
Factual Notes: Claims verified against the Diplo knowledge base (2)
Confirmed (high confidence)

“The prototype was built in a six‑week (actually five‑week) sprint, made possible by pre‑existing partnership discussions between Current AI and Bhashini.”

The knowledge base states the project was undertaken in a six-week period, possibly closer to five weeks, and notes that the partnership with Bhashini had already been in discussion when the Current AI team joined [S32].

Additional Context (medium confidence)

“Bhashini’s work on linguistic diversity and its large portfolio of models (250) inspired the collaboration.”

Bhashini’s focus on linguistic diversity is highlighted in the knowledge base, which describes admiration for its work on language diversity [S1] and mentions that the partnership discussions centered on this theme [S32]; however, the exact number of models (250) is not specified in the sources.

External Sources (122)
S1
Inclusive AI_ Why Linguistic Diversity Matters — -Sushant Kumar- Session moderator/host
S2
Building Public Interest AI Catalytic Funding for Equitable Compute Access — – Dr. Shikha Gitao- Andrew Sweet- Sushant Kumar
S3
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S4
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S5
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S6
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S7
How to make AI governance fit for purpose? — – Anne Bouverot- Chuen Hong Lew – Jennifer Bachus- Anne Bouverot
S8
Inclusive AI_ Why Linguistic Diversity Matters — – Amitabh Nag- Ayah Bdeir – Ayah Bdeir- Martin Tisne
S9
Inclusive AI_ Why Linguistic Diversity Matters — -Amitabh Nag- CEO of Bhashini
S10
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — – Kritika K.R.- Amitabh Nag – Prasanta Ghosh- Amitabh Nag
S11
Inclusive AI_ Why Linguistic Diversity Matters — – Ayah Bdeir- Martin Tisne
S12
Building Public Interest AI Catalytic Funding for Equitable Compute Access — The panelists challenged the narrow focus on compute ownership, with Martin Tisné warning against potential “white eleph…
S13
ElevenLabs Voice AI Session &amp; NCRB/NPM Fireside Chat — -Shailendra Pal Singh: Role/title not explicitly mentioned, but appears to be a co-presenter/expert on Bhashini translat…
S14
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Shailendra Pal Singh- Senior General Manager, Bhashini
S15
Inclusive AI_ Why Linguistic Diversity Matters — -Shailendra Pal Singh- General manager at Bhashini, worked on integrating Bhashini models into the device
S16
Open Forum #30 High Level Review of AI Governance Including the Discussion — – **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology Abhishek Sing…
S17
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S18
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S19
Mobile Working Group Peer Reviewed Document — –  Device : …’a piece of equipment with the mandatory capabilities of communication and the optional capabilities of se…
S20
Foreword — 16. BT. 2019. BT’s Cyber Index reveals the scale of today’s cyber threat . https://newsroom.bt.com/ bts-cyber-index-reve…
S21
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S22
Inclusive AI_ Why Linguistic Diversity Matters — – Shailendra Pal Singh- Andrew Tergis
S23
Announcement of New Delhi Frontier AI Commitments — -Andrew: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S24
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Against this backdrop, we countries of the Global South must prioritise strategies and regulations for the ethical and responsible…
S25
IGF 2024 Global Youth Summit — Margaret Nyambura Ndung’u: Thank you, Madam Moderator. Good morning, good afternoon, and good evening to all of the di…
S26
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S27
AI for agriculture Scaling Intelligence for food and climate resilience — The speaker stresses that moving beyond pilot projects to full‑scale platforms demands trust, investment, and a replicab…
S28
Building Climate-Resilient Systems with AI — The focus must shift from research and pilots to deployment and impact through coordinated international efforts
S29
Transforming Health Systems with AI From Lab to Last Mile — And it has become so sensitive that today a lot of our customers, they do ask us whether you have a continuous, continuo…
S30
How Multilingual AI Bridges the Gap to Inclusive Access — And I was like, oh, my gosh, this is so cool. and really the fact that they were going to sort of the source and getting…
S31
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as an AI race with a single winner. Officials argue A…
S32
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai_-why-linguistic-diversity-matters — And so the way we work is we work with partners because the core premise is collaboration. Work with partners where we’l…
S33
Collaborative AI Network – Strengthening Skills Research and Innovation — So that is definitely a public rail. And I know that in different parts of the world, there are many such rails being cr…
S34
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Waqas Hassan:I’d like to add one thing to say, we would just start, and I said, she’s spoken about global cooperation as…
S35
The strategic imperative of open source AI — A similar dynamic unfolded in the 1990s as expensive, proprietary systems like commercial Unix and Microsoft Windows dom…
S36
China unveils new open-source operating system: reducing reliance on US technology — China’s first open-source desktop operating system, OpenKylin 1.0, was unveiled on 5 July, marking a significant milesto…
S37
DPI High-Level Session — He impressed upon the audience that even the largest tech corporations are increasingly dependent on open-source softwar…
S38
Generative AI presents the biggest data-risk challenge in history — Cybersecurity specialists warn that generative AI systems, such as large language models, are creating a data risk frontie…
S39
Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64 — Ananya Singh:Yes, apparently it’s no longer oil, but it’s sunlight. Well, historically, the era of colonialism ushered i…
S40
Data first in the AI era — AI system, and particularly I’m thinking like large models like GPT-4, Lama, are trained on enormous data sets that are …
S41
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S42
https://dig.watch/event/india-ai-impact-summit-2026/shaping-ais-story-trust-responsibility-real-world-outcomes — Well, AI is… …energy -intense, especially now in the training phase. I think some of the data that are out there, it…
S43
WS #119 AI for Multilingual Inclusion — To achieve multilingual inclusion in AI, there is a need for innovation and local solutions. Communities should create t…
S44
Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170 — Sunil Abraham:Thank you so much for that. And a special thanks to all my friends and colleagues at CGI.br I’m very grate…
S45
WS #208 Democratising Access to AI with Open Source LLMs — Abraham Fifi Selby: you’d like to answer. Yeah, I agree with you 100%. There is no competition in terms of this. Tha…
S46
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Melinda Claybaugh:Yeah, so I think we’ve all given some examples of benefits of open source technology. And I think I’ll…
S47
How Small AI Solutions Are Creating Big Social Change — When asked about platform competition, the speakers showed different perspectives. Aisha emphasized that it’s not a zero…
S48
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Low to moderate disagreement level with high strategic significance. While speakers agreed on fundamental goals of lingu…
S49
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S50
Agents of inclusion: Community networks &amp; media meet-up | IGF 2023 — It amalgamates a variety of open-source applications which can be deployed offline. Notably, ‘Local’ can be implemented …
S51
The Future of Digital Agriculture: Process for Progress — Dejan Jakovljevic:Thank you so much. I will, while I quickly share my screen. Wonderful. So first of all, thanks again f…
S52
1 Introduction — In the case of R&amp;D focused on Life sciences technologies/biotechnologies, the projects mostly deal more with the us…
S53
Positive disruption: Health and education in a digital age — Pathways for Prosperity Commission published two digital policy briefs on health and education that provide guidelines for cou…
S54
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S55
Panel Discussion Data Sovereignty India AI Impact Summit — No, as you said, there are different ideas, different theories, different narratives going on in sovereign. Everybody ha…
S56
Technology Regulation and AI Governance Panel Discussion — And that’s one of the key words is sovereignty. From the government view, from the big companies view, we need to manage…
S57
Responsible AI for Shared Prosperity — The balance between open-source development and community sovereignty presents ongoing challenges. While open-source app…
S58
Inclusive AI_ Why Linguistic Diversity Matters — The conversation expanded to broader themes of cultural preservation, data sovereignty, and the balance between open-sou…
S59
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from gover…
S60
Open-source tech shapes the future of global AI governance — As the world marks a decade since China introduced the idea of building a ‘community of shared future in cyberspace,’ th…
S61
1.1 CHALLENGES IN ENVIRONMENTAL INNOVATION — 1 ‘Imperfect appropriability of knowledge creation due to positive externalities: due to the non-rivalry nature of many …
S62
Hardware for Good: Scaling Clean Tech — Ann Mettler: Because I work on these issues also every single day. The challenge in clean tech and innovation is that…
S63
Table of Contents — Tutorial: The introduction of new technology to replace traditional systems can result in new systems being deployed wit…
S64
The impact of regulatory frameworks on the global digital communications industry — Ms Ellie Templeton is a Cyber Security Research Assistant at the Geneva Centre for Security Policy. She has an Internati…
S65
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S66
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S67
Beneath the Shadows: Private Surveillance in Public Spaces | IGF 2023 — The debate centres around the issue of control and consent regarding users’ biometric and personal data. One perspective…
S68
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S69
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S70
WS #323 New Data Governance Models for African Nlp Ecosystems — This comment became a cornerstone for the rest of the discussion. Multiple speakers referenced this ownership vs. consen…
S71
WS #203 Protecting Children From Online Sexual Exploitation Including Livestreaming Spaces Technology Policy and Prevention — The disagreement level is moderate but significant for policy implications. While speakers largely agree on the severity…
S72
Policy Papers and Briefs – 1, 2014 — Based on these elements, two solutions can be envisaged: ‘software’ and ‘hardware’ inviolability.
S73
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Moderate disagreement with significant implications. The disagreements are not fundamental conflicts but represent diffe…
S74
INTERNET GOVERNANCE FOR DEVELOPMENT — – the importance of retaining policy space for developing countries with regard to the use of Free and Open Sourc…
S75
Global AI Policy Framework: International Cooperation and Historical Perspectives — -Sovereignty vs. Openness in AI Development: The concept of “open sovereignty” emerged as a key theme – the idea that co…
S76
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S77
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S78
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — The analysis of the arguments reveals several important points regarding the use of technology in different contexts. On…
S79
AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409 — We propose a session that will delve into the opportunities, challenges, and risks arising from the use of artificial in…
S80
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: I want to address this with an anecdote. Because I am Norwegian, I feel partly responsible here. I mean, I…
S81
C O N T E N T S — shall lead to more innovative and practical Blockchain solutions. – v. Creation of Supportive Legal and Regulatory Fram…
S82
 Network Evolution: Challenges and Solutions  — Miguel González-Sancho:Okay, I am Miguel Gonzalez-Sancho. I am head of unit at the European Commission in DigiConnect of…
S83
1 Introduction — EUlevel research and innovation support policy increasingly focuses on a ‘ mission-oriented innovation policy ‘, i.e. a …
S84
Inclusive AI_ Why Linguistic Diversity Matters — “And the best part is that the device is offline”[46]. “Four models operational on that particular device, no mean feat”…
S85
Open Forum #38 Harnessing AI innovation while respecting privacy rights — 2. Privacy-Enhancing Technologies and Techniques Audience: Hello. Thank you so much for the presentation. I’m from N…
S86
Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170 — Sunil Abraham:Thank you so much for that. And a special thanks to all my friends and colleagues at CGI.br I’m very grate…
S87
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Melinda Claybaugh:Yeah, so I think we’ve all given some examples of benefits of open source technology. And I think I’ll…
S88
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Audience: My name is Satish and I have a long background in open source. I am presently part of ICANN and DotAsia organi…
S89
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S90
How Small AI Solutions Are Creating Big Social Change — When asked about platform competition, the speakers showed different perspectives. Aisha emphasized that it’s not a zero…
S91
How Multilingual AI Bridges the Gap to Inclusive Access — Bedir expanded Current AI’s focus beyond language to broader cultural preservation, recognizing that culture encompasses…
S92
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S93
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Low to moderate disagreement level with high strategic significance. While speakers agreed on fundamental goals of lingu…
S94
Agents of inclusion: Community networks &amp; media meet-up | IGF 2023 — Moreover, Wagenrad expanded the applicability of her solar controllers beyond their conventional use. New prototype cont…
S95
The Future of Digital Agriculture: Process for Progress — Dejan Jakovljevic:Thank you so much. I will, while I quickly share my screen. Wonderful. So first of all, thanks again f…
S96
Positive disruption: Health and education in a digital age — Pathways for Prosperity Commission published two digital policy briefs on health and education that provide guidelines for cou…
S97
State of Play: Chips / DAVOS 2025 — Kosmowski provides examples such as healthcare equipment (MRI and CAT scan machines) and fast food restaurant ordering s…
S98
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai_-why-linguistic-diversity-matters — But as it’s working, I also would like you to imagine what you could do with it and where you could take it. And from my…
S99
Panel Discussion Data Sovereignty India AI Impact Summit — No, as you said, there are different ideas, different theories, different narratives going on in sovereign. Everybody ha…
S100
Panel #3: “Governing data: between sovereignty, ethics and security in the age of interconnection” — Drudeisha Madhub: Thank you very much for inviting me to the OIF. It has really been a lovely workshop since yesterday, it is a beautiful …
S101
Technology Regulation and AI Governance Panel Discussion — And that’s one of the key words is sovereignty. From the government view, from the big companies view, we need to manage…
S102
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S103
Main Session on Artificial Intelligence | IGF 2023 — Audience:Okay. Hello, everybody. This is Hossein Mirzapour from data for governance lab for the record. Thank you for br…
S104
Opening address of the co-chairs of the AI Governance Dialogue — Tomas Lamanauskas: Thank you, thank you very much Charlotte indeed, and thank you everyone coming here this morning to j…
S105
Harnessing Collective AI for India’s Social and Economic Development — The discussion began with the moderator asking the audience whether they believed technology was reserved for the elite,…
S106
Open Internet Inclusive AI Unlocking Innovation for All — Anandan acknowledged the economic reality that makes open-source challenging: “if you invest a trillion dollars, you can…
S107
Digital Safety and Cyber Security Curriculum | IGF 2023 Launch / Award Event #71 — Moderator:perhaps. If she can launch it again. Steemed attendees, allow me to welcome you on behalf of the creators unio…
S108
Seeing, moving, living: AI’s promise for accessible technology — The RYO bionic hand demonstrated what 95% of natural movement looks like. Visitors watched it handle delicate objects, perf…
S109
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — Nikos Christodoulides: Excellencies, distinguished colleagues, last year’s Summit marked the beginning of a new phase …
S110
Driving India’s AI Future Growth Innovation and Impact — And lastly, goes back to the same thing. And maybe I’ll use the same example. You know, we had the UPI of money. We need…
S111
AI 2.0 Reimagining Indian education system — So these are the fundamental shifts which we have witnessed post -COVID. And then if you look at the artificial intellig…
S112
India deploys AI to modernise its military operations — In a move reflecting its growing strategic ambitions, India is rapidly implementing AI across its defence forces. The country’…
S113
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Although I did check, and I can gently point out that England remains just ahead of India in the ICC test rankings, so n…
S114
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S115
Founders Adda Raw Conversations with India’s Top AI Pioneers — A group photo was planned to conclude the session and maintain networking connections
S116
https://dig.watch/event/india-ai-impact-summit-2026/digital-democracy-leveraging-the-bhashini-stack-in-the-parliamen — mostly from my understanding and experience with the English that has happened, in the past. Yeah. interesting points, P…
S117
Webinar session — Darkwah contends that regardless of whether participants viewed the process as successful or not, everyone can identify …
S118
https://dig.watch/event/india-ai-impact-summit-2026/keynote-vishal-sikka — Thank you so much. Thank you so much. Wow, wonderful introduction and what an amazing event. I want to share three point…
S119
Day 0 Event #59 How to Develop Trustworthy Products and Policies — The project timeline was estimated at six months, though government integration requirements might extend this timeframe…
S120
The strategic shift toward open-source AI — The release of DeepSeek’s open-source reasoning model in January 2025, followed by the Trump administration’s July endor…
S121
OpenAI joins dialogue with the EU on fair and transparent AI development — The US AI company, OpenAI, has met with the European Commission to discuss competition in the rapidly expanding AI sector.…
S122
Bridging the AI innovation gap — LJ Rich, as the moderator, acknowledges and reinforces the invitation for new partners to join the collaborative AI for …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sushant Kumar
3 arguments · 109 words per minute · 999 words · 546 seconds
Argument 1
Inclusive AI must be personal, local, and multilingual to serve everyone
EXPLANATION
Sushant frames the need for AI systems that are tailored to individual users, work in local contexts, and support many languages so that no community is left behind. He links this vision to the broader goal of making AI work for everyone.
EVIDENCE
He opens the session by asking how to develop a paradigm that makes AI work for everyone and introduces the session titled “The case for personal, local and multilingual AI” [1-4]. He later emphasizes real-world impact, population reach, and a vision of AI not governed by any single country or company [12-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for personal, local, multilingual AI is highlighted in discussions on linguistic diversity and offline, on-device models [S1], as well as broader calls for inclusive AI norms in the Global South [S24] and the role of multilingual AI in bridging access gaps [S30].
MAJOR DISCUSSION POINT
Personalized, locally relevant, multilingual AI
Argument 2
AI initiatives must move beyond pilots to real‑world impact that reaches whole populations.
EXPLANATION
Sushant stresses that India’s AI journey is no longer about experimental pilots or promises, but about delivering clear use cases at scale to the last mile, ensuring that AI benefits entire communities.
EVIDENCE
He notes that “India’s real journey is no longer about pilots or promises. It’s about populations’ reach, clear use cases, last mile delivery” and that this represents “real world impact” [12-13]. He also links this vision to a connected AI ecosystem that is not governed by a single country or company [15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Several reports stress the shift from pilot projects to population-scale deployment for real impact, e.g., AI for agriculture and climate-resilient systems [S27][S28], and inclusive AI agendas calling for deployment at scale [S24].
MAJOR DISCUSSION POINT
Shift from pilot projects to scalable, population‑wide AI deployment
Argument 3
Offline, on‑device AI processing is essential for last‑mile deployment and resilience.
EXPLANATION
Sushant highlights that the prototype operates entirely offline, with all inference happening locally, which makes the technology usable in remote areas or during connectivity outages.
EVIDENCE
He points out that “the best part is that the device is offline” and that “all those queries, all the AI processing was happening on the device” [98-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Offline, on-device inference is presented as a core requirement for last-mile AI in inclusive AI discussions [S1] and in the Bhashini stack description [S10], reinforced by calls for resilient, connectivity-independent AI [S24].
MAJOR DISCUSSION POINT
Importance of offline capability for inclusive AI
Ayah Bdeir
6 arguments · 163 words per minute · 1650 words · 606 seconds
Argument 1
Current AI’s mission is to build public‑interest, multilingual AI that can compete with dominant players
EXPLANATION
Ayah explains that Current AI was created as a public‑private partnership to develop AI that serves the public interest, especially in multilingual contexts, and to offer an alternative to the large, profit‑driven tech firms. The mission is to rally a global community to create open, vertically integrated AI.
EVIDENCE
She describes Current AI’s origin at the AI Action Summit, its public-private partnership model, and its aim to tackle public-interest AI at scale while matching the financial and ambition scale of dominant companies [135-141]. She also notes the collaborative approach and grant-making to support partners [140-143].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The mission aligns with global inclusive AI agendas emphasizing public-interest, multilingual solutions [S24], and the importance of multilingual AI for equitable access [S30]; collaborative India-France initiatives also underline competitive, public-good AI development [S33].
MAJOR DISCUSSION POINT
Public‑interest, multilingual AI mission
Argument 2
Current AI co‑develops with partners and releases outcomes as public goods
EXPLANATION
Ayah outlines a collaboration model where Current AI works closely with partners to identify shared priorities, builds technology together, and then releases the results as open, public‑good resources. This approach is meant to democratize AI development.
EVIDENCE
She states that Current AI works with partners to learn about interests, zero in on a collaboration, build together, and release as a public good [41-44]. She further emphasizes the partnership model, grant-making, and the goal of creating public-interest technology [140-143].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The partnership model mirrors described collaborative approaches that identify common gaps, co-create technology, and release it publicly [S32], and reflects the strategic role of open-source in fostering innovation [S35].
MAJOR DISCUSSION POINT
Co‑development and public‑good release
Argument 3
Open‑source hardware empowers community innovation like Linux
EXPLANATION
Ayah draws a parallel between open‑source hardware and the Linux operating system, arguing that an open platform enables anyone to innovate and build on top of it, fostering a vibrant ecosystem. She reflects on her 15‑year experience in open‑source hardware.
EVIDENCE
She recounts 15 years in open-source hardware, noting its power to let people do what they want, comparing it to Linux’s impact [210-213].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The analogy to Linux and the broader strategic imperative of open-source AI are discussed in analyses of open-source ecosystems [S35][S36][S37].
MAJOR DISCUSSION POINT
Open‑source hardware as an innovation catalyst
Argument 4
Proprietary embodied AI devices risk uncontrolled data collection and Western‑centric training
EXPLANATION
Ayah expresses concern that consumer‑facing embodied AI (e.g., glasses, robots, voice assistants) are often closed, collect data continuously, and are trained primarily on Western languages, creating privacy and cultural bias risks. She argues that hardware control is the first line of defense.
EVIDENCE
She outlines the proliferation of embodied AI devices, their unknown training data, continuous recording, and bias toward Western languages, citing recent announcements like Meta’s facial-recognition glasses [195-206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks of data-intensive generative AI and bias toward Western languages are highlighted in data-risk assessments [S38] and decolonisation of digital rights discussions [S39][S40]; privacy-preserving frameworks are also noted [S29].
MAJOR DISCUSSION POINT
Risks of closed embodied AI
DISAGREED WITH
Sushant Kumar, Andrew Tergis
Argument 5
Hope for cheaper, smaller, longer‑battery devices and distributed inference
EXPLANATION
Ayah envisions a future where the core AI hardware becomes more affordable, compact, and energy‑efficient, and can be networked together to provide distributed inference capabilities. She sees multiple pathways for improving the device and its applications.
EVIDENCE
She lists trajectories such as lowering cost, improving battery life, shrinking size, making the device beautiful, and creating mesh networks or larger stationary versions with solar power [215-221][222-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future hardware trajectories emphasizing lower cost, energy efficiency, and mesh networking are mentioned in offline multilingual AI visions [S1] and energy-intensity considerations for AI hardware [S42][S41].
MAJOR DISCUSSION POINT
Future hardware improvements and distributed AI
Argument 6
Aim to inspire community to create personal, multilingual AI solutions for diverse problems
EXPLANATION
Ayah concludes by urging participants to expand their imagination, build personal multilingual AI applications, and contribute to an open‑source ecosystem that can address a wide range of community needs. She frames the challenge as the beginning of a broader journey.
EVIDENCE
She calls for expanding imagination, making personal multilingual AI, and notes that the hardware could evolve into software or data solutions, emphasizing the start of a journey [425].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for community-driven multilingual AI solutions are echoed in multilingual inclusion workshops [S30][S43] and inclusive AI policy discussions [S24].
MAJOR DISCUSSION POINT
Inspiring community‑driven AI creation
Amitabh Nag
6 arguments · 164 words per minute · 814 words · 297 seconds
Argument 1
Bhashini aims to bring offline, language‑agnostic AI to the last mile
EXPLANATION
Amitabh describes Bhashini’s goal of delivering AI that works without connectivity, is portable, and can serve remote users, thereby extending AI reach to the “last mile.” He highlights the device’s small form factor and offline capability as key to inclusion.
EVIDENCE
He notes the device’s small size, portability, and offline operation, which enable use anywhere, especially at the last mile [163-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhashini’s offline, language-agnostic design is documented in the Bhashini stack overview [S10] and in broader inclusive AI narratives stressing offline capability for last-mile reach [S1][S24].
MAJOR DISCUSSION POINT
Offline, last‑mile AI delivery
DISAGREED WITH
Abhishek Singh
Argument 2
Bhashini supports 22+ languages, recently added the tribal Bheeli language without script
EXPLANATION
Amitabh reports that Bhashini currently covers 22 languages, with a total of 36 language models, and has recently digitized the tribal Bheeli language, which previously lacked a written script, demonstrating a commitment to linguistic breadth.
EVIDENCE
He lists coverage of 22 languages, 36 total including text languages, and the addition of the script-less tribal Bheeli language [170-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The expansion of language coverage, including low-resource and script-less languages, aligns with multilingual AI inclusion efforts described in community-centric language projects [S30] and multilingual inclusion sessions [S43].
MAJOR DISCUSSION POINT
Expanding language coverage
Argument 3
Plans to shrink form factor, expand language breadth, enrich models, and enable mesh networking
EXPLANATION
Amitabh outlines a roadmap that includes making the device even smaller, adding more languages, continuously enriching model quality, and connecting multiple devices in a mesh to create distributed inference capabilities. These steps aim to broaden inclusion and technical capability.
EVIDENCE
He discusses the near-final form factor, plans for smaller devices, language breadth expansion, model enrichment, and mesh networking for distributed inference [164-169][170-176][180-186][222-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future roadmap items such as smaller devices, broader language support, and mesh networking are discussed in offline multilingual AI visions [S1] and in collaborative India-France AI research plans [S33].
MAJOR DISCUSSION POINT
Future scaling and networking
Argument 4
Current operation handles 15 million daily inferences, showing scalability
EXPLANATION
Amitabh shares operational metrics indicating that Bhashini’s platform processes around 15 million inference requests per day on a 200‑GPU system, demonstrating that the solution can operate at large scale.
EVIDENCE
He mentions running about 15 million inferences a day with a 200-GPU system and real-time monitoring dashboards [128-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Scalable AI deployments moving beyond pilots are highlighted in agriculture and climate-resilient AI case studies [S27][S28].
MAJOR DISCUSSION POINT
Demonstrated scalability
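The reported figures lend themselves to a quick sanity check. Assuming, purely for illustration (the session does not say how load is distributed), that the 15 million daily inferences are spread evenly across the 200 GPUs:

```python
# Back-of-the-envelope check of Bhashini's reported scale [128-129].
# Assumption (not stated in the session): load is spread evenly across GPUs.

daily_inferences = 15_000_000      # ~15 million inference requests per day
gpu_count = 200                    # reported GPU system size
seconds_per_day = 24 * 60 * 60     # 86,400

per_gpu_per_day = daily_inferences // gpu_count        # inferences per GPU per day
per_gpu_per_second = per_gpu_per_day / seconds_per_day

print(per_gpu_per_day)               # 75000
print(round(per_gpu_per_second, 2))  # 0.87
```

At under one request per second per GPU on average, the figure suggests headroom for traffic peaks, though the actual distribution across models and hours of the day is not given in the session.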
Argument 5
Bhashini will provide quantization expertise and technical support for participants
EXPLANATION
Amitabh commits Bhashini’s team to continue supporting the innovation challenge by offering their quantization know‑how and technical assistance for model enrichment, ensuring participants can build on the prototype effectively.
EVIDENCE
He states that Bhashini will continue supporting quantization mechanisms and provide technical help for model enrichment as part of the joint effort [424].
MAJOR DISCUSSION POINT
Technical support for the challenge
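The session does not detail Bhashini's quantization mechanism; as a hedged illustration of the general technique only, a minimal symmetric int8 round-trip in pure Python (a stand-in, not Bhashini's actual code) looks like this:

```python
# Minimal sketch of symmetric int8 quantization: the general technique behind
# fitting models on-device. Hypothetical stand-in, not Bhashini's mechanism.

def quantize_int8(weights):
    """Map float weights onto the integer range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-127, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the integer codes."""
    return [c * scale for c in codes]

weights = [0.82, -1.57, 0.03, 2.54, -0.66]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)

# Rounding error is bounded by half a quantization step (scale / 2),
# which is why careful quantization can shrink models with little accuracy loss.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(max_err <= scale / 2 + 1e-12)  # True
```

Real deployments typically quantize per-channel and calibrate on sample data; the bounded-error intuition above is what the panel's "no accuracy trade-off" claim rests on.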
Argument 6
Continuous model enrichment and contextualisation are needed to deepen language coverage and improve accuracy.
EXPLANATION
Amitabh describes ongoing work to enrich existing language models with glossaries, contextual data, and domain‑specific knowledge, ensuring that the AI becomes more accurate and relevant for diverse use cases.
EVIDENCE
He mentions “we are looking at breadth, depth, offline form factor as the four things which will move forward” and details efforts such as building glossaries for 16 lakh place names and contextualisation activities [180-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Enriching models with community-sourced linguistic knowledge is emphasized in multilingual AI inclusion workshops [S30] and local solution initiatives [S43].
MAJOR DISCUSSION POINT
Depth and quality improvement of multilingual models
Andrew Tergis
4 arguments · 161 words per minute · 546 words · 202 seconds
Argument 1
Device runs inference offline, supports multiple models, and enables a vision‑impaired use case
EXPLANATION
Andrew demonstrates that the prototype can perform on‑device inference without internet, host several AI models, and power a specific application for vision‑impaired users that combines speech, translation, image understanding, and audio output. This showcases the device’s versatility and accessibility.
EVIDENCE
He explains that the device is designed for any user or use case, runs inference locally, and describes a vision-impaired application that uses ASR, translation, LLM, and TTS to answer questions in the user’s native language [53-56][58-62]. Later he notes that all AI processing happens offline and that four or five models are operational on the hardware [98-101].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Offline, multi-model inference for accessibility mirrors the offline multilingual AI prototypes described in inclusive AI discussions [S1] and the Bhashini offline stack [S10][S24].
MAJOR DISCUSSION POINT
Offline, multi‑model inference with accessibility use case
DISAGREED WITH
Ayah Bdeir, Sushant Kumar
Argument 2
Hardware built on Jetson but platform‑agnostic, allowing any model deployment
EXPLANATION
Andrew clarifies that while the current prototype uses the NVIDIA Jetson platform, the software architecture is not tied to it, enabling deployment of models on other hardware platforms in the future. This design choice promotes flexibility and broader adoption.
EVIDENCE
He states that the prototype is currently built on the Jetson platform (he says "Intel", though Jetson is NVIDIA hardware), but the processing does not depend on it, allowing support for other platforms [88-90].
MAJOR DISCUSSION POINT
Platform‑agnostic hardware design
Argument 3
Six‑week rapid build demonstrates feasibility of joint open‑hardware effort
EXPLANATION
Andrew highlights that the prototype was conceived, designed, and built within roughly five to six weeks, illustrating that a tight, collaborative effort between Current AI and Bhashini can quickly produce functional open‑hardware. This rapid timeline validates the collaborative model.
EVIDENCE
He notes that the project was completed in roughly five to six weeks and that this is the first collaborative build between the two organisations, now being demonstrated live [34-35][46-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rapid collaborative hardware development aligns with described partnership-driven co-creation models [S32] and the strategic role of open-source in accelerating innovation [S35].
MAJOR DISCUSSION POINT
Fast, collaborative hardware development
Argument 4
The prototype proves that AI can function in zero‑connectivity settings, enabling use in remote or disaster‑affected locations.
EXPLANATION
Andrew demonstrates that the device runs inference locally without any network connection, allowing users to run applications such as vision‑impaired assistance even when internet is unavailable.
EVIDENCE
He explains that the device “runs inference locally in their hand” and that “all those queries, all the AI processing was happening on the device” [55][98-99]. He also notes that the hardware is platform-agnostic, supporting deployment on various processors, which further enhances its suitability for offline use [88-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Deployments that function without connectivity are advocated in scaling AI for agriculture and climate resilience [S27][S28] and in inclusive AI frameworks for remote contexts [S24].
MAJOR DISCUSSION POINT
Offline AI for remote and emergency contexts
Shalindra Pal Singh
1 argument · 136 words per minute · 108 words · 47 seconds
Argument 1
Integrated pipeline (ASR → translation → LLM → TTS) retains accuracy thanks to quantization
EXPLANATION
Shalindra explains the end‑to‑end processing chain of the device, from automatic speech recognition through neural machine translation, large language model inference, and text‑to‑speech synthesis, and notes that careful quantization allowed the models to fit on‑device without sacrificing accuracy.
EVIDENCE
Shalindra describes the sequence of ASR, translation, LLM response, and TTS, and mentions that their quantization approach avoided the usual accuracy trade-off [70-72].
MAJOR DISCUSSION POINT
Quantized pipeline with maintained accuracy
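The chain Shalindra describes can be sketched as a straight composition of stages. This is an illustrative scaffold only; every stage function below is a hypothetical placeholder, not Bhashini's actual model interface:

```python
# Hedged sketch of the on-device chain: ASR -> translation -> LLM -> TTS.
# All stage functions are hypothetical placeholders, not real model calls.

def asr(audio: bytes) -> str:
    """Automatic speech recognition: user's speech -> text in their language."""
    return "where is the nearest water source"

def translate(text: str, src: str, dst: str) -> str:
    """Neural machine translation between a local language and English."""
    return text  # placeholder pass-through

def llm(prompt: str) -> str:
    """On-device quantized LLM answers the translated query."""
    return "The nearest water source is two kilometres north."

def tts(text: str, lang: str) -> bytes:
    """Text-to-speech: synthesize audio in the user's native language."""
    return text.encode("utf-8")  # placeholder for a waveform

def pipeline(audio: bytes, user_lang: str) -> bytes:
    query = asr(audio)
    english = translate(query, src=user_lang, dst="en")
    answer = llm(english)
    localized = translate(answer, src="en", dst=user_lang)
    return tts(localized, lang=user_lang)

out = pipeline(b"<mic capture>", user_lang="hi")
print(isinstance(out, bytes))  # True
```

Because each stage hands plain text to the next, each model can in principle be quantized and swapped independently, one reason a staged design suits constrained on-device deployment.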
Anne Bouverot
3 arguments · 157 words per minute · 1107 words · 422 seconds
Argument 1
Trusted third parties are needed to manage privacy‑preserving data sharing
EXPLANATION
Anne argues that for sensitive data—especially health data—privacy can be protected if a trusted, neutral third party holds and governs the data, enabling controlled sharing for research while preventing misuse. She stresses the need for institutional trust and clear governance mechanisms.
EVIDENCE
She discusses the necessity of a trusted third party to ensure privacy-preserving data sharing, giving examples of health data use, consent, and the balance between research benefit and potential commercial exploitation [330-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of trusted intermediaries for privacy-preserving data sharing is discussed in health-AI privacy frameworks [S29] and data-consent guidelines [S40].
MAJOR DISCUSSION POINT
Role of trusted intermediaries for privacy
DISAGREED WITH
Abhishek Singh
Argument 2
Artists need opt‑out and compensation mechanisms for cultural data used in AI
EXPLANATION
Anne points out that creators should retain control over their works when those works are used to train AI models, proposing opt‑out rights and compensation to reconcile cultural preservation with commercial exploitation. She highlights the tension between collective cultural representation and individual creator rights.
EVIDENCE
She notes that artists may demand compensation or the right to oppose the use of their creations in AI training, and suggests mechanisms such as opt-out and differentiated treatment for historical versus contemporary works [318-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for cultural data governance, opt-out rights, and compensation for creators appear in decolonising digital rights debates [S39] and data-rights discussions [S40].
MAJOR DISCUSSION POINT
Protecting artists’ rights in AI training data
Argument 3
Joint research on multilingual, resilient AI design strengthens both countries
EXPLANATION
Anne describes ongoing and future joint research initiatives between India and France on multilingual, resilient AI, emphasizing that collaboration leverages complementary strengths and contributes to shared innovation and global leadership in AI.
EVIDENCE
She references the year of joint innovation, collaborative work on multilingual AI, resilient design, and joint research as priority areas for India-France cooperation [391-396].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India-France collaborative AI research on multilingual, resilient systems is highlighted in partnership reports [S33] and inclusive AI cooperation agendas [S24][S30].
MAJOR DISCUSSION POINT
India‑France joint AI research
Abhishek Singh
7 arguments · 182 words per minute · 2010 words · 659 seconds
Argument 1
Diverse datasets must capture cultural practices and indigenous knowledge
EXPLANATION
Abhishek stresses that AI models need training data that reflect local customs, traditional knowledge, and indigenous practices to avoid mis‑classification and ensure culturally appropriate outcomes. He cites real‑world examples where lack of such data leads to errors.
EVIDENCE
He recounts a Netflix documentary showing tribal women annotating pest data, illustrating how local knowledge can differ from generic labels and why such cultural context must be incorporated into datasets [281-295].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of incorporating indigenous knowledge and culturally diverse data is emphasized in decolonising digital rights dialogues [S39] and community-driven multilingual AI projects [S30].
MAJOR DISCUSSION POINT
Incorporating cultural and indigenous knowledge into data
Argument 2
Communities should retain rights over their data and benefit from its utilization
EXPLANATION
Abhishek argues that data originating from communities must be governed by community standards, ensuring that the data is used ethically and that the community gains tangible benefits, whether through services or compensation.
EVIDENCE
He emphasizes the need for community involvement, standards rooted in local culture, and benefit sharing when data is used, especially to avoid commercial exploitation without community gain [299-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community data rights and benefit-sharing are discussed in privacy frameworks for health data [S29] and broader data-consent principles [S40], as well as in decolonisation contexts [S39].
MAJOR DISCUSSION POINT
Community data rights and benefit sharing
DISAGREED WITH
Anne Bouverot
Argument 3
Community‑driven standards are essential to protect privacy and ensure ethical use
EXPLANATION
Abhishek highlights that beyond technical safeguards, culturally informed community standards are required to safeguard privacy and guide ethical AI deployment, especially when data sensitivity varies across sectors.
EVIDENCE
He notes the importance of technical and community standards, citing the need for context-specific rules for health versus agricultural data, and the risk of misuse without proper standards [306-310].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for culturally grounded standards to safeguard privacy aligns with data-rights and decolonisation discussions [S39][S40].
MAJOR DISCUSSION POINT
Need for culturally grounded standards
Argument 4
Sovereignty means full control over the entire AI stack—from chips to applications
EXPLANATION
Abhishek defines AI sovereignty as a nation’s ability to control every layer of the AI ecosystem, from hardware (chips) through data centers, models, and applications, thereby avoiding dependence on external providers.
EVIDENCE
He outlines the five layers (energy, data centers, chips, models, applications) and asserts that full control over these layers constitutes sovereignty, noting that most countries lack such complete control [368-373].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI sovereignty and control over the full stack are examined in analyses of AI race dynamics and national self-reliance [S31] and in policy calls for sovereign AI development [S24].
MAJOR DISCUSSION POINT
Definition of AI sovereignty
Argument 5
Nations need independent AI infrastructure to avoid dependence on external providers
EXPLANATION
Abhishek observes that while India has many components (energy, data centers, models), it still lacks full end‑to‑end control, and calls for building domestic chip‑fabrication and procurement capabilities to achieve true AI independence.
EVIDENCE
He points out that no country fully controls the entire stack, describes India’s current capabilities and gaps (e.g., chip design, future fab), and argues that choosing one’s own hardware and procurement pathways is essential for sovereignty [374-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for national AI infrastructure and reduced dependence on foreign providers are featured in AI race and sovereignty literature [S31] and inclusive AI policy recommendations [S24].
MAJOR DISCUSSION POINT
Building national AI self‑reliance
Argument 6
India and France can combine complementary strengths to shape global AI norms
EXPLANATION
Abhishek highlights the long‑standing partnership between India and France, noting recent joint initiatives and expressing confidence that their combined expertise can help define inclusive, culturally aware AI standards for the world.
EVIDENCE
He references past collaborations, the France Action Summit, upcoming joint events, and the belief that the partnership can produce a model for global AI governance [401-403].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Strategic India-France AI collaboration is highlighted in joint innovation network reports [S33] and broader cooperative AI governance agendas [S24].
MAJOR DISCUSSION POINT
Strategic India‑France AI partnership
Argument 7
Launch of an open‑source challenge with prize funding to hack the prototype device
EXPLANATION
Abhishek announces the India AI Innovation Challenge, an open‑source competition offering prize money to developers who build applications or improvements on the Bhashini‑Current AI prototype, aiming to catalyze community‑driven innovation.
EVIDENCE
He details the challenge’s open-source nature, prize funding, submission timeline (starting 25 Feb), and its goal to inspire diverse solutions for remote or disaster-affected areas [418-424].
MAJOR DISCUSSION POINT
AI Innovation Challenge launch
DISAGREED WITH
Ayah Bdeir
Martin Tisne
1 argument · 221 words per minute · 1206 words · 326 seconds
Argument 1
Balancing open‑source innovation with controlled cultural data governance is crucial
EXPLANATION
Martin raises the question of how to reconcile the openness of open‑source AI development with the need for controlled governance of cultural data, emphasizing that both approaches must be balanced to protect community interests while fostering innovation.
EVIDENCE
He asks directly about the balance between open-source components and a more controlled approach to cultural data governance [329-330].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between open-source development and cultural data governance is discussed in open-source strategy analyses [S35] and decolonising digital rights debates on data control [S39][S40].
MAJOR DISCUSSION POINT
Tension between open‑source and cultural data control
DISAGREED WITH
Anne Bouverot, Abhishek Singh
Announcer
1 argument · 152 words per minute · 40 words · 15 seconds
Argument 1
The Global Innovation Challenge should involve key leaders to ensure broad participation and impact.
EXPLANATION
The Announcer calls on Abhishek Singh, Ayah, and Amitabh Nag to stay on stage and help launch the challenge, emphasizing that their involvement is crucial for a successful, inclusive competition.
EVIDENCE
The announcement says, “Abhishek Singh sir, I request you to stay on stage… And Aya, we’d love to have you on to launch the Global Innovation Challenge… And Amitabh Nag sir as well” [404-408].
MAJOR DISCUSSION POINT
Call for inclusive leadership in the innovation challenge
Device
1 argument · 113 words per minute · 11 words · 5 seconds
Argument 1
The device can identify objects and provide multilingual audio feedback, illustrating practical AI applications.
EXPLANATION
When queried, the device correctly lists the candy wrappers on the table, showing its ability to perform visual recognition and generate a spoken response in the user’s language.
EVIDENCE
The device responds, “The table has candy wrappers of Twix, Milky Way, and KitKat” [79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual, offline object recognition demos are cited in inclusive AI prototypes that run locally [S1] and in multilingual AI inclusion workshops emphasizing practical applications [S30].
MAJOR DISCUSSION POINT
Demonstration of multimodal, multilingual AI capability
Agreements
Agreement Points
Offline, on‑device AI is essential for last‑mile deployment and resilience
Speakers: Sushant Kumar, Andrew Tergis, Amitabh Nag, Ayah Bdeir
Sushant highlights that the device operates entirely offline, enabling use anywhere [98-99]. Andrew notes that inference runs locally on the handheld device [55][98-99]. Amitabh stresses the offline capability as a key inclusion factor [163-168]. Ayah envisions future devices that are low-cost, energy-efficient and can operate without constant connectivity [215-221].
All speakers emphasized that running AI inference locally without internet is crucial for reaching remote users and ensuring resilience [98-99][55][163-168][215-221].
AI must be multilingual and locally relevant to serve diverse communities
Speakers: Sushant Kumar, Ayah Bdeir, Amitabh Nag, Andrew Tergis, Abhishek Singh
Sushant frames the need for personal, local and multilingual AI [1-4]. Ayah describes Current AI’s mission to build public-interest multilingual AI [135-141]. Amitabh reports support for 22+ languages and addition of the tribal Bheeli language [170-176]. Andrew demonstrates a multilingual application for vision-impaired users [58-62]. Abhishek stresses the importance of cultural and linguistic data for AI accuracy [281-295].
The participants agreed that supporting many languages and tailoring AI to local contexts is vital to avoid leaving any community behind [1-4][135-141][170-176][58-62][281-295].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on linguistic diversity and local relevance mirrors the Inclusive AI discussion on why linguistic diversity matters and the need for culturally appropriate data, as highlighted in Inclusive AI_ Why Linguistic Diversity Matters [S58] and the data-governance models for African NLP ecosystems [S70].
Open‑source, collaborative development and release as a public good
Speakers: Ayah Bdeir, Andrew Tergis, Sushant Kumar, Amitabh Nag
Ayah outlines a co-development model that releases outcomes as public goods [41-44]. Andrew calls the prototype the first collaborative build between Current AI and Bhashini [45-46]. Sushant introduces the project as a seminal open-source AI hardware device [4]. Amitabh announces an open-source innovation challenge with technical support [424].
All highlighted that the hardware and software should be open-source, co-created with partners and made freely available as a public good [41-44][45-46][4][424].
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source as a public-good aligns with the balance between open-source development and community sovereignty noted in Responsible AI for Shared Prosperity [S57], the role of open-source tech in shaping global AI governance [S60], and calls for FOSS policy space for developing countries [S74].
Community/data sovereignty and benefit‑sharing are required for ethical AI
Speakers: Abhishek Singh, Anne Bouverot, Martin Tisne, Ayah Bdeir
Abhishek argues that communities must retain rights over their data and receive benefits [299-304]. Anne stresses the need for trusted third parties to manage privacy-preserving data sharing [330-342]. Martin raises the need to balance open-source innovation with controlled cultural data governance [329-330]. Ayah warns about uncontrolled data collection and Western-centric training in embodied AI [195-206].
There is a shared view that data originating from communities should be governed with rights, reciprocity and trusted oversight to protect privacy and cultural integrity [299-304][330-342][329-330][195-206].
POLICY CONTEXT (KNOWLEDGE BASE)
Community data sovereignty and benefit-sharing are reinforced by the African NLP data-governance framework [S70], the Responsible AI for Shared Prosperity analysis of community rights over cultural data [S57], and ethical AI policy tools referencing EU Trustworthy AI guidelines [S66][S68].
Future hardware should become cheaper, smaller, energy‑efficient and networkable
Speakers: Ayah Bdeir, Amitabh Nag
Ayah envisions lower cost, better battery life, smaller size and mesh networking for distributed inference [215-226]. Amitabh outlines plans for a smaller form factor and mesh networking of multiple devices [164-169][222-226].
Both speakers share a vision of evolving the device into a more affordable, compact, low-power platform that can be linked in a mesh for distributed AI [215-226][164-169][222-226].
POLICY CONTEXT (KNOWLEDGE BASE)
The push for affordable, energy-efficient hardware echoes the clean-tech scaling challenges and goals for cheaper, smaller devices described in Hardware for Good: Scaling Clean Tech [S62] and broader innovation spillover considerations [S61].
Similar Viewpoints
Both see offline, on‑device AI as essential for reaching users without connectivity [98-99][55].
Speakers: Sushant Kumar, Andrew Tergis
Sushant stresses offline operation as a key inclusion factor [98-99]. Andrew confirms that inference runs locally on the handheld device [55][98-99].
Both emphasize linguistic diversity as a core requirement for inclusive AI [135-141][170-176].
Speakers: Ayah Bdeir, Amitabh Nag
Ayah describes Current AI’s multilingual mission [135-141]. Amitabh reports support for 22+ languages and addition of tribal Bheeli [170-176].
Both advocate for community‑centric data governance that protects rights and ensures benefit sharing [299-304][330-342].
Speakers: Anne Bouverot, Abhishek Singh
Anne calls for trusted third parties to ensure privacy-preserving data sharing [330-342]. Abhishek argues that communities must retain rights and benefit from data use [299-304].
Both recognize a tension between openness and the need to protect cultural/creative rights [329-330][318-324].
Speakers: Martin Tisne, Anne Bouverot
Martin asks how to balance open-source AI with controlled cultural data governance [329-330]. Anne describes the tension between open innovation and artists’ rights to opt-out or be compensated [318-324].
Unexpected Consensus
Hardware control as a means to safeguard privacy and promote inclusive AI
Speakers: Ayah Bdeir, Sushant Kumar
Ayah warns that closed embodied AI devices collect data continuously and are trained on Western languages, posing privacy and cultural bias risks [195-206]. Sushant celebrates an open-source, offline hardware platform that puts control in users’ hands and avoids reliance on external providers [4][98-99].
Despite Ayah’s cautionary stance on existing proprietary devices and Sushant’s promotional tone for a new open device, both converge on the idea that controlling the hardware layer is crucial for privacy, data sovereignty and inclusive deployment [195-206][4][98-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Advocating hardware control for privacy reflects arguments for user control to prevent data misuse in surveillance debates [S67] and the policy notion of hardware inviolability [S72], which also ties into tech-sovereignty discussions [S73].
Overall Assessment

The discussion shows strong convergence among speakers on four pillars: (1) offline, on‑device AI for last‑mile reach; (2) multilingual, locally relevant AI; (3) open‑source collaborative development released as a public good; (4) robust community‑centric data governance and sovereignty. Additional shared visions include future hardware miniaturisation and mesh networking.

High consensus – the majority of participants align on the same strategic directions, indicating a solid foundation for coordinated policy and technical actions to advance inclusive, multilingual, and privacy‑preserving AI.

Differences
Different Viewpoints
How to balance open‑source AI development with cultural data governance and ownership
Speakers: Martin Tisne, Anne Bouverot, Abhishek Singh
Balancing open‑source innovation with controlled cultural data governance is crucial
Trusted third parties are needed to manage privacy‑preserving data sharing
Communities should retain rights over their data and benefit from its utilization
Martin asks how to reconcile the openness of open-source AI with the need for controlled governance of cultural data [329-330]. Anne responds that privacy-preserving data sharing should be overseen by trusted third parties and that artists need opt-out and compensation mechanisms [318-324][330-342]. Abhishek argues that community-driven standards and benefit-sharing are essential, emphasizing context-specific rules and community involvement [299-304][306-310]. All agree that cultural data needs protection but propose different mechanisms (trusted institutions vs community standards vs opt-out rights).
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between open-source AI and cultural data governance is a core issue in Responsible AI for Shared Prosperity [S57], the Inclusive AI focus on linguistic and cultural rights [S58], and the open-sovereignty framing in the Global AI Policy Framework [S75].
Approach to data sharing reciprocity and benefit‑sharing with communities
Speakers: Abhishek Singh, Anne Bouverot
Communities should retain rights over their data and benefit from its utilization
Trusted third parties are needed to manage privacy‑preserving data sharing
Abhishek stresses that data about a community must be governed by community standards and that the community should receive tangible benefits, especially in sectors like agriculture or health [299-304][306-310]. Anne emphasizes that a trusted, neutral third party should hold and govern data to ensure privacy while enabling research, suggesting institutional trust rather than direct community control [330-342]. Both aim for ethical data use but differ on whether control resides primarily with the community or with an external trusted entity.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on reciprocity and benefit-sharing draw on the African NLP data-governance model that emphasizes ownership vs consent and community benefit [S70] and ethical AI governance tools that stress benefit-sharing mechanisms [S66].
Definition and implementation of AI sovereignty
Speakers: Abhishek Singh, Amitabh Nag
Sovereignty means full control over the entire AI stack—from chips to applications
Bhashini aims to bring offline, language‑agnostic AI to the last mile
Abhishek defines AI sovereignty as complete national control over all five layers of the AI stack (energy, data centres, chips, models, applications) and calls for domestic chip fabrication and procurement [368-373][374-382]. Amitabh focuses on delivering an offline, portable device with expanding language coverage and model enrichment, without addressing full-stack control [163-168][170-176]. Their visions share the goal of self-reliance but differ on the scope: full technological independence versus targeted offline language solutions.
POLICY CONTEXT (KNOWLEDGE BASE)
The concept of AI sovereignty is explored in multiple policy analyses, including European Tech Sovereignty’s sovereignty-vs-openness debate [S73], the ‘open sovereignty’ approach in the Global AI Policy Framework [S75], and varied national perspectives on AI sovereignty [S76][S77].
Concern about embodied AI devices versus promotion of offline, on‑device AI
Speakers: Ayah Bdeir, Sushant Kumar, Andrew Tergis
Proprietary embodied AI devices risk uncontrolled data collection and Western‑centric training. Offline, on‑device AI processing is essential for last‑mile deployment and resilience. The device runs inference offline, supports multiple models, and enables a vision‑impaired use case.
Ayah warns that closed embodied AI (glasses, robots, voice assistants) continuously record data, are trained on Western languages, and pose privacy risks [195-206]. In contrast, Sushant highlights that the demonstrated prototype operates entirely offline, enabling use in remote or disaster settings [98-99][163-168]. Andrew reinforces the offline, multi-model capability of the device, emphasizing its suitability for zero-connectivity environments [55-56][98-99]. The disagreement lies in focus: Ayah cautions against closed, cloud-dependent AI, while the others champion offline, open hardware as a solution.
Details of prize funding for the Innovation Challenge
Speakers: Ayah Bdeir, Abhishek Singh
Ayah, speaking on Amitabh's behalf, says Bhashini is offering what she believes is a $110,000 prize to the winners. Launch of an open‑source challenge with prize funding to hack the prototype device.
Ayah mentions a possible $110,000 prize for winners of the challenge [425]. Abhishek announces the India AI Innovation Challenge, noting funding from Bhashini and Current AI but does not specify the amount, only describing “handsome reward” [419-424]. The mismatch in prize amount details reflects a disagreement or lack of alignment on the exact funding commitment.
Unexpected Differences
Prize amount ambiguity for the Innovation Challenge
Speakers: Ayah Bdeir, Abhishek Singh
Ayah, speaking on Amitabh's behalf, says Bhashini is offering what she believes is a $110,000 prize to the winners. Launch of an open‑source challenge with prize funding to hack the prototype device.
Ayah references a specific $110,000 prize figure, while Abhishek’s announcement mentions a “handsome reward” funded by Bhashini and Current AI without quantifying it, indicating an unexpected lack of alignment on the exact prize amount [425][419-424].
Different emphases on hardware control versus software openness
Speakers: Ayah Bdeir, Andrew Tergis
Open‑source hardware empowers community innovation like Linux. The device runs inference offline, supports multiple models, and enables a vision‑impaired use case.
Ayah stresses the strategic importance of open-source hardware as a foundational platform for innovation, while Andrew focuses on the specific functional capabilities of the prototype without explicitly addressing the broader hardware openness, revealing a subtle divergence in priorities that was not anticipated [210-213][55-56].
POLICY CONTEXT (KNOWLEDGE BASE)
The hardware-vs-software emphasis reflects the software and hardware inviolability solutions discussion [S72] and the broader sovereignty versus openness tension in European tech policy [S73], as well as the ‘open sovereignty’ third-way model [S75].
Overall Assessment

The discussion revealed several substantive disagreements: (1) the best mechanism for governing cultural data within open‑source AI (trusted third parties vs community standards vs opt‑out rights); (2) how reciprocity and benefit‑sharing should be structured; (3) the scope of AI sovereignty, ranging from full stack national control to targeted offline language solutions; (4) contrasting concerns about closed embodied AI versus promotion of offline, open hardware; and (5) unclear details on prize funding for the Innovation Challenge. While participants shared a common vision of inclusive, multilingual, and privacy‑preserving AI, they diverged on governance models, implementation pathways, and concrete funding commitments.

The level of disagreement is moderate to high: the core vision is shared, but the lack of consensus on data governance, sovereignty, and funding details could impede coordinated policy or collaborative actions unless reconciled. These disagreements highlight the need for clearer frameworks that balance open‑source innovation with cultural data rights and national AI autonomy.

Partial Agreements
All three agree that AI should be accessible and respect user privacy, but Ayah stresses avoiding closed, cloud‑dependent devices, while Sushant and Andrew promote an offline, open‑hardware solution as the means to achieve that goal [195-206][98-99][55-56].
Speakers: Ayah Bdeir, Sushant Kumar, Andrew Tergis
Proprietary embodied AI devices risk uncontrolled data collection and Western‑centric training. Offline, on‑device AI processing is essential for last‑mile deployment and resilience. The device runs inference offline, supports multiple models, and enables a vision‑impaired use case.
Both emphasize ethical data handling and the need for safeguards, yet Anne proposes institutional trusted intermediaries, whereas Abhishek advocates community‑driven standards and benefit‑sharing mechanisms [330-342][299-304].
Speakers: Anne Bouverot, Abhishek Singh
Trusted third parties are needed to manage privacy‑preserving data sharing. Communities should retain rights over their data and benefit from its utilization.
Takeaways
Key takeaways
Personal, local, multilingual AI is essential for inclusive access; AI must work offline and be adaptable to any language or community.
Current AI’s mission is to create public‑interest, multilingual AI that can compete with dominant proprietary platforms.
Bhashini’s prototype demonstrates offline, on‑device inference with a full pipeline (ASR → translation → LLM → TTS) and supports vision‑impaired use cases.
Quantization techniques allowed a high‑fidelity LLM to run on a handheld device without noticeable loss of accuracy.
The hardware is built on Jetson but is platform‑agnostic, enabling any model deployment and future form‑factor reductions.
Collaboration between Current AI and Bhashini follows an open‑source philosophy: co‑develop, release as a public good, and empower community hacking.
Language coverage is expanding (22+ languages now, 36 total including tribal Bheeli) and will continue to grow in breadth and depth.
Privacy and data sovereignty are critical; uncontrolled embodied AI poses risks of surveillance and Western‑centric bias.
Open‑source hardware is likened to Linux – it provides a neutral foundation that prevents lock‑in to any single vendor.
Scalability is already proven (15 million daily inferences) and future plans include smaller devices, mesh networking, and solar‑powered micro‑data‑centers.
Reciprocity and community rights over data are necessary; cultural creators need opt‑out and compensation mechanisms.
AI sovereignty means full control over the entire stack (chips, models, data, applications) for nations and communities.
The India–France partnership is seen as a model for joint research, resilient multilingual AI, and shaping global AI norms.
Resolutions and action items
Launch of the India AI Innovation Challenge – an open‑source competition to hack the Bhashini‑Current AI prototype, with submissions opening 25 Feb and prize funding from both organisations.
Bhashini commits to provide quantization expertise and technical support for challenge participants.
Current AI will continue to release the prototype hardware and software as public‑good assets and encourage community‑driven extensions.
Both organisations will pursue further reduction of device form‑factor, battery improvements, and mesh‑network capabilities.
Commitment to expand language coverage (including tribal languages) and enrich existing models with contextual glossaries.
Agreement to explore joint India‑France research initiatives on multilingual, resilient AI and to contribute to global AI governance norms.
Unresolved issues
Exact standards and mechanisms for community‑driven data governance and how to enforce opt‑out/compensation for cultural creators.
How to balance open‑source hardware innovation with controlled, privacy‑preserving data sharing at scale.
Long‑term funding and manufacturing pathways to bring the device to mass‑market pricing.
Specific policies or regulatory frameworks needed to ensure AI sovereignty without fragmenting global interoperability.
Details of how third‑party trusted entities will be selected and governed for privacy‑preserving data stewardship.
Concrete metrics for measuring the impact of multilingual AI on marginalized communities beyond the demo.
Suggested compromises
Adopt an open‑source hardware base while applying controlled access to culturally sensitive datasets, allowing community opt‑out and compensation.
Use trusted third‑party institutions to manage privacy‑preserving data sharing, balancing openness with individual/collective rights.
Combine open‑source innovation (e.g., hackable prototype) with public‑good licensing that mandates any commercial derivative to contribute back to the ecosystem.
Implement mesh networking and solar‑powered nodes to reduce reliance on proprietary cloud services while still enabling scalable inference.
Thought Provoking Comments
India’s real journey is no longer about pilots or promises. It’s about populations’ reach, clear use cases, last‑mile delivery – a connected vision for AI not governed by any one country or one company.
Frames AI development as a collective, inclusive effort rather than a proprietary race, setting a collaborative tone for the entire session.
Established the overarching theme of the discussion, prompting speakers to emphasize partnership, open‑source models, and multilingual inclusivity throughout the conversation.
Speaker: Sushant Kumar
Current AI was born out of the AI Action Summit… a public‑private partnership with a mission to create AI for the public interest, working with partners to identify gaps, develop technology together and release it as a public good.
Clarifies the strategic purpose behind Current AI’s involvement, highlighting a model of co‑creation and open‑source ethos that contrasts with typical corporate AI development.
Guided the dialogue toward how collaborations can be structured, influencing later remarks about open hardware, community ownership, and the launch of the India AI Innovation Challenge.
Speaker: Ayah Bdeir
I’m concerned about this new frontier of embodied AI… devices that continuously record, send data to the cloud, are trained on Western languages, and lock up innovation behind proprietary hardware – similar to how the iPhone created a hardware lock‑in.
Raises a critical ethical and technical risk of AI proliferation, linking privacy, cultural bias, and hardware monopoly in a single, compelling argument.
Shifted the conversation from showcasing technology to questioning its societal implications, leading to deeper discussion on data sovereignty, privacy‑preserving designs, and the need for open, controllable hardware.
Speaker: Ayah Bdeir
We have already digitized a tribal language, Bheeli, which has no script, and we aim to cover 22‑plus languages now, expanding to 36, ensuring no language or community is left behind.
Demonstrates concrete progress in linguistic inclusion, moving the abstract idea of multilingual AI into tangible achievements and highlighting the importance of preserving minority languages.
Prompted follow‑up questions about breadth vs. depth of language coverage, reinforced the theme of cultural preservation, and inspired optimism about scaling the platform to underserved communities.
Speaker: Amitabh Nag
I’m concerned about embodied AI… but I also see hope: open hardware can be like Linux – you can shrink size, improve battery, mesh devices, create micro‑data‑centers, and build applications for farmers, kids, tourists… the possibilities are infinite.
Balances the earlier warning with a visionary outlook, illustrating how open, modular hardware can democratize AI and foster endless innovation across sectors.
Catalyzed the discussion on practical use‑cases, sparked excitement about future applications, and set the stage for announcing the India AI Innovation Challenge.
Speaker: Ayah Bdeir
Should there be a set norm for AI, similar to French radio quotas that require a percentage of French music and film, to ensure cultural representation and funding for local creators?
Introduces the policy dimension of AI governance, drawing a parallel with existing cultural protection mechanisms and questioning how similar safeguards could be applied to AI.
Opened a new line of debate on regulatory frameworks, leading participants to discuss reciprocity, compensation for creators, and the balance between open source and cultural rights.
Speaker: Anne Bouverot
Sovereignty in AI means having complete control over all five layers – energy, data‑center, chips, models, applications – so no external entity decides for us. India is progressing but still lacks full stack control.
Provides a clear, layered definition of AI sovereignty, linking national security, economic independence, and technological autonomy.
Steered the conversation toward strategic national considerations, influencing later remarks on Indo‑French collaboration and the need for diversified, sovereign AI ecosystems.
Speaker: Abhishek Singh
What is the world you would like us to live in when AI and culture get it right? If we get it right, what does it look like in five or ten years?
A provocative, forward‑looking question that reframes the discussion from technical demos to long‑term societal vision.
Prompted panelists to articulate aspirational goals, linking technical work to broader cultural preservation, inclusivity, and policy, thereby deepening the conversation’s scope.
Speaker: Martin Tisne
Overall Assessment

The discussion was anchored by a series of pivotal remarks that moved it from a product showcase to a nuanced debate about the future of AI in society. Sushant’s opening set a collaborative agenda, which was fleshed out by Ayah’s articulation of Current AI’s public‑good mission and her warnings about embodied AI’s privacy and bias risks. Amitabh’s concrete example of digitizing a tribal language and Anne’s policy analogy introduced the cultural‑preservation and regulatory dimensions. Ayah’s hopeful vision of open hardware and the launch of the India AI Innovation Challenge turned concerns into actionable pathways. Finally, Martin’s visionary question and Abhishek’s definition of AI sovereignty broadened the dialogue to include long‑term societal and geopolitical implications. Together, these comments redirected the conversation from a technical demo to a strategic, inclusive, and ethically grounded roadmap for personal, local, multilingual AI.

Follow-up Questions
How can we involve communities in data sharing and ensure they have rights over their data?
Ensures ethical AI development and community empowerment by giving data contributors agency over their information.
Speaker: Martin Tisne (to Abhishek Singh)
Should there be reciprocity where communities benefit from the use of their data?
Addresses fairness and creates incentives for communities to contribute data if they receive tangible benefits.
Speaker: Martin Tisne (to Abhishek Singh)
How can we balance open‑source AI components with controlled data governance for cultural data?
Seeks a framework that preserves openness while protecting cultural heritage and respecting ownership.
Speaker: Martin Tisne (to Anne Bouverot)
How should open‑source AI be balanced with cultural‑data considerations?
Looks for practical ways to keep AI tools open while safeguarding culturally sensitive datasets.
Speaker: Martin Tisne (to Abhishek Singh)
What is the definition of sovereignty in the context of AI?
Clarifies national, community, and individual control over the full AI stack, informing policy and strategy.
Speaker: Martin Tisne (to Abhishek Singh)
What opportunities exist for France and India to jointly develop global norms for culturally inclusive AI?
Identifies avenues for bilateral cooperation to set standards that protect cultural diversity in AI systems.
Speaker: Martin Tisne (to Anne Bouverot and Abhishek Singh)
How can language coverage be expanded to include more languages, especially tribal languages without script?
Ensures no language is left behind, supporting linguistic diversity and inclusion.
Speaker: Amitabh Nag
How can models be enriched with contextual data such as place‑names from the Survey of India?
Improves model relevance and accuracy by incorporating localized geographic knowledge.
Speaker: Amitabh Nag
What privacy‑preserving, trusted‑third‑party platforms are needed for sharing sensitive data (e.g., health data) while enabling public benefit?
Balances individual privacy with societal gains, requiring robust governance and trust mechanisms.
Speaker: Anne Bouverot (and Abhishek Singh)
What technical and community standards are required for data sharing that respect cultural belief systems?
Creates ethically sound frameworks for data use that align with local customs and values.
Speaker: Abhishek Singh
How can mesh networks of devices be built to enable distributed inference and larger workloads?
Extends the capability of low‑power hardware, allowing scalable AI processing in offline or remote settings.
Speaker: Ayah Bdeir
How can the device’s cost, battery life, size, and aesthetics be improved to increase accessibility?
Reduces barriers to adoption, especially in low‑resource environments.
Speaker: Ayah Bdeir
What novel application domains (e.g., agriculture, toys, tourism) can be built on the open‑source hardware platform?
Demonstrates the versatility of the platform and its potential societal impact across sectors.
Speaker: Ayah Bdeir
How does multilingual AI affect cultural preservation and prevent language loss?
Addresses concerns that dominant languages may erode minority languages without targeted AI support.
Speaker: Ayah Bdeir
What are the trade‑offs in model quantization and how can accuracy be maintained?
Technical optimisation is crucial for fitting high‑fidelity models on edge devices without degrading performance.
Speaker: Shalindra Pal Singh
How can the prototype be scaled to production, including exploring hardware platforms beyond Jetson?
Ensures the solution can move from demo to widespread deployment across varied environments.
Speaker: Andrew Tergis
What grant mechanisms and funding models can sustain public‑good AI projects?
Provides financial support for ongoing development, community contributions, and open‑source maintenance.
Speaker: Ayah Bdeir
How can real‑world impact be measured and monitored (e.g., inference counts, latency dashboards)?
Enables performance tracking, informs improvements, and demonstrates value to stakeholders.
Speaker: Amitabh Nag
What legal frameworks are needed to protect cultural content in AI (e.g., analogous to French media quotas)?
Policy tools can ensure cultural representation and prevent homogenisation by dominant players.
Speaker: Anne Bouverot
How can artists retain rights, opt‑out, or receive compensation when their works are used to train AI models?
Addresses the tension between open data use and creators’ intellectual‑property interests.
Speaker: Anne Bouverot
How can an open‑source hardware platform be leveraged to foster hackathons and sector‑specific solutions?
Encourages community‑driven innovation, leading to diverse applications and rapid iteration.
Speaker: Abhishek Singh (India AI Innovation Challenge)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How AI Drives Innovation and Economic Growth

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how artificial intelligence can both accelerate development and deepen inequalities, focusing on emerging economies [6-7]. Johannes highlighted AI’s capacity to boost productivity in sectors such as agriculture, health, and finance, noting that 15-16% of South Asian jobs show strong AI complementarity [12-14][15-19]. He warned that automation may eliminate entry-level, knowledge-based positions and that many low-income countries lack basic infrastructure like reliable electricity and internet [22-24][26-30]. To address these gaps, the World Bank promotes “small AI”: affordable, locally relevant applications that function with limited connectivity and skills [34-36].


India was presented as a leading example, with its digital identity system, AI-enabled farmer tools, and AI-generated weather forecasts that reached 38 million farmers and improved planting decisions [39-41][133-154]. The Bank’s role is advisory, helping governments create sandbox environments and ensuring data reliability, while private firms develop the actual apps [49-53]. Ufuk distinguished a high-barrier foundational layer (compute, data, talent) that tends toward concentration from a low-barrier application layer that fuels creative destruction [86-94][95-98]; without a competitive foundational layer, the benefits of AI applications may be limited, especially in developing economies [97-100].


Anu emphasized the difficulty of regulating AI, citing the EU’s rights-based AI Act as a model but noting that India can adapt such frameworks to its own priorities [168-176][177]. Michael stressed that AI for public goods, such as AI-driven weather forecasts and digital IDs, requires government and multilateral investment because the private sector lacks incentives [129-132][145-152]. He proposed evidence-based innovation funds and a four-stage evaluation framework (model performance, user impact, scalability, continuous improvement) to guide effective deployment [266-274][284-292].


Panelists agreed that hype must be tempered; Iqbal warned that trust gaps and institutional inertia can prevent promising tools from scaling, as seen in a GST fraud-detection pilot that was halted [309-318]. Both Ufuk and Iqbal highlighted rising market concentration and the migration of talent from academia to large incumbents as a systemic risk to inclusive innovation [325-332][342-348]. The discussion concluded that AI offers transformative potential in health, education, and agriculture, but realizing these gains will depend on proactive policy, robust regulation, and safeguards against job loss and concentration [383-386][394-398]. Overall, the panel underscored that coordinated public-private effort and careful governance are essential to ensure AI narrows rather than widens development gaps [57][414-416].


Keypoints

Major discussion points


AI as a development catalyst for emerging markets – The speakers highlighted AI’s capacity to “fundamentally reshape… economies and societies” and to “leapfrog longstanding development challenges” by complementing 15-16 % of jobs in South Asia and helping farmers, nurses, and financial institutions [6-12][15-20][34-38].


Infrastructure, skills, and job-displacement risks – While AI offers gains, the panel warned that many developing countries lack “reliable electricity,” “internet backbone,” and basic “literacy and numeracy” to use it, and that “entry-level… knowledge-based” jobs are already being reduced [21-30].


Coordinated policy and multilateral support – The World Bank’s focus on “small AI,” advisory work, and sandbox environments, together with government-backed digital ID and AI-driven weather forecasts, were presented as concrete policy levers; Michael Kremer added that “innovation funds” and evidence-based financing can bridge gaps where the private sector will not [49-53][124-132][266-274].


Governance, regulation, and AI sovereignty – Anu Bradford stressed the need for the Global South to craft its own AI rules, noting the EU’s “rights-driven” approach and the broader geopolitical contest between the US, China, and other powers that shapes who sets the rules [165-176][357-363].


Market concentration and labor-market implications – Ufuk Akcigit warned that the “foundational layer” of AI is “compute-heavy… talent-heavy,” fostering concentration, while Iqbal Dhaliwal presented evidence of rising market concentration, incumbent dominance in innovation, and the migration of talent from academia to industry, all of which could exacerbate inequality [75-100][322-330][342-350].


Overall purpose / goal of the discussion


The panel was convened to examine whether AI will narrow or widen the development gap between advanced and emerging economies, to share concrete use-case experiences (e.g., agriculture, health, education), and to identify the policy, regulatory, and institutional actions needed to harness AI’s benefits while mitigating its risks for inclusive growth.


Overall tone and its evolution


– The conversation opened with an optimistic, forward-looking tone, emphasizing AI’s transformative promise.


– It then shifted to a cautious, problem-focused tone, acknowledging infrastructure deficits, job losses, and governance challenges.


– Mid-discussion the tone became pragmatic and solution-oriented, detailing concrete policy tools, public-private collaborations, and multilateral initiatives.


– Towards the end, the tone grew more critical and reflective, stressing concentration risks, labor market threats, and the need for careful regulation.


– The final rapid-fire segment blended hopeful optimism about sectoral gains (health, education) with warnings about concentration and governance failures, ending on a balanced but vigilant note.


Speakers

Jeanette Rodrigues


– Role/Title: Moderator / Host of the panel discussion


– Areas of Expertise: AI policy, development economics, panel facilitation


Johannes Zutt


– Role/Title: World Bank representative (referred to as “John” in the discussion), Regional Vice President of the World Bank Group


– Areas of Expertise: AI applications in development, agriculture, health, and finance; emerging market impacts of AI


Anu Bradford


– Role/Title: Policy researcher/analyst focusing on AI regulation and governance (affiliated with research institutions on AI policy)


– Areas of Expertise: AI regulatory frameworks, AI sovereignty, comparative analysis of EU, US, and Indian AI policies


Ufuk Akcigit


– Role/Title: Macroeconomist (academic researcher)


– Areas of Expertise: Creative destruction, AI’s impact on economic growth, foundational vs. application layers of AI, concentration in AI markets


Iqbal Dhaliwal


– Role/Title: Global Director, J‑PAL at MIT


– Areas of Expertise: Impact evaluation, education technology, AI interventions in public services, evidence‑based policy


Michael Kremer


– Role/Title: Economist, Nobel laureate, senior researcher in development economics (affiliated with Harvard University and the World Bank)


– Areas of Expertise: Development economics, AI for public goods, AI in agriculture, health, education, and policy design


Additional speakers:


– None identified beyond the listed speakers.


Full session report: Comprehensive analysis and detailed insights

The panel opened with Jeanette Rodrigues introducing the session and handing the floor to Johannes Zutt, who described artificial intelligence (AI) as a technology that is “fundamentally reshaping our world” and driving a “structural transformation with profound implications for economies and societies” [??]. He called AI a “game‑changer” for emerging markets, offering a chance to “leap‑frog longstanding development challenges” [??]. Zutt cited recent World Bank work in South Asia showing that roughly 15‑16 % of jobs have strong complementarity with AI, meaning AI can boost workers’ skills and productivity [??]. He illustrated this with sector‑specific examples: AI helps farmers detect pests and diseases, assists nurses in diagnosing unfamiliar ailments, and enables financial institutions to better assess borrowers’ creditworthiness [??].


Ufuk Akcigit then presented a structural lens, distinguishing a foundational layer (compute‑heavy, data‑heavy, talent‑heavy) from an application layer where entry barriers are low [??]. He warned that the foundational layer is “highly concentration‑prone” and that its dynamics will spill over to the application layer, potentially limiting AI benefits for developing economies [??]. Akcigit called for early‑indicator monitoring to anticipate how creative destruction will unfold in both advanced and emerging markets [??].


Michael Kremer followed with development‑oriented use cases. He highlighted the World Bank’s digital identity programme and a digital‑payments platform that provide a solid foundation for AI deployment [??]. He described AI‑enhanced weather forecasts that reached 38 million Indian farmers, improving planting decisions and demonstrating measurable uptake [??]. Kremer added two further concrete examples: (i) automated traffic‑camera systems that improve road safety, and (ii) the “HAB” AI‑driven driver‑license testing programme (Microsoft Research India) that reduced unsafe‑driver ratings by 20‑30 % [??]. He warned that existing public‑sector procurement systems can create lock‑in risk, limiting competition and stifling innovation [??].


Anu Bradford then turned to governance and sovereignty. She argued that the Global South must pursue its own AI sovereignty, crafting rights‑based regulatory frameworks inspired by the EU’s AI Act to protect fundamental rights and broaden the distribution of AI benefits [??]. Bradford listed four structural reasons the EU lags behind the US: (a) the absence of a digital single market, (b) a weak capital‑markets union, (c) a risk‑averse legal and cultural environment, and (d) challenges in attracting talent [??]. She cautioned, however, that full sovereignty is constrained by the global AI supply chain—high‑end semiconductors are designed in the United States, manufactured in Taiwan, equipped by Dutch firms, and rely on raw materials from China—so any techno‑nationalist approach must balance interdependence [??].


Iqbal Dhaliwal presented a “small‑AI” school example and a GST‑fraud‑detection pilot. He explained that an AI model raised the detection rate of bogus firms from 38 % to 55 %, but the government refused to scale it because the model removed human discretionary power, highlighting the “power” and “institutional alignment” dimensions of AI deployment [??]. Dhaliwal also shared upstream evidence of market concentration: (i) rising US market concentration since 1980, (ii) an increasing share of innovative resources residing in firms with more than 1,000 employees, and (iii) soaring earnings of the top 1 % of AI scientists in academia and industry, with break‑points in 2012 and 2017 [??].


Returning to Michael Kremer, he advocated for evidence‑based innovation funds such as Development Innovation Ventures, which provide tiered financing—small grants for pilots, larger grants for rigorous testing, and further funding for scaling successful solutions [??]. Kremer outlined a four‑stage evaluation framework—model performance, user impact, scalability, and continuous improvement—to ensure AI interventions deliver real‑world benefits and can be iteratively refined [??]. He emphasized the need for public‑sector investment in AI‑driven public goods that the private market will not fund, citing again the weather‑forecast example that reached millions of farmers [??].


In the subsequent discussion, Iqbal warned that unchecked market concentration could become a regrettable legacy if not addressed [??], and cautioned that over‑reliance on generative models might make humanity “dumber” by outsourcing critical thinking [??]. Ufuk Akcigit highlighted labour‑market shocks, especially the rapid erosion of entry‑level coding jobs that underpin India’s tech hubs, and called for competition‑friendly regulation, access to finance, and entrepreneurship‑supporting policies [??]. Johannes Zutt concluded with a rapid‑fire remark that “we may have the tools to target poverty reduction on individuals,” underscoring the need for coordinated policy [??].


Points of consensus emerged across the panel: (i) AI can be a powerful catalyst for development in agriculture, health, finance, education, and other public‑good services; (ii) “small AI”—affordable, locally‑relevant applications that can operate despite limited connectivity, data, and device capability—is essential for low‑resource settings; (iii) coordinated public‑private collaboration and robust governance are required to prevent misuse when targeting individuals for poverty reduction [??].


Key disagreements centered on three axes: (1) the primary mechanism to harness AI—Zutt championed technical deployment of small AI and sandbox support, Akcigit argued that broader business‑environment reforms are needed, and Kremer advocated staged, evidence‑based funding [??]; (2) AI sovereignty versus interdependence—Bradford stressed the limits imposed by global supply chains, while Zutt reported that Indian governments have an “AI for all” objective [??]; (3) regulatory approach—Zutt advocated pragmatic standards and sandbox environments, whereas Bradford called for a comprehensive, rights‑based framework modeled on the EU AI Act [??].


Overall, the panel concluded that AI offers a historic opportunity to accelerate development, but real‑world impact hinges on proactive policy, robust rights‑based regulation, evidence‑based scaling mechanisms, and measures to mitigate concentration and labour displacement. A multi‑track, balanced approach—combining affordable small‑AI pilots, advisory standards and sandboxes, staged funding, rights‑based regulatory frameworks, and policies preserving competition in the foundational AI layer while guarding against geopolitical supply‑chain vulnerabilities—will be needed to turn AI’s promise into inclusive, sustainable development for emerging economies [??].


Session transcript
Complete transcript of the session
Jeanette Rodrigues

all around the Bharat Mandapam. So once again, thank you very much for your time this afternoon and for choosing us to have a conversation with. To start off, I would like to introduce John, who will make some opening comments for the World Bank.

Johannes Zutt

So thank you very much, Jeanette. It’s a great pleasure to be here speaking to all of you this afternoon. Over the past week, we’ve heard from a lot of world leaders, tech leaders, experts from across many, many countries about how AI is fundamentally reshaping our world, presenting not just a technological shift but a structural transformation with profound implications for economies and societies everywhere. For emerging markets and developing economies, as for all economies, AI could be a game changer. So sorry, that probably helps. I thought the mics were on. So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.

It offers clear opportunities to enhance growth and productivity. We recently did some work in South Asia at the World Bank Group to see what sort of impact AI was having on jobs in the region, and we found that approximately 15 or 16 percent of jobs here have strong complementarity with AI. AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide. It also helps, you know, very, very diverse groups of people in many, many different sectors of the economy.

It helps farmers to identify pests on their crops, diseases in their crops, and also how to address them. It helps nurses to identify the ailments and illnesses that their patients may be suffering, particularly the ones that they’re not very familiar with, but that they can research using appropriate AI applications. It helps financial institutions to understand better the ability of borrowers to take on loans, which, of course, expands the ability of the borrower to expand his or her business. So there’s clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.

Of course, at the same time, on the flip side, AI also creates a number of challenges. One of them is there will be some job losses, particularly entry-level jobs that are very much knowledge- or document-based, performing relatively rote work that can be taken over by automation. And we’re actually seeing this in the World Bank Group. We went and looked at the types of jobs that we are advertising these days compared to a couple of years ago, and what we found is that in that layer, sort of at the bottom of the professional classes inside the Bank Group, there are just fewer of those types of jobs being advertised in the World Bank Group today than there were a few years ago.

At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use. They may not have reliable electricity. We can start with that very basic one. They may not have an internet backbone that’s sufficiently strong. People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher end devices. They may need to use very, very basic devices, not even smartphones, and rely on voice communication, asking a question and hearing a response. So there may be struggles of that kind in developing countries and emerging markets.

And I’m not even talking about all the governance and regulatory safeguards that can also come into play. So the question, of course, is how can emerging economies and developing markets harness the potential of AI and avoid the pitfalls? And for us in the World Bank Group, we’ve been very, very focused recently on basically small AI. Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, and infrastructure are fairly limited. And this is extremely important in countries like India where all of those conditions can apply. And yet there’s tremendous potential for people to grow their productivity if they have timely access to information of the right kind, in their local language, tailored to their specific circumstances.

So that’s what we are trying to do in South Asia today, and across the globe actually. And this is really about some of the examples that I mentioned earlier: having bespoke applications that help farmers to do very basic investigation of the types of issues that they’re facing, using their phone to analyze what’s going on, to identify it, to find out how to address it, even to find out who within their local area, in their market space, can help them by providing the tools or the products that are necessary to address whatever they’re running into. So India, of course, is a very strong example of what’s possible. India has been a leading country in digital innovation for quite some time. After the United States and China, it has the largest, if you like, digital universe in the world today. It’s got some very good foundations: there’s the digital identity program as well as the digital payment platform that currently exists.

There are lots of Indian firms that are innovating in AI, including in the small AI applications that I’ve been talking about. And the governments of India have an objective of ensuring that there is AI for all. So they are very, very aware of the challenges that need to be overcome to make AI accessible to a very, very broad spectrum of the population and not just the very rich that, to some extent, need assistance the least, right? It’s the poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with and have not been using that much in the past. So we’re working in India.

We’re working in a lot of different states, Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana, on these different aspects, working with governments on the foundational elements: interoperability, making sure that accessibility is possible, that programs can run offline, as it were, so that people who aren’t able to get online all the time can benefit, and so on. And then we’re also working with private sector investors who are developing apps. I mean, we’re not actually developing many apps ourselves. That’s not really our comparative advantage. Our comparative advantage as the World Bank Group is to do the more advisory work, make sure that the backbone information that’s embedded in the application is reliable and trustworthy, because of course that’s critical for ensuring successful uptake.

But we are helping governments to create the space that enables experimentation in AI sandboxes, to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive. So I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public-facing effort to address the standards and the other issues, the interoperability and so on that I mentioned before, but also a private-sector-facing effort, because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.

We’re doing a little bit on bigger AI. There’s obviously a connection between the two. Big AI can, through computational power, generate new knowledge that can help us to do things that we haven’t done so well in the past much, much better. But for countries like India, translating that into small AI will also be very, very important for uptake. So I’m looking forward to hearing from all the distinguished speakers in this panel about their thoughts on what’s happening today in this sector. So thank you very much.

Jeanette Rodrigues

Thank you very much, John. John spoke about, of course, the use cases for AI, and on the other side of the spectrum we have the large language models, we have the foundational AI. But no matter where you sit on the spectrum, no matter where your interests lie, innovation never disperses and never diffuses equally. Today on this panel, I hope to unpack what determines whether AI narrows the development gap or whether it widens it. Especially, we are looking to talk about the real world: what should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI? Before I start, just setting the stage.

To a man, to a woman, everybody I spoke with who has attended these summits, from the first AI summit to today. This is, I think, the fourth AI summit being held; the first one was held in the UK. And without exception, all of them made it a point to tell me how the first session was full of fear. It was, oh my God, AI is this terrible technology which is going to steal all our jobs, make us redundant. And when they come to India, they see the hope that technology and AI brings. And that’s the spirit of the discussion this afternoon: to figure out how we can balance both of those extremes, hope and concern, and go ahead in a pragmatic, policy-first way to prepare for the real world.

So if I could start with you, Ufuk: how do you think about AI? And especially, where do you see areas of creative destruction to foster the innovation that we need?

Ufuk Akcigit

Thank you very much. And so, of course, creative destruction is an important driver of economic growth in the long run. So that’s why, you know, it’s an interesting question how AI will affect creative destruction in general. Of course, we are at a very early phase of AI, and it’s a GPT, a general-purpose technology. And typically, you know, when GPTs are emerging, there’s a huge surge of new businesses. And this should not be misleading. I think the main question we should be asking ourselves is what will happen to creative destruction in the future? What does the future look like in terms of creative destruction? And I’m a macroeconomist, so that’s why I like to look at this with a, you know, bird’s-eye view.

And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to advanced economies, there, again, we need to split the issue into two layers. One is the foundational layer, and the other one is the application layer. When we look at the application layer, it’s great. You know, the entry barriers are low. Small businesses can do what only large businesses could do in the past, and, you know, they can do their accounting, marketing. You know, there are so many opportunities now. The entry barrier is low. As a result, this suggests that, you know, this is going to be more, you know, friendly for creative destruction on the application layer. But then there’s also the foundational layer, and I think that’s exactly where the bottleneck is.

When we look at the foundational layer, the entry barrier is really, really high. You know, it’s very compute-heavy. It’s very data-heavy. It’s very talent-heavy. So as a result, you know, this market, at least this layer, is very concentration-prone. Of course, it’s very early. But, you know, normally we have to be concerned about the foundational layer and how things will pan out, because this is the upstream to the application layer, which is downstream of the foundational layer. So that’s why whatever happens at the foundational layer will potentially spill over to the application layer, too. So that’s why I think we need to look at early indicators. But, you know, in the interest of time, I don’t want to go into the empirical evidence yet.

Maybe we can come back to it in the second round. When we look at the developing countries, so I think, you know, I agree with Johannes. You know, I think AI is creating fantastic opportunities. So that’s why I think it’s really important to understand the opportunities as well as the risks for developing countries. And together with the World Bank, we are working on the World Development Report 2026, which is going to be on AI and development. And these are exactly the issues that we are focusing on. But I think before we go into those details, we should ask ourselves one major question. Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why was it, you know, when we looked at the firm’s life cycle, for instance, why was it not up or out?

Why was it not, you know, very competition-friendly? Why was the best predictor of firm size in emerging or developing economies the size of the family, or the number of male children? These are still lingering issues, and AI, you know, will not bring magic unless we understand and fix the business environment in these economies. You know, AI will just create new tools. But at the end of the day, we need to make sure that the business-friendly environment is there for entrepreneurs to come and exercise their ideas.

Jeanette Rodrigues

Ufuk, that’s a very interesting jumping-off point, the real world. And the intention of this panel is to get exactly there. So if I may turn to you, quite literally turn to you, Michael, and ask you about the real world. You’re obviously doing a lot of work on the ground. Where do you see the potential for AI to spur gains? And are there any really transformative breakthrough areas that you’re looking at right now?

Michael Kremer

Yes. Thank you. Thanks very much. You know, I don’t want to minimize the existence of forces that may widen gaps. I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then I think AI has the potential to substantially narrow some of the gaps. And, you know, I think the question of which policy actions to take can be informed by thinking through relevant market failures and relevant government failures. Let me give a concrete example or two. So private firms have incentives to develop and improve applications of AI that can generate profits. But there are some very important applications of AI, for public goods, for example, that will not attract commercial investment commensurate with their needs.

And that’s an area where I think governments and multilateral development banks can play an important role. And I think some of this very much echoes what you were saying about small models, but also I’ll mention the link between the two. So an obvious example where I think India has been a leader for the world is in the development of digital identity. You know, this enables, as Ufuk was saying, a lot of work by individual entrepreneurs, a lot of other applications. So that’s a huge success, and I think multilateral development banks together with India can help bring that to many other countries. Let me take another example, one that’s not as well-known, but picks up on your comment about farmers.

So one thing that’s critical for farmers: they have to make a bunch of decisions that are weather-dependent. You know, when do you plant, for example? What varieties do you use? A drought-resistant variety, another variety. Yet most farmers don’t have access to state-of-the-art weather forecasts, around the world. I’m not talking about one country. In low- and middle-income countries, they don’t have access to that. Now, there’s a huge advance. We tend to think of large language models, but obviously AI is pushing science forward, and that includes weather forecasting. There’s really a revolution driven by AI. But weather forecasts are non-rival. They’re largely non-excludable. They’re the classic definition of a public good.

So there’s a strong rationale for national governments, in some cases supported by multilateral development banks, to make investments in producing and disseminating AI weather forecasts. Again here, India is a leader. In India in particular, the Indian government distributed AI weather forecasts to 38 million farmers last year. And the evidence suggests that farmers respond. In this particular case, I’ll say a little bit about last year’s monsoon: it came early in Kerala and southern India, but then there was an unexpected delay in the progression. The AI forecasts got that right, and that was the only source of information that reached farmers with that. In those areas, we did a survey above that line, and farmers are responding: they transplant more, they use hybrid seeds more.

Evidence from around the world is consistent with this. Farmers respond to these AI weather forecasts. So I think that’s one example, but many others, and happy to discuss them in education and traffic enforcement and elsewhere.

Jeanette Rodrigues

Michael, your answer should be: read the book. Okay. We’ve spoken about the use cases of India, but setting up digital IDs, of course, is a sovereign decision. It’s something India could do unilaterally. When it comes to the large language models, that’s not the reality. The large language models are concentrated in the US, and in China now with DeepSeek. Anu, in a world where you largely have the rules being set by the two large powers, the US and China, arguably, and there’s of course the EU as well, and you’ve done a lot of work on that: who sets the AI rules for the Global South? Is there even the possibility for the Global South to talk about sovereignty?

Anu Bradford

So I think the Global South has the same kind of incentive for their own AI sovereignty, including regulatory sovereignty: to design the rules that better work for their economies, for their societies, for what the public interest in these jurisdictions calls for. But regulating AI is really difficult even for very established bureaucracies. You need to be able to make sure that it is innovation-friendly, and yet at the same time you need to be careful in managing the risks for individuals and societies. So even very established regulators like the European Union have found it one of the most challenging tasks to come up with the AI Act. So there’s probably something to be learned from these jurisdictions that have gone ahead and done the kind of thinking that has then resulted in some of those regulatory frameworks that we have now in place.

So if you think about the choices that India has when it looks around, one of them is to think about, okay, how does the EU go about this? The EU follows what I would call a rights-driven approach to regulation. So what is really characterizing this first horizontal, binding, so economy-wide, regulation that the Europeans enacted: it is a regulation that seeks to protect the fundamental rights of individuals, the democratic structures of the society, and that also seeks to ensure a greater distribution of the benefits from the AI revolution. So the European approach is very conscious that it wants to also share some of the benefits, so they don’t all go to the large developers of these models, but individual users, society at large, and

smaller companies benefit from AI as well. So there’s something I think the Europeans can teach in terms of that regulatory approach, in addition to maybe some details of how that regulation in the end was constructed. But just one word: India is a formidable economy that doesn’t need to take a template and plug it into the economy as such. I think India is in a very good position to take the lessons that serve its needs yet make the kind of local modifications and variations that better reflect the distinct priorities of this country.

Jeanette Rodrigues

Anu, before I turn to Iqbal, a quick follow -up question to you. As India makes its own rules, where does the trade -off lie between regulation and innovation?

Anu Bradford

So this is very interesting, because I am based in the U.S., but I’m originally from Europe, and these two jurisdictions are often described as: the U.S. develops technologies and the Europeans regulate those technologies. In many ways, does India want the innovation path or the regulation path? And I think there are many votes who would go for innovation. But I really would like to debunk this myth, because to me it’s a false choice. The reason we don’t see these large language models being developed in Europe is not because there’s the GDPR, the General Data Protection Regulation. It’s not because there is the AI Act. The reason there is a perceived innovation gap between the United States and Europe is, I think, four things.

So first, there is no digital single market in Europe. It’s very hard for these AI companies to scale across 27 distinct markets. Second, there’s no deep, robust capital markets union. 5% of the global venture capital is in Europe, over 50% in the United States. That explains why the U.S. has been able to take much greater steps in developing AI technologies. Third, there are legal frameworks and cultural attitudes to risk-taking.

I wouldn’t encourage you to replicate that, because it’s very hard to innovate on the frontier of technological innovation, because sometimes you fail. But you need to be then given the second chance.

And the fourth, I think, sort of foundational pillar of the robust U.S. tech ecosystem is that the U.S. has been spectacularly successful in harnessing the global talent that has chosen to come to the U.S., including many Indian data scientists and engineers, who think that the U.S. is the place where they can start their companies, scale their companies, fund their companies, and U.S. universities can attract them. So as for the idea that choosing to follow or imitate aspects of the European rights-protective regulation would come at the cost of innovation: we need to understand better what drives technological innovation and whether regulation should take the blame.

Jeanette Rodrigues

Thank you, Anu. Iqbal, turning to you. You’re working in an area of the world, South Asia, where, what is regulation? What is enforcement? At the risk of sounding like a provocateur, it’s the Wild West a little bit. And therefore, we talk a lot in our part of the world about small AI, about targeted AI. My question to you is: what should policymakers keep in mind when designing AI-enabled interventions, especially when it comes to small AI and the targeted use cases?

Iqbal Dhaliwal

vulnerable public schools all the way from 11th to becoming the second best performing state in just a matter of two or three years. Phenomenal results, right? But then you start saying, let’s unpack this. What was this thing doing? The first thing that they find out is that a lot of people are like, oh, does this mean that I don’t need teachers anymore? No, you still need the teachers. What it replaces is the rote task of the teacher having to correct spelling mistakes, calling you to the room and saying, hey, you forgot your comma, you forgot to capitalize. Instead, AI takes care of all of that. And now the teacher can sit with you in the free time and say, how did you set up the structure of this essay?

Did you think about this analytically or not? And that’s the first insight that comes from the evaluation: it frees up the teacher’s time. Everything that we do in the field ends up adding to the teacher’s time, adding to the nurse’s time, adding to the Anganwadi worker’s time. Very few do that: free up time. So if your AI application can free up the time of the frontline health workers, first of all, that’s a winner. The second thing that is really important here was that this is a demand-driven thing, right? Like, there was a demand by the kids to improve their essays. There was a demand by the teachers to free up their time. But most importantly, there was a demand by the school districts to show progress.

So I think that is kind of a great example of how everything comes together if you think about it ahead of time.

Jeanette Rodrigues

Ladies and gentlemen, a topper of India’s notoriously difficult civil services exam. So take Iqbal more seriously than you would just a normal speaker.

Iqbal Dhaliwal

Thank you. I thought that was history now.

Jeanette Rodrigues

It’s never history in India, Iqbal. Michael, turning to you, almost equal in accomplishment, having won a Nobel. What risks should multilaterals like the World Bank keep in mind? Or let me rephrase that, actually. Is there a risk that multilaterals are moving too slowly relative to the technology?

Michael Kremer

I think there certainly is. As I noted before, there are certain areas where the private sector is going to move, but there are other areas where they’re not going to move quickly, and it’s going to be very important for governments and for multilateral development banks and for philanthropy to move. I think there are a number of approaches to this. One way is by encouraging innovation by setting up institutions like innovation funds, particularly, to echo Iqbal, evidence-based innovation funds. So I’ll give you one example of something that I’m involved in: Development Innovation Ventures. That was initially set up in the U.S. government, but it’s now been relaunched independently. It has tiered funding, so there are initially very small…

grants to pilot new ideas. Then there are somewhat larger grants to rigorously test them, as Iqbal emphasized, and then, for those that are most successful, there are funds to help transition them to scale. Why is that important? Well, that’s important because if we’re thinking about public services, and there are other sectors where this is needed, there’s probably going to be insufficient competition. Private developers are going to come up with innovations, but then, if they have to sell them to the government, they’re facing a monopsonistic buyer. They’re probably not going to get rich doing that. So some support to generate more entrants in that market, I think, is very important.

It’ll also mean that prices will go down and quality will go up when the government does that. Let me just again give an example of the potential. You know, we tend to focus on certain examples time after time here; let me give another example, something that I doubt many people here are thinking of when they think of AI: traffic safety. We’ve all been exposed to traffic in the past few days. You know, traffic is a real problem interfering with urbanization, which may drive growth. There are a lot of deaths from traffic, and a lot of citizens around the world have very difficult and painful experiences with traffic enforcement. Well, you know, you can have automated traffic cameras that have the opportunity to improve traffic outcomes but also improve people’s perception of fairness in government. India’s moving in this. Let me mention another thing within traffic safety that’s being done. Microsoft Research India developed a program called HAMS, for driver’s licenses, which automatically uses AI to test the drivers until they actually pass their exams. When this was introduced, and it’s been introduced, I believe, in 56 sites across India, hundreds of thousands of people have taken tests this way. We took a leaf from Ufuk’s book: we followed up, we got information from Ola on ratings, and the number of drivers who were rated as driving unsafely went down 20 to 30 percent where HAMS had been installed. So, you know, that’s something that was developed not by Microsoft’s main business but by Microsoft Research. We can just create some support for more ideas like that to be developed and rigorously tested, which can benefit India and can benefit the whole world.

Jeanette Rodrigues

We are running out of time. Probably this is one place in India where time is really respected, and we have to end in time.

So I had a list of wonderful questions, but if I could now move to a space where we are really giving shorter, quicker answers, and the deeply, deeply interesting ones about who’s winning and who’s losing. Michael, if I could start with you, actually. We’ve seen many promising technologies fail to live up to their promise. How should we think when we are evaluating AI interventions? What should be the metrics that we use?

Michael Kremer

Okay. First, model evaluation. So AI companies typically do that part. How good is the model output for specific tasks? You know, forecasting the weather. Does it do a good job? Does it match your local language well?

Second, user impact. Here I think there’s a role for initial pilots akin to a medical efficacy trial: if you put the work into trying it, does it lead to improvements in outcomes for the users? Third, scalability and usage at scale; that’s more like an effectiveness trial in medicine. It’s important to think not just about the tech but also about the human systems. Are the teachers actually going to use the product, for example, and how can you get the teachers to use it? And the fourth area is continuous improvement: you want a system that improves the underlying models. So in procurement we might want to think about requiring continuous A/B testing, publicity about what the usage and impact are, and perhaps even requiring open access as part of the procurement package.
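[Editor’s note: Kremer’s call for “continuous A/B testing” in procurement can be made concrete. Below is a minimal sketch of a two-proportion z-test comparing task-success rates under two model variants; all function names and numbers here are hypothetical illustrations, not anything presented by the panel.]

```python
import math

def two_proportion_ztest(succ_a, n_a, succ_b, n_b):
    """Two-sided pooled z-test for the difference of two success proportions."""
    p_a, p_b = succ_a / n_a, succ_b / n_b
    p_pool = (succ_a + succ_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical arms: success rate of the incumbent model A vs. challenger B
z, p = two_proportion_ztest(succ_a=420, n_a=1000, succ_b=465, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A procurement rule of the kind Kremer suggests would run such a comparison continuously on live usage data and publish the results, rather than as a one-off pilot.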

Jeanette Rodrigues

Thank you, Michael. Iqbal, I want to flip that question to you: where do you see hype in the promises of AI that you don’t think will play out?

Iqbal Dhaliwal

I think hype is natural because the technology is exciting. It’s a general-purpose technology. It’s evolving so quickly. The marginal cost of deployment for the next user is very low. It’s multimodal: today you are doing it in text, tomorrow in video, the day after tomorrow in audio. Everybody who has a smartphone has it. So I can understand where the hype is coming from. But what we really need to do is separate the hype from the reality on the ground, and the reality on the ground is that many of these technologies are not having the final impact that we are hoping for. My job at J-PAL, sitting at the top, is not to worry about one professor’s or one researcher’s evaluation, but to ask: when I connect all these dots, what am I seeing?

And I’m seeing two patterns. One is about trust in technology, and the second is about the reality of the policy world. Let me elaborate quickly on both. Trust in technology: there are studies which found that even if you give doctors and frontline health care workers access to AI-enabled diagnostic tools, including radiology tools that predict diseases, oftentimes it doesn’t lead to an improvement in results. And when you try to unpack that: the technology worked even better than the human in the lab, some of these diagnostic tools really do have better predictive power in the lab, but in the field, not only is their own efficiency lower, they lower the efficiency of the doctors, because we have not trained the doctors enough.

And the second thing is the enabling mechanism, the world around us. We just assume that because the technology works, even if it works in the field, the rest of the system will adapt to it. No, you have to adapt the system to the rest of the world. This example comes from India: working with one particular state government, we tried to improve the collection of value-added taxes; it’s called GST in India. There is a whole worry about bogus firms that are created to exploit the GST, or value-added tax, system. The machine learning algorithm was able to increase the probability of predicting a bogus firm from 38% to 55% in one shot, at a very, very low cost.

When it came time to scale up this program, the government refused, because, think about it: you have taken away the discretion of the human to decide whether they should raid Michael’s firm or Iqbal’s firm. That is power. And if you haven’t thought through that point, what is the point of the technology?
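[Editor’s note: the 38%-to-55% jump Iqbal cites is a gain in detection precision: the share of flagged firms that really are bogus. A minimal illustration of how such a comparison might be computed, with entirely synthetic flags and labels; nothing here reflects the actual study’s data or method.]

```python
def precision(flags, labels):
    """Share of flagged firms that are actually bogus (precision of the screen)."""
    hits = [label for flag, label in zip(flags, labels) if flag]
    return sum(hits) / len(hits)

# synthetic data: 1 = bogus firm, 0 = legitimate firm
labels       = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
manual_flags = [1, 1, 1, 0, 1, 0, 1, 1, 0, 0]  # hypothetical discretionary human screen
ml_flags     = [1, 0, 1, 1, 0, 0, 1, 1, 0, 0]  # hypothetical model-based screen

print(precision(manual_flags, labels))  # 0.5: half of human-flagged firms are bogus
print(precision(ml_flags, labels))      # 0.8: a higher hit rate for the model screen
```

Iqbal’s point is that even a large gain on this metric does not by itself get a system adopted; the institutional question of who holds the discretion remains.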

Jeanette Rodrigues

I won’t terrify anyone in the room by asking why they didn’t want to scale up this tech. But talking about weeding out bad actors and about firm-level decisions, moving on to Ufuk: does the firm-level evidence show productivity gains diffusing evenly across firms?

Ufuk Akcigit

So just going back quickly to the question of the firm and the earlier model that I highlighted: I think it’s important to understand what’s happening upstream, so that we can then understand where things will be going in the future. And the evidence there, the early signs, is a bit worrying. First, when we look at, for instance, dynamism and market concentration in the U.S., market concentration has been increasing since 1980, and in an accelerating way after 2000. That’s the first set of evidence. The second set of evidence comes from how innovative resources are allocated across firms. When we look at the inventors who are creating the creative destruction and the technologies, there’s a massive shift towards market incumbents.

And when I say incumbents, I mean firms that have more than 1,000 employees. Around 2000, 50% of these inventors worked for incumbent firms; in just 10 years, that shifted to more than 60%. A massive reallocation of innovative resources. And the final piece of evidence, from a study we are going to release next week: we looked at how AI is impacting universities, at the AI-publishing scientists. The top 1% of AI-publishing scientists in academia used to make around $300,000 in 2000; that went up to $390,000 over two decades. Similar people in industry used to make around $550,000; now it is up to $2 million. And there have been two breakpoints: one in 2012, the other in 2017.

Of course: image processing in 2012, and then the foundational model revolution in 2017. The more worrying part, which brings me back to the foundational model side of things, is that this created a massive out-migration from academia to industry, especially after 2017, when compute and infrastructure became so important and we saw the rise of AI. The target, or the destination, is large incumbent information companies, which again highlights where things are going in terms of concentration. And the worrying part also is that when people move from academia to industry, their publication record goes down by 50%, and they patent 600% more after they move, which means we are moving from open science to more protected science. Now, spillovers are extremely important for creative destruction, for the future of innovation. So if we want to keep the foundational layer contestable, I think the fundamental players there will be universities.

And keeping universities healthy is extremely important, but there is very little discussion of this, and I think we need it before it gets too late. Because once you button the wrong button, the rest will follow wrong as well. That’s why I think we have to have this frank conversation early on in the game; otherwise it might be too late.

Jeanette Rodrigues

Ufuk, what you spoke about boils down to something Iqbal mentioned as well: power, because power still makes decisions in this world today. So Anu, before I move to the final section of this panel, if the finance minister of a developing country, let’s say India, comes to you and asks, “Anu, how should I think about this?”, what would you tell her?

Anu Bradford

So today, if you think about how much political power, but also geopolitical power, is shaping our conversations around AI, each country is being pushed towards greater techno-nationalism and techno-protectionism; AI sovereignty has become almost a universal goal for everyone. But I would remind you, even when looking at players like the United States and China, that nobody in today’s world will be completely sovereign in the AI space. Let me take just one layer of the AI stack as an example. What is now driving a lot of the global AI race is this idea that we want to do frontier AI, that we want to have these powerful foundation models.

That means you need a lot of compute, and you can’t have a lot of compute unless you have access to high-end semiconductors. The U.S. is well positioned there: it hosts companies like NVIDIA and leads in the design of semiconductors. But who is manufacturing them? We really need to think about the role of Taiwan there. Then the Europeans have ASML in the Netherlands, which leads in the equipment needed for high-end manufacturing. But that depends on chemicals, where Japan is leading. And the entire supply chain relies on raw materials from China. So ultimately, all these choke points can in principle be weaponized, but that is not a sustainable strategy.

Even President Trump had to walk back some of the export controls on China, because the Chinese were saying, okay, then the raw materials are not coming your way. So there are potential ways to weaponize these interdependencies that ultimately make us all poorer. So as a finance minister of India, when approaching the other middle powers and the great powers…

Jeanette Rodrigues

Easier said than done. Our final, final section is, of course, the rapid-fire round; we all love this in this room. In one sentence, in one sentence, if I could ask all of you, and Johannes, you’re not getting away easily, you’re going to answer this as well. So, in one sentence: we’re sitting in New Delhi in 2035. Could you predict one development outcome that will have dramatically improved with the use of AI, and one risk we’ll regret not addressing now? I guess you already know my second answer.

Ufuk Akcigit

I think the concentration, the future of market concentration is something that we should be concerned about and we might regret not having discussed this sufficiently in 10 years. On what will change in a positive direction, clearly health care and education, I think. It’s a no -brainer.

Jeanette Rodrigues

Anu?

Anu Bradford

So first of all, it’s so inspiring to hear all the use-case examples, whether we talk about traffic or agriculture or education, because I often talk about the risks and the downsides, so it’s a really good reminder. I’m personally very excited especially about what happens in the education space, but also in the health space. In terms of the risks, I think one thing that we are not paying attention to, and what I would even call a systemic risk, is this: many worry about AI getting almost too smart, but I am more worried about us getting dumber as a humanity. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models.

And as an educator, when I think about how I will teach my students to use generative AI to enhance, but not substitute, their capabilities: we would make a tremendous mistake if we just forwent that hard work, that beautiful moment of thinking through hard problems, of creating and investing in our own capabilities. All of that just cannot be outsourced, because otherwise we won’t even know what kind of questions we should be asking the AI going forward.

Jeanette Rodrigues

Michael.

Michael Kremer

I agree that there is huge potential in health and education, and I think we’ll see big improvements there, but the risk is that the public sector won’t adopt these, and therefore the poor won’t have access to them. That’s because, as Iqbal indicated, government systems and government workers may not adapt to use them. There are also risks of copycat regulation that is over-focused on problems other countries may be worrying about but that might not be relevant for emerging economies. And the final risk is that procurement systems are set up in such a way that we don’t get sufficient competition, we get lock-in, and we just don’t wind up with good quality.

Jeanette Rodrigues

Thank you, Michael. The buzzer’s down, but I’ll take a risk and quickly run through the others.

Iqbal Dhaliwal

Yes. I think I am much more optimistic about the government actually adopting these things. When you dial 100, your call is going to get answered very quickly; the PCR van is going to be at your house much faster; the hospitals will be able to link your health records. So I think government-sector productivity is going to leapfrog. The biggest risk, I think, is definitely the labor market. If only there were a dial where I could slow down the adoption and give the labor market time to catch up. That’s my biggest worry. You talked about entry-level jobs. An entry-level coding job might be an entry-level job in the United States.

Here, it’s the aspirational job that created the Gurgaons and Noidas and Mohalis of this country, and those people are going to be running out of jobs very, very quickly. And in the labor market, whether it is ESI, Provident Fund, or Gratuity, we are piling on and making it harder and harder to hire labor, when, on the other hand, capital is not taxed. We are giving people incentives to use AI, and we are taxing them, through provident fund and labor-market regulations, to hire labor. And that, for me, is the biggest risk, actually.

Johannes Zutt

So I think that for the first time in human history, we may actually have the tools available to enable us to target poverty-reduction and poverty-elimination initiatives on individuals. And that could be tremendously transformative. But at the same time, I do worry that we will not get the governance right, or that we won’t be able to make that governance sufficiently robust to prevent abuses.

Jeanette Rodrigues

Thank you very much to all of our panelists, and to you, for your time and attention once again. I had the very rare fortune of being able to peek at Michael’s screen while he was speaking, and I saw all the messy human notes. Our panelists are definitely not outsourcing their thinking anytime soon, and thank God for that. Thank you, ladies and gentlemen.

Related Resources: Knowledge base sources related to the discussion topics (39)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“AI can be a game‑changer for emerging markets, offering a unique opportunity to leap‑frog longstanding development challenges.”

The knowledge base explicitly states that AI is a game changer for emerging markets and can help leapfrog longstanding development challenges [S6].

Confirmed (high)

“The structural lens distinguishes a foundational layer (compute‑heavy, data‑heavy, talent‑heavy) from an application layer where entry barriers are low.”

A similar description of a foundational layer versus an application layer is provided in the source, confirming the terminology and distinction [S30].

Additional Context (medium)

“AI helps farmers detect pests and diseases.”

The source on AI for Good in food and agriculture outlines precision-agriculture applications, including pest and disease detection, offering concrete examples that add nuance to the claim [S33] and a related agrotech talk also mentions AI-driven farming tools [S105].

Additional Context (low)

“Roughly 15‑16 % of jobs have strong complementarity with AI, meaning AI can boost workers’ skills and productivity.”

While the exact percentage is not given, the knowledge base reports productivity gains from AI in specific sectors such as call centers and software development, providing broader context on AI’s complementarity with work [S15].

External Sources (106)
S1
How AI Drives Innovation and Economic Growth — -Jeanette Rodrigues: Moderator/Host of the panel discussion This comprehensive discussion at the Bharat Mandapam, moder…
S2
Extreme poverty and human rights * — 16 Jeanette Rodrigues, ‘India ID program wins World Bank praise despite ‘Big Brother’ fears’, Bloomberg, 16 March 201…
S3
AI Meets Agriculture Building Food Security and Climate Resilien — -Johannes Zutt- Regional Vice President, World Bank
S4
How AI Drives Innovation and Economic Growth — -Johannes Zutt: World Bank representative (referred to as “John” in the discussion)
S5
Keynotes — Michael O’Flaherty: EuroDIG, dear friends. Last Saturday, we watched as the newly elected Pope explained why he had ch…
S6
How AI Drives Innovation and Economic Growth — – Johannes Zutt- Ufuk Akcigit- Anu Bradford
S7
How AI Drives Innovation and Economic Growth — – Johannes Zutt- Ufuk Akcigit- Anu Bradford – Ufuk Akcigit- Johannes Zutt
S8
How AI Drives Innovation and Economic Growth — – Ufuk Akcigit- Johannes Zutt
S9
New Development Actors for the 21st Century / DAVOS 2025 — – Iqbal Dhaliwal – Global Director of J-PAL at MIT
S10
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — – Iqbal Dhaliwal- Ronnie Chatterji – Iqbal Dhaliwal- Sanjiv Bikhchandani
S11
DIGITAL DIVIDENDS — – Cantijoch, Marta, Silvia Galandini, and Rachel Gobson. 2014. ‘Civic Websites and Community Engagement: A Mixed Metho…
S12
Rights and Permissions — – Aboud, Frances E., and Kamal Hossain. 2011. ‘The Impact of Preprimary School on Primary School Achievement in Banglade…
S13
How AI Drives Innovation and Economic Growth — – Johannes Zutt- Michael Kremer – Michael Kremer- Iqbal Dhaliwal
S14
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S15
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Higher productivity potential exists in agriculture, manufacturing, healthcare, and construction sectors
S16
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — By addressing these challenges and improving data management and accessibility within the agricultural sector, the overa…
S17
Artificial Intelligence &amp; Emerging Tech — The analysis explores multiple aspects of the relationship between artificial intelligence (AI) and developing countries…
S18
AI sandboxes pave path for responsible innovation in developing countries — At theInternet Governance Forum 2025in Lillestrøm, Norway, experts from around the worldgatheredto examine how AI sandbo…
S19
AI Safety at the Global Level Insights from Digital Ministers Of — “Things like regulatory sandboxes or like policy lab type things where you can try limited pilot approaches seem to be g…
S20
Advancing Scientific AI with Safety Ethics and Responsibility — “So for example, we are going to launch a global south network for trustworthy AI and we are going to launch a global so…
S21
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S22
Multistakeholder Partnerships for Thriving AI Ecosystems — For instance, the National Skilling Mission, the skilling mission that is undertaken by NASCOM, which is the IT industry…
S23
Building Inclusive Societies with AI — Public-private collaboration is crucial for national growth and inclusive technology adoption
S24
9821st meeting — Robust accountability frameworks and national policies aligned with human rights standards are essential to prevent pote…
S25
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — Such instances underline the importance of robust regulation to prevent future abuses and protect individual rights. Fur…
S26
Building Scalable AI Through Global South Partnerships — It’s like a case management system for tuberculosis patients. We’ve integrated everything. We developed algorithms into …
S27
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Technological innovation has led to a significant transformation in health systems, particularly through advancements in…
S28
Can we test for trust? The verification challenge in AI — Anja Kaspersen: Massively so. So let me, I’m just gonna rewind a little bit to our title of this session if you allow me…
S29
Building Sovereign and Responsible AI Beyond Proof of Concepts — “The second is around governance failures.”[65]. “And then there’s also a failure around misalignment.”[66]. “So I put h…
S30
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-drives-innovation-and-economic-growth — And I’m seeing two patterns. One is about trust in technology, and the second part is about the reality of the policy wo…
S31
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Amin Nasser emphasizes that successful AI scaling requires establishing clear operational models, including processes fo…
S32
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S33
AI for Good – food and agriculture — – Creation of predictive models for planting and harvesting decisions – Use of remote sensing and geospatial platforms …
S34
AI for Social Good Using Technology to Create Real-World Impact — Nilekani provided compelling examples of how open networks enable rapid capability integration and new economic opportun…
S35
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — – Ashwini Vaishnaw- Khalid Al-Falih Economic | Development | Infrastructure IMF Index of Preparedness evaluates physic…
S36
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — Emerging technologies and artificial intelligence can be true enablers of economic growth and social well-being. The an…
S37
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S38
What policy levers can bridge the AI divide? — **Lithuania** emphasized leveraging small country advantages through flexibility and reduced bureaucracy, proposing regu…
S39
WS #82 A Global South perspective on AI governance — An audience member points out that Global South countries often regulate companies and organizations outside their juris…
S40
Building Trusted AI at Scale – Keynote Anne Bouverot — This comment shifts the discussion from acknowledging competition to actively proposing strategic alliances. It introduc…
S41
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Brunner summarizes Trump’s AI approach as: American AI is number one and must remain the leader, compete with China, the…
S42
How AI Is Transforming Indias Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S43
How AI Drives Innovation and Economic Growth — Artificial intelligence | Social and economic development | Human rights and the ethical dimensions of the information s…
S44
AI for Social Empowerment_ Driving Change and Inclusion — He asks how governments and institutions can govern AI responsibly to minimise labour market disruption and ensure a smo…
S45
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S46
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S47
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Bhan argues that AI’s impact on jobs cannot be viewed in isolation but must be considered alongside broader economic dis…
S48
Labour market stability persists despite the rise of AI — Public fears of AI rapidly displacing workershave not yet materialisedin the US labour market. A new study finds that th…
S49
The UK labour market feels a sharper impact from AI use — Companies are reporting net job losses linked to AI adoption, with research showing a sharper impact than in other major…
S50
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — The development of these technologies is highly concentrated in few countries and firms
S51
How AI Is Transforming Indias Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S52
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S53
How AI Drives Innovation and Economic Growth — And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to ad…
S54
Harnessing the potential of artificial intelligence in developing countries — The Economistarguesthat there are three main reasons for optimism about AI and development: First, the technology is imp…
S55
The mismatch between public fear of AI and its measured impact — Artificial intelligencehas become one of the loudest topics in public discourse. Headlines speak of mass job displacemen…
S56
From Innovation to Impact_ Bringing AI to the Public — Whilst maintaining an optimistic outlook, the discussion acknowledges important limitations and risks. Sharma emphasises…
S57
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S58
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Advocates for a harmonised approach to regulation and policy-making believe that this method can yield positive outcomes…
S59
Keynotes — The main areas of disagreement center on regulatory philosophy (soft vs. comprehensive regulation) and the role of crisi…
S60
WS #35 Unlocking sandboxes for people and the planet — The level of disagreement among speakers was moderate. While there were clear differences in approaches and perspectives…
S61
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — I mean, if a mobile operator arbitrarily starts turning off SIM cards because they think maybe that traffic looks a bit …
S62
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S63
Building Inclusive Societies with AI — Public-private collaboration is crucial for national growth and inclusive technology adoption
S64
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S65
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Economic | Development | Infrastructure IMF Index of Preparedness evaluates physical infrastructure, labor skills capab…
S66
How AI Drives Innovation and Economic Growth — “So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer…
S67
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Economic | Infrastructure | Development Technology enables leapfrogging development opportunities for emerging markets
S68
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S69
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S70
How AI Drives Innovation and Economic Growth — Countries may not have reliable electricity, sufficient internet backbone, basic literacy and numeracy skills, or may ne…
S71
Artificial Intelligence &amp; Emerging Tech — Jennifer Chung:Thank you, Nazar. I actually do see two more questions from the Bangladesh Remote Hub. This is good. This…
S72
AI/Gen AI for the Global Goals — Christopher Lu points out that many areas, particularly in developing countries, lack basic infrastructure such as inter…
S73
What policy levers can bridge the AI divide? — **Lithuania** emphasized leveraging small country advantages through flexibility and reduced bureaucracy, proposing regu…
S74
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Call for individuals in government and private companies to actively bridge the research-policy gap in their work
S75
WS #82 A Global South perspective on AI governance — AUDIENCE: Thank you for the wonderful thought provoking conversation. I wanted to ask, I only attended half of the ses…
S76
Who Watches the Watchers Building Trust in AI Governance — So there is no end to the story of how regulators should design the regulations. That is the main question. All countrie…
S77
Building Trusted AI at Scale – Keynote Anne Bouverot — This comment shifts the discussion from acknowledging competition to actively proposing strategic alliances. It introduc…
S78
Global South’s role in AI governance explored at IGF 2024 — The inclusion of the Global South, particularly theMENA region, in AI governance emerged as a key focus in a recentpanel…
S79
How AI Is Transforming Indias Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S80
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — Overall, the analysis highlights the pressing need for stronger governance in the digital economy. It provides evidence …
S81
AI for Social Empowerment_ Driving Change and Inclusion — She warns that AI is exacerbating inequality by increasing capital concentration while labour’s share of income shrinks….
S82
Trusted Connections_ Ethical AI in Telecom &amp; 6G Networks — The discussion maintained a consistently optimistic and forward-looking tone throughout. Speakers expressed confidence i…
S83
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S84
AI 2.0 The Future of Learning in India — The tone was consistently optimistic and forward-looking throughout the conversation. Speakers maintained an enthusiasti…
S85
Science AI &amp; Innovation_ India–Japan Collaboration Showcase — The tone was consistently optimistic and forward-looking throughout the conversation. The panelists demonstrated genuine…
S86
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S87
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S88
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S89
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S90
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S91
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S92
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S93
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S94
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S95
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S96
Rewriting Development / Davos 2025 — The tone was largely serious and analytical, with speakers offering critical assessments of current development models. …
S97
Afternoon session — The discussion began with a collaborative and appreciative tone as various stakeholders shared their visions and commitm…
S98
WS #25 Multistakeholder cooperation for online child protection — The tone of the discussion was serious and concerned, reflecting the gravity of the issues being discussed. However, it …
S99
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S100
Meeting REPORT — In summation, the meeting chaired by Stefano Belfiore exemplified focused and action-oriented governance and delivered e…
S101
WS #283 AI Agents: Ensuring Responsible Deployment — The discussion maintained a balanced, thoughtful tone throughout, combining cautious optimism with realistic concern. Pa…
S102
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S103
Deepfakes for good or bad? — The tone was thoughtful and pragmatic throughout, balancing concern with cautious optimism. The panelists acknowledged s…
S104
High Level Session 3: AI & the Future of Work — Jennifer Bacchus: Thank you very much indeed. Let me move on to our next question now, question two. So what guiding pri…
S105
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — Alina Ustinova: Hello, everyone. My name is Alina. I represent the Center for Global IT Cooperation, and today I want to…
S106
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — So for example, anything and everything that is required we are basically making the entire suite of the… automation l…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Johannes Zutt
6 arguments · 141 words per minute · 1450 words · 612 seconds
Argument 1
AI can be a game‑changer for emerging markets, boosting productivity across sectors such as agriculture, health, and finance (Johannes Zutt)
EXPLANATION
Johannes argues that artificial intelligence offers a unique opportunity for emerging and developing economies to leapfrog longstanding development challenges. By enhancing productivity in sectors like farming, healthcare, and financial services, AI can drive inclusive growth.
EVIDENCE
He notes that AI complements about 15-16% of jobs in South Asia, enabling workers to expand skills and effectiveness (e.g., farmers identifying pests and diseases, nurses diagnosing unfamiliar ailments, and financial institutions better assessing borrower risk) and highlights the broad potential for AI to fill skill gaps in education, health, and public resource allocation [6-10][12-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transformative potential of AI for productivity in agriculture, health and finance in emerging economies is highlighted in S14 and S15, while sector‑specific AI applications for farming are documented in S33 and S34.
MAJOR DISCUSSION POINT
AI as a development catalyst
AGREED WITH
Michael Kremer, Ufuk Akcigit
Argument 2
AI may displace entry‑level, routine jobs and many developing countries lack basic infrastructure (electricity, connectivity, literacy) to harness it (Johannes Zutt)
EXPLANATION
He warns that AI automation will likely eliminate certain low‑skill, document‑based jobs, especially at the bottom of professional hierarchies. Moreover, many emerging economies lack the foundational infrastructure needed to deploy AI effectively.
EVIDENCE
Johannes cites observed reductions in entry-level job advertisements within the World Bank Group and points to basic constraints such as unreliable electricity, weak internet backbones, low literacy and numeracy, and reliance on very basic devices in many developing countries [22-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Infrastructure constraints such as unreliable electricity and limited internet are described in S1 and S17, and the risk of jobless growth despite productivity gains is discussed in S15.
MAJOR DISCUSSION POINT
Risks of job displacement and infrastructure gaps
Argument 3
Governments and multilateral bodies must create standards, sandboxes, and advisory support to enable safe AI deployment (Johannes Zutt)
EXPLANATION
He emphasizes the need for public‑facing efforts that establish standards, interoperability, and sandbox environments, alongside private‑sector‑facing initiatives that foster application development. This dual approach is essential for trustworthy AI uptake.
EVIDENCE
He describes the World Bank’s focus on “small AI” that works under limited connectivity and data conditions, and outlines the Bank’s role in advising on data reliability, creating sandbox spaces for experimentation, and coordinating public-private efforts [33-36][49-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of regulatory sandboxes and collaborative policy labs for responsible AI innovation is covered in S18, S19 and S20, while multistakeholder partnership models are outlined in S22.
MAJOR DISCUSSION POINT
Institutional frameworks for AI
AGREED WITH
Michael Kremer, Iqbal Dhaliwal
DISAGREED WITH
Anu Bradford
Argument 4
The World Bank focuses on “small AI”: affordable, locally relevant solutions, and provides advisory and sandbox environments (Johannes Zutt)
EXPLANATION
Johannes explains that the Bank concentrates on practical, low‑cost AI applications that can operate with limited connectivity, data, and skills. The Bank’s comparative advantage lies in advisory work, ensuring trustworthy data and facilitating sandbox environments for local innovators.
EVIDENCE
He details work in South Asia where the Bank supports state governments (e.g., Uttar Pradesh, Maharashtra, Kerala) to build offline-capable applications, partners with private investors, and avoids direct app development, focusing instead on advisory and standards work [34-38][39-46][49-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of low‑cost, locally‑tailored AI solutions and partnership‑driven deployments are provided in S21 and S26, and the sandbox approach is reinforced in S18 and S19.
MAJOR DISCUSSION POINT
World Bank’s small‑AI strategy
AGREED WITH
Iqbal Dhaliwal, Michael Kremer
Argument 5
Coordination between public agencies and private innovators is crucial to build a vibrant AI ecosystem (Johannes Zutt; Michael Kremer)
EXPLANATION
Both speakers stress that effective AI deployment requires close collaboration between governments, multilateral institutions, and private sector innovators. Such coordination ensures that AI solutions are both demand‑driven and scalable.
EVIDENCE
Johannes mentions working with governments and private-sector investors to develop locally relevant AI applications and create sandbox environments, while Michael describes the Development Innovation Ventures fund that bridges public-private collaboration through staged financing and evidence-based pilots [45-49][266-271].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public‑private collaboration frameworks for AI ecosystems are discussed in S22, S23 and the need for clear operational pathways for scaling pilots is highlighted in S31.
MAJOR DISCUSSION POINT
Public‑private coordination
Argument 6
AI enables precise, individual‑level poverty targeting, but robust governance is essential to prevent abuse
EXPLANATION
Zutt suggests that AI tools can allow governments and development agencies to direct anti‑poverty interventions at the individual level, a transformative capability that must be paired with strong governance safeguards.
EVIDENCE
He remarks that “for the first time in human history, we may actually have the tools available to enable us to target poverty reduction, poverty elimination initiatives on individuals” and follows with “I do worry that we will not get the governance right or we won’t be able to make that governance sufficiently robust to prevent abuses” [414-416].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Robust accountability and human‑rights‑aligned AI governance needed to avoid abuses are emphasized in S24 and S25, with additional insights on governance challenges in digital health (S27) and trust verification (S28).
MAJOR DISCUSSION POINT
Governance of AI‑driven poverty targeting
Iqbal Dhaliwal
3 arguments · 183 words per minute · 1151 words · 375 seconds
Argument 1
Small‑scale AI applications free up teachers’ and health workers’ time, improving education and healthcare outcomes (Iqbal Dhaliwal)
EXPLANATION
Iqbal illustrates how AI can automate routine tasks such as spelling correction, allowing teachers to focus on higher‑order learning activities. Similar time‑saving benefits apply to health frontline workers.
EVIDENCE
He describes AI taking over low-level tasks like correcting spelling and punctuation, thereby freeing teachers to engage students in analytical thinking, and notes comparable gains for nurses and Anganwadi workers [241-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Productivity gains from AI in education and health settings are noted in S14, and efficiency improvements through AI‑driven agritech illustrate similar time‑saving effects in S16.
MAJOR DISCUSSION POINT
Productivity gains through task automation
AGREED WITH
Johannes Zutt, Michael Kremer
Argument 2
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
EXPLANATION
Iqbal points out that even when AI tools outperform humans in controlled settings, lack of trust and misalignment with existing workflows can undermine adoption. Successful pilots must consider system‑level adjustments.
EVIDENCE
He cites studies where AI diagnostic tools, despite higher lab accuracy, reduced doctor efficiency due to insufficient training, and gives an Indian GST fraud-detection example where the government rejected scaling because the AI removed human discretion, highlighting trust and governance issues [309-318].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies on trust deficits in AI deployments for health workers (S30), governance and misalignment failures (S29), and the importance of trust for scaling (S31) provide supporting context.
MAJOR DISCUSSION POINT
Implementation challenges due to trust and system fit
AGREED WITH
Johannes Zutt, Michael Kremer, Anu Bradford
Argument 3
Demand‑driven pilots that demonstrate clear impact (e.g., education tools) are key for scaling successful AI projects (Iqbal Dhaliwal)
EXPLANATION
He argues that pilots should arise from genuine demand by end‑users—students, teachers, and school districts—and should show measurable improvements before scaling.
EVIDENCE
Iqbal notes that the AI education tool responded to demand from students for better essays, teachers for reduced grading workload, and districts for demonstrable progress, illustrating a successful demand-driven rollout [250-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance on moving from pilots to full deployment through demand‑driven evidence and trust considerations is outlined in S31 and S32.
MAJOR DISCUSSION POINT
Importance of demand‑driven pilots
AGREED WITH
Michael Kremer, Ufuk Akcigit
Michael Kremer
5 arguments · 160 words per minute · 1592 words · 593 seconds
Argument 1
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
EXPLANATION
Michael highlights AI‑driven weather forecasting as a non‑rival public good that can improve agricultural decisions for millions of smallholder farmers, thereby increasing productivity and resilience.
EVIDENCE
He references India’s AI weather-forecast service that reached 38 million farmers, noting that accurate forecasts led to earlier transplanting and greater hybrid-seed use, and that similar positive responses have been observed globally [133-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI‑enhanced weather services for millions of farmers are described in S34, with complementary examples of predictive agricultural models in S33 and broader productivity impacts in S14.
MAJOR DISCUSSION POINT
AI for public‑good agriculture
AGREED WITH
Johannes Zutt, Iqbal Dhaliwal
Argument 2
Slow public‑sector adoption and procurement lock‑ins risk leaving the poor without access to AI benefits (Michael Kremer)
EXPLANATION
Michael warns that without proactive government action and well‑designed procurement processes, AI innovations may remain confined to the private sector, excluding the most vulnerable populations.
EVIDENCE
He describes the need for governments and multilateral banks to address market failures, cites examples where private firms lack incentives for public-good AI, and stresses that procurement structures can create monopsony power, limiting competition and scaling [263-277][284-292].
MAJOR DISCUSSION POINT
Risks of delayed public adoption
AGREED WITH
Johannes Zutt, Anu Bradford, Iqbal Dhaliwal
Argument 3
Innovation funds such as Development Innovation Ventures provide staged, evidence‑based financing to pilot, test, and scale AI interventions (Michael Kremer)
EXPLANATION
Michael outlines a tiered funding model that starts with small grants for pilots, moves to larger grants for rigorous testing, and culminates in scale‑up financing, thereby de‑risking AI projects and encouraging evidence‑based scaling.
EVIDENCE
He details the Development Innovation Ventures structure, noting its origins in the U.S. government, its independent relaunch, and its three-stage grant system that supports pilots, rigorous evaluation, and eventual scaling [266-271].
MAJOR DISCUSSION POINT
Staged funding for AI innovation
AGREED WITH
Ufuk Akcigit, Iqbal Dhaliwal
DISAGREED WITH
Johannes Zutt, Ufuk Akcigit
Argument 4
Coordination between public agencies and private innovators is crucial to build a vibrant AI ecosystem (Johannes Zutt; Michael Kremer)
EXPLANATION
Michael reinforces the need for joint effort between governments, multilateral institutions, and private firms to ensure AI solutions are both locally relevant and scalable.
EVIDENCE
He references the Development Innovation Ventures fund as an example of public-private collaboration that supports evidence-based pilots, complementing Johannes’s description of the World Bank’s advisory role and sandbox creation for small-AI projects [266-271][45-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public‑private collaboration frameworks for AI ecosystems are discussed in S22, S23 and the need for clear operational pathways for scaling pilots is highlighted in S31.
MAJOR DISCUSSION POINT
Public‑private partnership for AI
Argument 5
AI can improve traffic safety and enforcement through automated cameras and AI‑driven driver‑licensing tests, enhancing fairness and reducing accidents
EXPLANATION
Kremer highlights AI applications that automate traffic monitoring and driver assessment, which can lower accident rates and increase perceived fairness in enforcement.
EVIDENCE
He describes “automated traffic cameras that have the opportunity to improve traffic outcomes” and cites the “India Research Program” where AI-based driver-license testing (HAB) led to a “20 to 30 percent” reduction in unsafe drivers across 56 sites [276-278].
MAJOR DISCUSSION POINT
AI for public‑good safety applications
Ufuk Akcigit
6 arguments · 163 words per minute · 1041 words · 382 seconds
Argument 1
AI offers sizable opportunities for developing economies, but only if the broader business environment supports entrepreneurship (Ufuk Akcigit)
EXPLANATION
Ufuk stresses that AI’s potential can be realized only when underlying business conditions—such as competition, regulatory clarity, and entrepreneurial ecosystems—are conducive. Without these, AI tools may not translate into growth.
EVIDENCE
He questions why, historically, firm size in emerging economies correlated with family size rather than competition, arguing that without a business-friendly environment AI cannot deliver its promised gains [104-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The link between AI‑driven entrepreneurship and supportive ecosystems is highlighted in S14, while multistakeholder partnership and skilling initiatives that foster such environments are detailed in S22 and S23.
MAJOR DISCUSSION POINT
Business climate as prerequisite for AI benefits
AGREED WITH
Johannes Zutt, Michael Kremer
DISAGREED WITH
Johannes Zutt, Michael Kremer
Argument 2
The foundational AI layer is compute‑, data‑, and talent‑intensive, creating high concentration and threatening labor markets (Ufuk Akcigit)
EXPLANATION
He differentiates between the application layer (low entry barriers) and the foundational layer (high barriers due to heavy requirements for compute, data, and skilled talent). This concentration risks limiting competition and exacerbating labor market disruptions.
EVIDENCE
Ufuk notes that the foundational layer demands substantial compute, data, and talent, making it prone to concentration, and that outcomes at this layer will spill over to the application layer, influencing overall innovation dynamics [93-100].
MAJOR DISCUSSION POINT
Concentration in foundational AI
AGREED WITH
Anu Bradford, Michael Kremer
Argument 3
Strengthening the overall business climate is essential for entrepreneurs to exploit AI’s potential (Ufuk Akcigit)
EXPLANATION
He reiterates that a supportive entrepreneurial environment—characterized by competition, clear regulations, and access to finance—is vital for firms to harness AI technologies effectively.
EVIDENCE
His earlier remarks about the need to fix business environments, citing the lack of competition-friendly conditions and the historical reliance on family size for firm growth, underscore this point [104-115].
MAJOR DISCUSSION POINT
Need for pro‑entrepreneurial policies
Argument 4
The application layer lowers entry barriers for startups, while the foundational layer remains highly concentrated, shaping future innovation dynamics (Ufuk Akcigit)
EXPLANATION
Ufuk observes that AI applications can be built by small firms due to low entry costs, but the underlying models and infrastructure remain dominated by a few large players, influencing the trajectory of innovation.
EVIDENCE
He describes the application layer as having low entry barriers, enabling small businesses to perform tasks previously reserved for large firms, contrasted with the foundational layer’s high barriers and concentration risk [87-93].
MAJOR DISCUSSION POINT
Dual‑layer structure of AI markets
Argument 5
Preserving strong university research capacity is vital to keep the foundational AI layer contestable and prevent excessive concentration (Ufuk Akcigit)
EXPLANATION
Ufuk argues that universities are essential for maintaining a competitive foundational AI ecosystem; weakening academic research would tilt power toward large incumbents.
EVIDENCE
He states that keeping the foundational layer contestable requires healthy universities, warning that without them the sector could become overly concentrated among large incumbent information companies [349-351].
MAJOR DISCUSSION POINT
Role of academia in AI foundations
Argument 6
AI‑driven creative destruction is a key engine of long‑term economic growth, but early indicators must be monitored to manage risks
EXPLANATION
Akcigit stresses that creative destruction, accelerated by AI, fuels sustained growth, yet emphasizes the importance of tracking early signals to avoid adverse side effects.
EVIDENCE
He states “creative destruction is an important driver of economic growth in the long run” and later adds “we need to look at early indicators” when discussing AI’s impact [76-78][101-102].
MAJOR DISCUSSION POINT
Creative destruction and AI
Anu Bradford
4 arguments · 199 words per minute · 1374 words · 412 seconds
Argument 1
The Global South needs AI sovereignty and rights‑based regulatory frameworks, drawing lessons from the EU while tailoring to local priorities (Anu Bradford)
EXPLANATION
Anu contends that countries in the Global South should develop AI regulations that protect fundamental rights and ensure equitable benefit distribution, while adapting successful elements from the EU’s rights‑driven approach to fit local contexts.
EVIDENCE
She references the EU’s AI Act as a rights-based, economy-wide framework that protects individual rights and promotes broader benefit sharing, suggesting India can learn from this while customizing its own rules [167-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for rights‑based, accountable AI regulation and safeguards against abuse are made in S24 and S25, and the establishment of trustworthy AI networks for the Global South is discussed in S20.
MAJOR DISCUSSION POINT
AI sovereignty through rights‑based regulation
AGREED WITH
Johannes Zutt, Michael Kremer, Iqbal Dhaliwal
DISAGREED WITH
Johannes Zutt
Argument 2
Full AI sovereignty is limited by global supply‑chain interdependence; techno‑nationalism must be balanced against cooperation (Anu Bradford)
EXPLANATION
Anu highlights that AI development relies on a globally interlinked supply chain for compute, semiconductors, and raw materials, making absolute sovereignty unrealistic. She warns against weaponising these interdependencies.
EVIDENCE
She outlines the AI stack’s reliance on U.S. semiconductor design, Taiwanese manufacturing, Dutch equipment (ASML), Japanese chemicals, and Chinese raw materials, noting that attempts to weaponise these choke points can backfire and harm all parties [357-371].
MAJOR DISCUSSION POINT
Limits of AI sovereignty due to supply chains
AGREED WITH
Ufuk Akcigit, Michael Kremer
DISAGREED WITH
Jeanette Rodrigues
Argument 3
Over‑regulation, weak governance, and geopolitical weaponisation of AI pose systemic risks (Anu Bradford)
EXPLANATION
Anu warns that overly stringent or poorly designed regulations can stifle innovation, while geopolitical competition may lead to the weaponisation of AI components, creating systemic vulnerabilities.
EVIDENCE
She notes the difficulty even established regulators like the EU face in crafting AI legislation, and later expands on how geopolitical rivalries over compute, semiconductors, and raw materials could be weaponised, undermining global stability [166-176][357-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Systemic risks from poorly designed regulations and geopolitical competition are examined in S24, S25 and S29, with additional emphasis on trust and verification challenges in S28.
MAJOR DISCUSSION POINT
Systemic risks from regulation and geopolitics
Argument 4
AI regulation should be adapted to local contexts; countries like India must tailor lessons from the EU rather than copy‑paste frameworks
EXPLANATION
Bradford argues that while the EU provides a rights‑based regulatory template, each nation should modify it to reflect its unique priorities and institutional realities.
EVIDENCE
She says “India is a formidable economy that doesn’t need to take a template and plug it into the economy as such. I think India is in a very good position to take the lessons that serves its needs yet make the kind of local modification and variations that are more reflecting the distinct priorities of this country” [177-178].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for context‑specific, rights‑based AI policy frameworks is reinforced in S24 and S25, which stress adapting global standards to national realities.
MAJOR DISCUSSION POINT
Context‑specific AI regulation
Jeanette Rodrigues
3 arguments · 174 words per minute · 1039 words · 356 seconds
Argument 1
AI innovation does not disperse equally, risking a widening of development gaps
EXPLANATION
Rodrigues observes that AI and related innovations fail to spread uniformly across societies, which can exacerbate existing inequalities rather than close them.
EVIDENCE
She explicitly states that “AI, innovation never disperses and never diffuses equally” [61].
MAJOR DISCUSSION POINT
Inequitable diffusion of AI
Argument 2
Policymakers need a balanced, pragmatic, policy‑first approach that reconciles hope and concern about AI
EXPLANATION
She argues that the discussion should aim to manage both the optimism surrounding AI’s benefits and the fears about its risks by adopting practical, policy‑driven solutions for real‑world implementation.
EVIDENCE
Rodrigues says “we need to balance both of those extremes, hope and concern, and go ahead in a pragmatic, policy-first way to prepare for the real world” [71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy‑lab approaches and collaborative sandbox models that balance innovation with risk management are described in S19 and S20.
MAJOR DISCUSSION POINT
Balanced, policy‑first AI governance
Argument 3
Governance of large language models is dominated by the US and China, limiting Global South sovereignty; the Global South must develop its own AI governance frameworks
EXPLANATION
She points out that the concentration of foundational AI models in a few major powers raises questions about who sets the rules for developing economies, implying the need for independent AI sovereignty.
EVIDENCE
Rodrigues notes that “large language models are concentrated in the US, in China now with DeepSeek… Who sets the AI rules for the Global South?” [163-166].
MAJOR DISCUSSION POINT
AI governance concentration and Global South sovereignty
Agreements
Agreement Points
AI is a powerful catalyst for development in emerging and developing economies, offering productivity gains in agriculture, health, finance and other sectors.
Speakers: Johannes Zutt, Michael Kremer, Ufuk Akcigit
AI can be a game‑changer for emerging markets, boosting productivity across sectors such as agriculture, health, and finance (Johannes Zutt)
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
AI offers sizable opportunities for developing economies, but only if the broader business environment supports entrepreneurship (Ufuk Akcigit)
All three speakers highlight that AI can substantially improve productivity and outcomes in key sectors for low-income countries, from farm-level pest detection and health diagnostics to AI-driven weather forecasts, provided the right ecosystem exists [6-10][12-20][133-155][104-106].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with analyses of AI-driven economic growth and social development in the literature, such as the discussion of AI as a catalyst for productivity in agriculture, health and finance in S43, and the distinction between foundational and application layers for emerging economies in S53 and S54.
Effective AI deployment requires coordinated public‑private collaboration and institutional mechanisms such as advisory support, sandboxes and staged funding.
Speakers: Johannes Zutt, Michael Kremer, Iqbal Dhaliwal
Governments and multilateral bodies must create standards, sandboxes, and advisory support to enable safe AI deployment (Johannes Zutt)
Coordination between public agencies and private innovators is crucial to build a vibrant AI ecosystem (Michael Kremer)
Demand‑driven pilots that demonstrate clear impact (e.g., education tools) are key for scaling successful AI projects (Iqbal Dhaliwal)
Johannes stresses standards and sandbox environments, Michael describes the Development Innovation Ventures fund that bridges public and private effort, and Iqbal points to demand-driven pilots as a practical way to align stakeholders, all underscoring the need for coordinated ecosystems [45-53][49-53][266-271][250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Public‑private partnerships and sandbox approaches are repeatedly advocated as essential for responsible AI rollout, e.g., the sandboxes for data governance framework in S58, the IGF discussion on sandbox value in S60, and broader PPP recommendations in S63 and S64. Risk‑based policy guidance for experimentation versus oversight is outlined in S52.
AI also brings significant risks, notably job displacement for low‑skill workers, infrastructure gaps, and the danger of slow public‑sector adoption that could leave the poor behind.
Speakers: Johannes Zutt, Ufuk Akcigit, Michael Kremer, Iqbal Dhaliwal
AI may displace entry‑level, routine jobs and many developing countries lack basic infrastructure (electricity, connectivity, literacy) to harness it (Johannes Zutt)
The biggest risk is the labor market – entry‑level jobs may disappear faster than workers can adapt (Ufuk Akcigit)
Slow public‑sector adoption and procurement lock‑ins risk leaving the poor without access to AI benefits (Michael Kremer)
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
Johannes warns of automation-driven job loss and weak electricity/internet, Ufuk flags rapid erosion of entry-level employment, Michael cautions that governments may lag in adopting AI for public goods, and Iqbal notes that lack of trust and system alignment can stall deployment, all pointing to structural and adoption risks [22-31][405-412][263-277][309-318].
POLICY CONTEXT (KNOWLEDGE BASE)
Labour‑market disruption concerns are central to policy debates, as seen in the call for responsible governance to minimise displacement in S44, the Davos 2026 panel linking AI impact to broader economic shocks in S47, and empirical studies showing limited displacement in the US (S48) versus sharper job losses in the UK (S49).
Localized, low‑cost “small AI” solutions are essential for contexts with limited connectivity, data and skill resources.
Speakers: Johannes Zutt, Iqbal Dhaliwal, Michael Kremer
The World Bank focuses on “small AI”: affordable, locally relevant solutions, and provides advisory and sandbox environments (Johannes Zutt)
Small‑scale AI applications free up teachers’ and health workers’ time, improving education and healthcare outcomes (Iqbal Dhaliwal)
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
Johannes describes the Bank’s “small AI” strategy, Iqbal gives concrete classroom examples where AI automates routine tasks, and Michael cites AI weather services that operate at scale with minimal user cost, collectively emphasizing the importance of affordable, context-aware AI [34-38][39-46][241-247][133-155].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for affordable, context‑adapted AI is highlighted in discussions of democratizing compute and data infrastructure in S62, and the cost barriers to training large models that may limit diffusion to low‑resource settings in S54.
Robust governance, regulation and standards are needed to prevent misuse of AI, especially when targeting individuals for poverty reduction or public services.
Speakers: Johannes Zutt, Michael Kremer, Anu Bradford, Iqbal Dhaliwal
AI enables precise, individual‑level poverty targeting, but robust governance is essential to prevent abuse (Johannes Zutt)
Slow public‑sector adoption and procurement lock‑ins risk leaving the poor without access to AI benefits (Michael Kremer)
The Global South needs AI sovereignty and rights‑based regulatory frameworks, drawing lessons from the EU while tailoring to local priorities (Anu Bradford)
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
Johannes highlights the need for governance when using AI for poverty targeting, Michael stresses that without proper public mechanisms the poor may be excluded, Anu advocates rights-based, locally adapted regulation, and Iqbal points to trust and system-fit as governance challenges, all converging on the necessity of strong, context-sensitive oversight [414-416][263-277][167-176][309-318].
POLICY CONTEXT (KNOWLEDGE BASE)
Governance imperatives are underscored in S45’s emphasis on human responsibility, the call for guardrails in S46, the caution against full AI control over sensitive services in S56, and the broader regulatory philosophy debate captured in S59.
The foundational AI layer is highly concentrated in a few countries and firms, creating systemic risks for competition and innovation.
Speakers: Ufuk Akcigit, Anu Bradford, Michael Kremer
The foundational AI layer is compute‑, data‑, and talent‑intensive, creating high concentration and threatening labor markets (Ufuk Akcigit)
Full AI sovereignty is limited by global supply‑chain interdependence; techno‑nationalism must be balanced against cooperation (Anu Bradford)
Big AI can, through computational power, generate new knowledge that can help us do things much better, but small AI translation is also important (Michael Kremer)
Ufuk describes the concentration risk of the compute-heavy foundational layer, Anu maps the global supply chain that ties AI capability to a few powerhouses, and Michael notes the distinction between “big AI” and “small AI” while acknowledging the dominance of large players, together underscoring systemic concentration concerns [93-100][357-371][55-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Concentration risks are documented in the UNCTAD report on the digital economy in S50, the India workforce case noting data and compute concentration in S51, and the foundational vs. application layer analysis in S53.
Evidence‑based evaluation and early indicators are crucial for guiding AI policy and scaling interventions.
Speakers: Michael Kremer, Ufuk Akcigit, Iqbal Dhaliwal
Innovation funds such as Development Innovation Ventures provide staged, evidence‑based financing to pilot, test, and scale AI interventions (Michael Kremer)
AI‑driven creative destruction is a key engine of long‑term growth, but early indicators must be monitored to manage risks (Ufuk Akcigit)
Demand‑driven pilots that demonstrate clear impact (e.g., education tools) are key for scaling successful AI projects (Iqbal Dhaliwal)
Michael outlines a tiered, evidence-based funding model, Ufuk stresses monitoring early signals of creative destruction, and Iqbal emphasizes demand-driven pilots with measurable outcomes, all advocating data-driven decision-making for AI deployment [266-271][101-103][250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks stress evidence‑based assessment, as reflected in the responsible AI governance recommendations in S44 and the discussion of the mismatch between public fear and measured impact in S55.
Similar Viewpoints
Both stress that public institutions need to provide frameworks, standards and collaborative platforms (e.g., sandboxes, advisory roles) to ensure AI solutions are safe, trustworthy and scalable [45-53][266-271].
Speakers: Johannes Zutt, Michael Kremer
Governments and multilateral bodies must create standards, sandboxes, and advisory support to enable safe AI deployment (Johannes Zutt)
Coordination between public agencies and private innovators is crucial to build a vibrant AI ecosystem (Michael Kremer)
Both recognize that AI capabilities are concentrated in a few geopolitical actors due to the underlying hardware and talent ecosystem, limiting true sovereignty for any single country [93-100][357-371].
Speakers: Ufuk Akcigit, Anu Bradford
The foundational AI layer is compute‑, data‑, and talent‑intensive, creating high concentration (Ufuk Akcigit)
Full AI sovereignty is limited by global supply‑chain interdependence; techno‑nationalism must be balanced against cooperation (Anu Bradford)
Both highlight institutional and trust barriers that can prevent AI benefits from reaching intended users, emphasizing the need for careful implementation and procurement design [309-318][263-277].
Speakers: Iqbal Dhaliwal, Michael Kremer
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
Slow public‑sector adoption and procurement lock‑ins risk leaving the poor without access to AI benefits (Michael Kremer)
Unexpected Consensus
Agreement that AI innovation does not disperse equally and may widen development gaps, despite varied professional backgrounds.
Speakers: Jeanette Rodrigues, Johannes Zutt, Michael Kremer, Ufuk Akcigit, Anu Bradford
AI innovation does not disperse equally, risking a widening of development gaps (Jeanette Rodrigues)
AI may displace entry‑level jobs and many developing countries lack basic infrastructure (Johannes Zutt)
Slow public‑sector adoption risks leaving the poor without access (Michael Kremer)
The foundational AI layer is concentration‑prone, threatening competition (Ufuk Akcigit)
Full AI sovereignty is limited by global supply‑chain interdependence (Anu Bradford)
While each speaker approached the issue from different angles (technology diffusion, job loss, public-sector lag, concentration, supply-chain dependence), they all converge on the unexpected consensus that without deliberate policy action AI could exacerbate existing inequalities rather than close them [61][22-31][263-277][93-100][357-371].
POLICY CONTEXT (KNOWLEDGE BASE)
Unequal diffusion and widening gaps are highlighted in the UNCTAD analysis of concentration (S50), the India case on competitive imbalances (S51), and the optimism‑caution balance in S54.
Overall Assessment

The panel shows strong convergence on the dual nature of AI: it is a powerful development tool but also poses systemic risks related to job displacement, concentration, and governance. Consensus is high on the need for public‑private collaboration, localized small‑AI solutions, robust rights‑based regulation, and evidence‑based scaling mechanisms.

Consensus is high across speakers on both opportunities and risks, implying that policy agendas should simultaneously promote inclusive, small‑AI pilots, strengthen institutional frameworks, and address concentration and governance challenges to ensure AI narrows rather than widens development gaps.

Differences
Different Viewpoints
What is the primary mechanism to harness AI for development in emerging economies?
Speakers: Johannes Zutt, Ufuk Akcigit, Michael Kremer
Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited (Johannes Zutt)
AI offers sizable opportunities for developing economies, but only if the broader business environment supports entrepreneurship (Ufuk Akcigit)
Innovation funds such as Development Innovation Ventures provide staged, evidence‑based financing to pilot, test, and scale AI interventions (Michael Kremer)
Johannes advocates focusing on “small AI” and advisory work with sandboxes to enable low-cost, locally relevant solutions [34-38][49-53]. Ufuk stresses that without a supportive business climate (competition, clear regulation, access to finance), AI cannot deliver its promises [104-115]. Michael proposes a staged public-private financing mechanism (Development Innovation Ventures) to de-risk pilots and scale successful projects [266-271]. The three speakers agree AI can help development but disagree on whether the main lever should be technical advisory/small-AI deployment, broader entrepreneurship-friendly reforms, or dedicated innovation funding.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders debate mechanisms such as sandboxes, advisory bodies and PPPs; the sandboxes for data governance approach (S58), the IGF discussion on sandbox value (S60), and the emphasis on public‑private collaboration in S63 and S64 provide relevant context.
Who should set AI rules for the Global South and how feasible is AI sovereignty?
Speakers: Jeanette Rodrigues, Anu Bradford
Who sets the AI rules for the Global South? Is there even the possibility for the Global South to talk about sovereignty? (Jeanette Rodrigues)
Full AI sovereignty is limited by global supply‑chain interdependence; techno‑nationalism must be balanced against cooperation (Anu Bradford)
Jeanette raises the question of who will define AI governance for developing countries, implying that a sovereign framework could be established [163-166]. Anu counters that true sovereignty is unrealistic because AI development depends on a globally linked supply chain for compute, semiconductors, equipment, and raw materials, making any unilateral rule-making vulnerable to geopolitical pressures [357-371]. The disagreement centers on the feasibility and locus of AI governance for the Global South.
POLICY CONTEXT (KNOWLEDGE BASE)
The feasibility of AI sovereignty and rule‑setting is examined in the UNCTAD report on digital economy concentration (S50), the data‑sovereignty perspective in S62, and broader calls for human‑centric responsibility in AI governance in S45.
Regulatory approach: sandbox/advisory versus rights‑based comprehensive regulation
Speakers: Johannes Zutt, Anu Bradford
Governments and multilateral bodies must create standards, sandboxes, and advisory support to enable safe AI deployment (Johannes Zutt)
The Global South needs AI sovereignty and rights‑based regulatory frameworks, drawing lessons from the EU while tailoring to local priorities (Anu Bradford)
Johannes emphasizes a pragmatic, standards-based sandbox model that the World Bank can help build, focusing on technical interoperability and low-cost applications [33-36][49-53]. Anu argues for a broader, rights-driven regulatory regime modeled on the EU AI Act, adapted to local contexts, suggesting a more comprehensive legal framework [167-176][177-178]. The two approaches differ in scope and emphasis: technical facilitation versus rights-based regulation.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between soft, sandbox‑based regulation and rights‑based comprehensive frameworks is a recurring theme, seen in the sandboxes advocacy in S58, the regulatory philosophy debate summarized in S59, and the contrasting views on guardrails versus flexible experimentation in S45 and S46.
Severity and handling of AI‑induced labor market disruptions
Speakers: Johannes Zutt, Ufuk Akcigit
One of them is there will be some job losses, particularly sort of entry‑level jobs that are very much knowledge or document‑based, performing relatively rote work that can be taken over by automation (Johannes Zutt)
The biggest risk, I think, is definitely the labor market. If there was a dial where I could slow down the adaptation and give time to the labor market to catch up, that’s my biggest worry (Ufuk Akcigit)
Johannes notes that AI will eliminate certain low-skill, document-based positions, citing reductions in entry-level job ads within the World Bank Group [22-24]. Ufuk stresses that the broader labor market, especially entry-level coding jobs that fuel urban tech hubs, could be rapidly displaced, and he wishes for a mechanism to slow AI adoption to protect workers [405-411]. While both acknowledge job displacement, they differ on the perceived magnitude and the policy response needed.
POLICY CONTEXT (KNOWLEDGE BASE)
Assessments of labour‑market impact range from cautionary (S44) to empirical findings of limited displacement (S48) and sharper job losses in specific economies (S49), with broader contextualisation of AI’s impact alongside geopolitical factors in S47.
Unexpected Differences
Real‑world impact of AI pilots versus optimism about AI as a public good
Speakers: Iqbal Dhaliwal, Michael Kremer
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
Iqbal highlights that even technically superior AI tools may be rejected if users distrust them or if existing workflows are not adapted, citing failed GST fraud-detection scaling due to loss of human discretion [309-318]. Michael, by contrast, presents concrete success stories (AI weather forecasts reaching 38 million Indian farmers and improving planting decisions) as evidence that AI public goods can be rapidly adopted and generate impact [133-155]. The tension between skepticism about adoption barriers and optimism about transformative outcomes was not anticipated given the overall pro-AI tone of the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
The gap between pilot optimism and measured outcomes is discussed in the mismatch analysis of public fear versus actual impact in S55, complemented by empirical stability evidence in S48 and the UK job‑loss case in S49.
Overall Assessment

The panel broadly concurs that AI holds significant promise for emerging economies, especially through small‑scale, locally relevant applications and public‑good services. Nevertheless, substantial disagreement exists on the optimal policy lever—whether to prioritize technical sandboxes, rights‑based regulation, business‑environment reforms, or staged innovation financing. Additional contention surrounds the feasibility of AI sovereignty for the Global South and the severity of labor‑market disruptions. These divergences suggest that coordinated, multi‑track strategies will be required to balance rapid AI deployment with governance, regulatory, and labor considerations.

Consensus is moderate to high: while the overarching goal of inclusive AI‑driven development is shared, the panelists propose markedly different pathways and express contrasting views on governance feasibility and labor impacts, indicating that consensus on concrete policy actions remains limited.

Partial Agreements
All speakers agree that AI has the potential to drive inclusive development and improve livelihoods in emerging economies. However, they diverge on the means: Johannes stresses small, advisory‑driven deployments; Ufuk calls for macro‑level business‑environment reforms; Michael highlights public‑good applications supported by innovation funding; Anu focuses on rights‑based regulatory sovereignty. The shared goal of inclusive AI‑driven growth is clear, but the pathways differ.
Speakers: Johannes Zutt, Ufuk Akcigit, Michael Kremer, Anu Bradford
AI can be a game‑changer for emerging markets, boosting productivity across sectors such as agriculture, health, and finance (Johannes Zutt)
AI offers sizable opportunities for developing economies, but only if the broader business environment supports entrepreneurship (Ufuk Akcigit)
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and low‑income users (Michael Kremer)
The Global South needs AI sovereignty and rights‑based regulatory frameworks, drawing lessons from the EU while tailoring to local priorities (Anu Bradford)
All three agree that AI can improve service delivery in education, health, and agriculture. Yet, Michael stresses the need for government procurement and public‑sector adoption to reach the poor, Iqbal stresses demand‑driven pilots and trust building, while Johannes focuses on technical complementarity and skill expansion. The consensus on benefit is matched with differing implementation strategies.
Speakers: Michael Kremer, Iqbal Dhaliwal, Johannes Zutt
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
Small‑scale AI applications free up teachers’ and health workers’ time, improving education and healthcare outcomes (Iqbal Dhaliwal)
AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide (Johannes Zutt)
Takeaways
Key takeaways
AI can be a transformative development tool for emerging economies, improving productivity in agriculture, health, education, finance and public‑good services such as weather forecasting.
‘Small AI’ – affordable, locally relevant, low‑connectivity solutions – is critical for contexts with limited electricity, internet, literacy and data infrastructure.
The foundational AI layer (large models, compute, data, talent) is highly concentrated, creating risks of market power, talent drain from academia, and uneven innovation dynamics.
AI will displace certain entry‑level, routine jobs, especially in document‑based tasks, creating labor‑market pressures in developing countries.
Effective AI deployment requires both public‑sector actions (standards, sandboxes, advisory support, procurement reforms) and private‑sector innovation; coordination between them is essential.
Regulatory frameworks must balance innovation‑friendliness with protection of rights and public‑good outcomes; the EU rights‑based approach offers lessons but must be adapted locally.
Multilateral institutions can accelerate impact by providing evidence‑based funding pipelines (pilot → test → scale) and by helping governments create enabling environments.
Trust gaps and misalignment between technology performance and real‑world systems can cause pilots to fail or be rejected; implementation design matters as much as the model itself.
Geopolitical interdependence in the AI supply chain limits full AI sovereignty; techno‑nationalism should be tempered with cooperation.
Resolutions and action items
World Bank to continue focusing on ‘small AI’ projects, providing advisory support, standards and sandbox environments for local innovators.
Development Innovation Ventures (and similar funds) to be used as a staged, evidence‑based financing mechanism for AI pilots, rigorous testing, and scaling.
Governments (e.g., India) to develop policies that ensure AI accessibility offline, promote digital identity integration, and support private‑sector AI startups.
Encourage creation of demand‑driven AI pilots that demonstrably free up public‑sector worker time (e.g., education grading tools, health diagnostics).
Maintain and strengthen university research capacity to keep the foundational AI layer contestable and to mitigate concentration of talent in industry.
Design procurement processes that require continuous evaluation, A/B testing, and open‑access provisions to avoid lock‑ins.
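The record does not prescribe any particular evaluation method for the A/B testing mentioned in the procurement recommendation. As a minimal, illustrative sketch (all figures and function names hypothetical), a procurement evaluation could compare an incumbent tool against a challenger with a two-proportion z-test:

```python
# Illustrative sketch only: compares the success rates of two AI tools
# piloted side by side, using the standard two-proportion z-test
# (normal approximation).
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for H0: the two underlying success rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical pilot: challenger resolves 560 of 1000 cases, incumbent 500 of 1000
z = two_proportion_z(560, 1000, 500, 1000)
print(round(z, 2))  # |z| > 1.96 suggests a real difference at the 5% level
```

A procurement rule built on such a test (rather than one-off demos) would let agencies renew or switch vendors based on continuously collected outcome data, which is the lock-in safeguard the panel calls for.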
Unresolved issues
How to design AI regulations that protect rights without stifling innovation, especially in the Global South where regulatory capacity is limited.
Concrete strategies to mitigate labor displacement for entry‑level workers and to reskill affected populations.
Mechanisms to ensure that AI‑driven public‑good tools (e.g., weather forecasts, tax fraud detection) are adopted and scaled by governments despite political or bureaucratic resistance.
Addressing the growing market concentration and ensuring competitive dynamics in both the application and foundational AI layers.
How to manage geopolitical risks and supply‑chain dependencies (semiconductors, rare earths) while pursuing AI sovereignty.
Establishing trustworthy governance frameworks that balance power between AI developers, regulators, and end‑users.
Suggested compromises
Adopt a rights‑based regulatory approach (EU model) but tailor it to local priorities, avoiding a binary choice between regulation and innovation.
Combine public‑facing standards and sandbox creation with private‑sector‑driven application development to leverage strengths of both sectors.
Support both small‑AI deployments for immediate impact and invest in foundational AI research to keep the ecosystem competitive.
Implement staged funding (pilot → test → scale) that allows early learning while still providing pathways for rapid scaling of successful solutions.
Encourage incremental AI adoption in government services (e.g., AI‑assisted call centers, health record linking) to improve productivity without abrupt labor shocks.
Thought Provoking Comments
AI can be a game‑changer for emerging markets, but the real bottleneck is basic infrastructure – reliable electricity, internet, literacy – leading us to focus on "small AI" that works on limited devices and connectivity.
He reframed the AI debate from a high‑tech, global perspective to the concrete, on‑the‑ground constraints of developing economies, introducing the practical concept of “small AI” as a solution.
Shifted the conversation from abstract potential to actionable implementation, prompting later speakers (e.g., Michael and Iqbal) to discuss concrete pilots (weather forecasts, teacher‑assistant tools) and setting the stage for the policy‑focused parts of the panel.
Speaker: Johannes Zutt
The AI ecosystem has two layers: a foundational layer (compute‑heavy, data‑heavy, talent‑heavy) that is highly concentrated, and an application layer where entry barriers are low. The concentration at the foundation will spill over to the application side.
He introduced a clear analytical framework that separates structural constraints from market opportunities, highlighting why concentration matters for creative destruction.
Prompted a deeper discussion on market concentration and the role of incumbents, leading Anu and Iqbal to raise concerns about regulatory sovereignty and the risk of power consolidation in AI deployment.
Speaker: Ufuk Akcigit
Public‑good AI applications (e.g., AI‑driven weather forecasts for 38 million Indian farmers) will not attract private investment, so governments and multilateral development banks must step in.
He linked a tangible AI use case to a market‑failure argument, showing how public sector action can unlock benefits for the poor that the private sector ignores.
Steered the dialogue toward funding mechanisms and the need for evidence‑based innovation funds, which Michael later expanded on, and reinforced the panel’s focus on policy levers rather than just technology.
Speaker: Michael Kremer
Regulation is not a false choice to innovation; the EU’s rights‑driven AI Act shows that a balanced, rights‑focused framework can coexist with vibrant AI ecosystems, and India can adapt such lessons without copying them wholesale.
She challenged the common narrative that regulation stifles innovation, offering a nuanced view that regulation can be both protective and enabling.
Opened the floor to a debate on AI sovereignty and regulatory design, influencing subsequent remarks from Ufuk about concentration and from Iqbal about power dynamics in scaling AI tools.
Speaker: Anu Bradford
AI should free up frontline workers’ time (e.g., teachers no longer correcting spelling) rather than replace them; demand‑driven pilots that improve productivity for teachers, nurses, and Anganwadi workers are the most successful.
He provided a concrete, demand‑side example that reframed AI from a job‑loss narrative to a productivity‑enhancement story, emphasizing user‑centric design.
Reinforced the “small AI” theme, influenced Michael’s later points about evidence‑based pilots, and set up his later caution about trust and policy mismatches.
Speaker: Iqbal Dhaliwal
Even when AI tools work better than humans in the lab (e.g., diagnostic AI), they can fail in the field because users aren’t trained and because the surrounding system isn’t adapted; scaling the GST fraud‑detection model was blocked because it removed human discretion—a power issue.
He highlighted the gap between technical performance and real‑world adoption, introducing the concept of “power” as a barrier to scaling AI, which is rarely discussed in tech‑centric panels.
Created a turning point that moved the discussion from optimism about AI capabilities to a critical examination of institutional inertia and political economy, prompting Anu and Ufuk to discuss sovereignty and concentration.
Speaker: Iqbal Dhaliwal
The biggest systemic risk is not super‑intelligent AI but that humanity becomes dumber by outsourcing thinking to models; education must teach students to augment, not replace, their cognition.
She shifted the risk narrative from external threats to internal cognitive decline, a perspective that broadens the ethical debate beyond regulation and market concentration.
Prompted Michael to echo concerns about public‑sector adoption and the need for careful procurement, and reinforced the panel’s concluding focus on long‑term societal impacts.
Speaker: Anu Bradford
AI could enable poverty‑targeted interventions at the individual level for the first time, but we risk failing to build robust governance to prevent abuses.
He distilled the panel’s core promise—precision poverty alleviation—while simultaneously flagging the governance challenge, encapsulating the central tension of the discussion.
Served as a concise summary that reinforced earlier points about governance, prompting the final round of reflections on both upside (health, education) and downside (concentration, governance failures).
Speaker: Johannes Zutt (rapid‑fire round)
Overall Assessment

The discussion was shaped by a series of pivot points that moved the conversation from high‑level optimism about AI’s potential to a nuanced, policy‑oriented debate about infrastructure, market concentration, regulatory design, and power dynamics. Johannes’s “small AI” framing grounded the talk in practical constraints, Ufuk’s two‑layer model introduced a structural lens that exposed concentration risks, and Iqbal’s field examples highlighted the human and institutional frictions that can derail even technically superior solutions. Anu’s challenge to the regulation‑vs‑innovation myth and her warning about cognitive atrophy broadened the scope to societal values. Michael’s public‑good examples and funding‑mechanism suggestions offered concrete pathways for action. Together, these comments redirected the panel from abstract hype to concrete, actionable insights, ensuring that the dialogue remained balanced between opportunity and risk.

Follow-up Questions
What early indicators can signal how AI will affect creative destruction in both advanced and emerging economies?
Understanding early signals is crucial for anticipating economic impacts and guiding policy.
Speaker: Ufuk Akcigit
Why have emerging economies historically lacked entrepreneurship and dynamism, and what business‑environment reforms are needed for AI to foster entrepreneurship?
Identifying structural barriers is essential to ensure AI translates into genuine economic dynamism.
Speaker: Ufuk Akcigit
How can developing countries overcome basic infrastructure constraints such as unreliable electricity, weak internet backbones, and low literacy to enable effective AI use?
Infrastructure gaps limit AI adoption; research is needed on feasible solutions for low‑resource settings.
Speaker: Johannes Zutt
What models and strategies are most effective for scaling ‘small AI’ applications in environments with limited connectivity and data?
Small AI can deliver high impact in low‑resource contexts, but evidence on best practices for scaling is limited.
Speaker: Johannes Zutt
What evaluation frameworks and metrics should policymakers use to assess AI interventions, ensuring continuous improvement and real‑world impact?
Robust evaluation is needed to determine which AI solutions deliver measurable benefits and to guide scaling decisions.
Speaker: Michael Kremer
Which regulatory approaches can balance innovation with protection of fundamental rights in the Global South, drawing lessons from the EU AI Act?
Adapting rights‑driven regulation could help emerging economies harness AI while safeguarding citizens.
Speaker: Anu Bradford
How can India tailor the EU’s rights‑driven AI regulatory model to its own priorities without stifling innovation?
India needs a customized regulatory framework that supports local innovation while ensuring safeguards.
Speaker: Anu Bradford
What are the implications of high concentration in the foundational AI layer (compute, data, talent) for market competition and downstream application development?
Concentration could limit access for smaller players and affect the diffusion of AI benefits.
Speaker: Ufuk Akcigit
How does the migration of AI talent from academia to industry affect open science, knowledge spillovers, and overall innovation?
Shifts toward proprietary research may reduce collaborative advances and public‑good outcomes.
Speaker: Ufuk Akcigit
What governance mechanisms are needed to ensure AI tools (e.g., GST fraud detection) are adopted at scale without undermining necessary human discretion?
Understanding the balance between algorithmic decision‑making and human oversight is vital for effective policy implementation.
Speaker: Iqbal Dhaliwal
How can trust in AI technologies be built among frontline workers such as doctors and teachers to ensure effective adoption and impact?
Lack of trust can negate technical performance; research on training, incentives, and system integration is required.
Speaker: Iqbal Dhaliwal
What procurement designs (e.g., evidence‑based innovation funds, A/B testing, open‑access requirements) can accelerate AI adoption in the public sector while ensuring competition and quality?
Innovative procurement can overcome market failures and speed up delivery of AI‑enabled public services.
Speaker: Michael Kremer
How can vulnerabilities in the AI supply chain (semiconductors, equipment, raw materials) be mitigated to reduce geopolitical weaponization risks?
Supply‑chain resilience is critical for sustainable AI development across nations.
Speaker: Anu Bradford
What policies can mitigate labor‑market risks from AI‑driven automation of entry‑level jobs in emerging economies, ensuring a just transition for workers?
Protecting vulnerable workers while fostering AI adoption requires targeted labor and social policies.
Speaker: Ufuk Akcigit
How can AI be leveraged to target poverty reduction at the individual level while establishing robust governance to prevent abuses?
Precision poverty targeting promises gains but raises governance and ethical concerns that need further study.
Speaker: Johannes Zutt
What detailed sector‑specific data are needed to better understand AI’s impact on jobs and productivity in South Asia?
More granular evidence would inform policy design and investment priorities.
Speaker: Johannes Zutt

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, chaired by Sh. Subodh Sachan, examined the widening AI talent gap in India and how the nation can build a next-generation AI ecosystem to sustain rapid industry transformation [3-5][8][9]. Participants agreed that AI is reshaping business and the workforce, requiring people not only to use AI tools but to coexist with an evolving AI ecosystem [5][6-7]. Dr. Sarabjot emphasized that future AI practitioners must be critical thinkers who question AI outputs, recognize its deficiencies, and be willing to take risks [39-46]. Dr. Devinder Singh added that next-gen AI talent should possess strong AI expertise, research ability, cross-sector experience, and awareness of regulatory frameworks [49-54]. Professor Jawar Singh highlighted the necessity of solid grounding in hardware and computer-science fundamentals to translate algorithms into efficient, secure implementations [57-60]. Professor Alok Pandey described the ideal AI professional as “T-shaped,” combining deep domain knowledge with fluency in AI techniques and skills in red-teaming and containment [63-66]. Kunal Gupta and Vikas Srivastava stressed that beyond technical mastery, ethical judgment and real-world problem-solving are essential components of next-gen AI talent [84-87][71-78]. Dr. Sarabjot further noted that assessing talent should focus on problem-solving ability, self-directed learning, and creativity rather than mere familiarity with libraries [103-110]. Both Kunal and Vikas pointed out that many candidates struggle to define problems correctly, a skill they consider half of any solution [205-208]. The panel identified a systemic lag in academic curricula, calling for de-bureaucratised, fast-moving curricula and greater autonomy for institutions, especially state technical colleges, to keep pace with AI advances [239-244][283-291]. In the telecom sector, Dr. Devinder explained that 6G will embed AI in every component, requiring engineers to master machine learning and adhere to emerging AI standards, with the government already publishing relevant guidelines [124-138][140-147]. Addressing algorithmic bias and robustness, Dr. Devinder outlined quantitative fairness and bias indices that can be used by developers, regulators, and deployers to ensure trustworthy AI systems [320-327]. The discussion concluded that closing the AI talent gap demands coordinated action among academia, industry, and policymakers to foster critical thinking, interdisciplinary expertise, ethical awareness, and agile education reforms, thereby enabling India to leverage AI as an infrastructure of intelligence [71-78][239-244][315].


Keypoints

Major discussion points


The AI talent gap and the competencies needed for “next-gen” AI professionals – Panelists repeatedly stressed that future AI talent must go beyond tool-level knowledge. Critical thinking, the ability to question AI outputs, risk-taking, and a solid grounding in both algorithms and hardware are essential [39-46][57-60][63-66][83-86]. A “T-shaped” profile (deep domain expertise plus fluency in AI and red-team/containment skills) was highlighted as the ideal model [63-66]. Vikas added that technical mastery, ethical judgment and real-world problem-solving are the three pillars of next-gen talent [83-86].


Curriculum reform and the need for faster, more flexible education pathways – Several speakers pointed out that current curricula are too slow and bureaucratic, especially in state technical institutions, and that universities must gain autonomy to create rapid, industry-aligned programs [252-256][279-282][283-293]. Alok called for “de-bureaucratised” curricula, more faculty training, and stronger industry-university MOUs to keep pace with AI’s velocity [237-250]. Sarabjot described a “passion-project” model that pairs students with industry mentors to fill gaps that formal curricula cannot [259-276].


AI as foundational infrastructure that will reshape sectors, especially telecom and vernacular services – Kunal described next-gen AI as an “infrastructure of intelligence” that multiplies human reasoning and enables vernacular-language interfaces [71-78]. Devinder explained how 6G will embed AI in every network component, shifting from static planning to self-learning, edge-distributed decision-making [124-136]. Subodh linked this to the need for sector-specific skill sets and standards for AI-enabled operations [148-152].


Skilling initiatives and ecosystem building – Subodh outlined the STPI “Skill-Up” programme, the creation of regional training hubs, and a partner network of 18 training organisations to deliver AI up-skilling at scale [9-12]. Vikas noted the use of AI-driven adaptive learning tools that assess individual skill gaps and recommend personalised pathways [315-316]. Kunal emphasized a data-driven “employability intelligence layer” that matches market-demanded jobs with candidate capabilities [212-219].


Standards, bias, fairness and evaluation of AI systems and talent – The discussion moved to the importance of regulatory standards, bias indices, and fairness metrics for trustworthy AI deployment [320-327]. Devinder highlighted existing telecom AI standards and the need for engineers to follow them [142-147]. The panel agreed that evaluating talent must include problem-definition ability, ethical awareness, and the capacity to work within these standards [204-209][320-327].


Overall purpose / goal of the discussion


The session was convened to diagnose the current AI talent gap in India, explore what “next-gen” AI expertise should look like, and chart concrete actions, through curriculum reform, industry-academia partnerships, national skilling programmes, and standards development, to build a robust AI ecosystem that can drive economic transformation and inclusive societal impact.


Overall tone and its evolution


The conversation began with a forward-looking, optimistic tone, celebrating AI’s transformative potential and the launch of new skilling initiatives [3-5][9-12]. As the panel delved into specific challenges, the tone shifted to urgent and problem-focused, highlighting gaps in education, industry readiness, and regulatory frameworks [204-209][252-256][320-327]. Throughout, the discourse remained collaborative and respectful, with speakers building on each other’s points and repeatedly calling for joint action across government, academia, and industry.


Speakers

Professor Dr. Jawar Singh – Role/Title: Professor, Indian Institute of Technology Patna; Founder, Kuturna Labs.


Areas of Expertise: AI algorithms, hardware implementation, neuromorphic/brain-inspired computing, AI product development, hardware security. [S1]


Dr. Sarabjot Singh Anand – Role/Title: Co-founder & Chief Data Scientist, TATRAS; Co-founder, Sabudh Foundation.


Areas of Expertise: Artificial intelligence, data science, talent development, social-impact AI solutions, AI education and mentorship. [S2]


Vikash Srivastava – Role/Title: Chief Growth Strategist, Vincis IT Services Private Limited.


Areas of Expertise: Enterprise consulting, cloud workforce upskilling, AI talent reskilling, industry-focused AI training. [S3]


Kunal Gupta – Role/Title: Managing Director, Mount Talent Consulting.


Areas of Expertise: Talent advisory, recruitment, AI-driven skill-gap analysis, job-search portal operations, industry-academia talent alignment. [S5]


Professor Dr. Alok Pandey – Role/Title: Professor and Dean, UP Jindal University.


Areas of Expertise: Finance, governance, higher education, fintech, AI applications, curriculum development, academic-industry collaboration. [S7]


Dr. Devinder Singh – Role/Title: Deputy Director General, Telecom Engineering Centre (TEC), Department of Telecommunications.


Areas of Expertise: Telecom standards formalisation, AI integration in telecommunications, 6G technology, AI governance and regulatory frameworks. [S9]


Audience – Role/Title: General participants (e.g., Vikram Tripathi, village resident and aspiring panchayat candidate).


Areas of Expertise: Not specified.


Sh. Subodh Sachan – Role/Title: Director, STPI Headquarters; Moderator of the session.


Areas of Expertise: AI ecosystem development, skilling initiatives, industry-government liaison, national AI policy implementation. [S14]


Additional speakers:


None identified beyond the listed speakers.


Full session report: comprehensive analysis and detailed insights

The session opened with Sh. Subodh Sachan framing the discussion around a widening talent gap in India’s AI ecosystem. He argued that the present era is “the most exciting time in the industry because AI is transforming everything” – from business models to the workforce – and that success now depends on the ability to “co-exist with the whole AI ecosystem together” (see [3-5]). He highlighted his 27 years of experience across industry and government and noted that “there is always a gap in opportunity” that must be addressed to sustain the transformative potential of AI (see [8]). To that end, he announced the STPI “Skill-Up” programme, which will soon launch multiple regional training hubs and currently partners with 18 training organisations across India, with plans to expand the network further (see [9-12]).


Sachan then introduced the panel: Professor Dr Alok Pandey (Dean, UP Jindal University) with three decades of experience in finance, governance and fintech; Professor Dr Jawar Singh (IIT Patna, founder of Kuturna Labs); Dr Devinder Singh (Deputy Director General, Department of Telecommunications) with expertise in standards; Dr Sarabjot Singh Anand (co-founder of TATRAS and Sabudh Foundation); Vikas Srivastava (Chief Growth Strategist, Vincis IT Services) and Kunal Gupta (MD, Mount Talent Consulting) (see [13-31]).


The first substantive contribution came from Dr Sarabjot, who distinguished two camps in the AI workforce: those who generate the next wave of AI and those who use AI to become more efficient. He stressed that “critical thinkers … are more important than any technology as such” because “there is a great move towards outsourcing your thinking to AI” and warned against treating AI as an oracle, urging practitioners to recognise its deficiencies, question outputs and be willing to take risks (see [39-46]).


Dr Devinder Singh added a complementary perspective, asserting that next-gen AI talent must possess “strong expertise in AI” together with the ability to solve real-world problems, adapt to new technologies, conduct research across sectors and remain aware of regulatory frameworks governing AI (see [49-54]).


Professor Dr Jawar Singh broadened the technical scope by insisting that future AI professionals need a “solid grounding of hardware, solid grounding of computer science, or even the engineering domain” to map algorithms efficiently onto hardware and to ensure security, noting the stark energy gap between a typical NVIDIA processor (500-700 W) and the human brain (≈20 W) and calling for neuromorphic, brain-inspired computing as a research priority (see [57-60][153-164]).


Professor Dr Alok Pandey then presented a concise talent model: a “T-shaped” profile that combines deep domain specialisation, fluency in AI software and hardware, and the capability to perform red-team testing and containment of AI systems (see [63-67]). This model was echoed by Vikas Srivastava, who identified three pillars for next-gen talent – technical mastery, ethical judgement, and real-world problem-solving – and argued that professionals must know where AI fits and where it does not (see [83-87]).


Kunal Gupta described AI as “infrastructure of intelligence” that multiplies human reasoning, creativity and values, highlighted its potential to democratise access through vernacular-language interfaces, and drew a parallel with how TikTok expanded content creation beyond English-speaking users (see [71-78]). He also cited AI-enabled hydroponics as an example of how AI can create high-yield, pesticide-free agriculture without dependence on weather (see [220-227]).


When asked how fresh AI talent should be evaluated, Dr Sarabjot outlined a practical rubric centred on problem-solving ability, self-directed learning, curiosity and creativity, rather than mere familiarity with libraries. He illustrated this with TATRAS’s “passion-project” approach, where students work on real customer problems under mentorship from industry experts, thereby gaining domain insight that “doesn’t matter what technology you use” as long as the solution solves the problem (see [103-110][259-276]).


Vikas Srivastava argued that conventional classroom training, which focuses heavily on theory, must be supplemented with (i) applied problem-solving on real data sets and (ii) production-level exposure that moves models from notebooks to secure, scalable systems; he indicated that a third, as-yet-unspecified layer would also be needed (see [303-311]).


Both Kunal and Vikas highlighted the role of AI-driven tools in scaling upskilling. Kunal described an “employability intelligence layer” that uses AI to perform scientific gap analysis, match market-demanded jobs with candidate profiles and recommend personalised learning pathways, while Vikas noted that adaptive learning platforms now assess individual skill gaps and suggest targeted upskilling, thereby improving employability outcomes (see [212-224][315-316]).


Curriculum reform emerged as a recurrent theme. Sachan linked the discussion to the National Education Policy, noting that it already grants greater autonomy for faster curriculum evolution (see [252-256]). Alok called for “de-bureaucratised” curricula, more autonomy for Institutions of Eminence, and stronger industry-university MOUs to increase faculty capacity and keep pace with AI’s velocity (see [237-250]). Jawar added that centrally funded technical institutes (CFTIs) already enjoy the freedom to launch new courses without delay, suggesting that the bottleneck lies primarily in state-run institutions, which suffer from lengthy syllabus-revision cycles and limited multilingual support (see [279-293]).


The panel also examined sector-specific AI integration. Dr Devinder Singh explained that 6G will embed AI in every network component, shifting from static planning to self-learning, edge-distributed decision-making, and that engineers will need to master machine-learning and adhere to emerging AI standards, many of which have already been drafted by the telecom standards body (see [124-138][140-147]). This vision aligns with the broader view that AI is becoming foundational infrastructure, requiring efficient hardware, standards and a mindset of curiosity and creativity to harness its potential (see [90-94][170-176][153-164]).
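The shift Dr Devinder describes, from alarm-driven repair to self-learning fault prediction at the edge, can be illustrated with a deliberately minimal sketch. The class name, window size and threshold below are illustrative assumptions, not drawn from any 6G standard: a node keeps rolling statistics of a link metric and flags samples that deviate sharply, instead of waiting for a fault alarm.

```python
from collections import deque
import statistics


class LinkFaultPredictor:
    """Toy edge-side anomaly detector: flags a metric sample that deviates
    more than `k` standard deviations from a rolling window, standing in
    for the self-learning fault prediction a 6G node might run instead of
    waiting for an alarm to be raised."""

    def __init__(self, window=20, k=3.0):
        self.window = deque(maxlen=window)  # recent samples of the metric
        self.k = k                          # deviation threshold

    def observe(self, sample):
        """Ingest one metric sample; return True if it looks like an
        impending fault, False otherwise."""
        anomalous = False
        if len(self.window) >= 5:  # need some history before judging
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window) or 1e-9  # avoid div-by-zero
            anomalous = abs(sample - mean) > self.k * stdev
        self.window.append(sample)
        return anomalous


# Feed it steady link latency (~10 ms), then a spike: the spike is flagged.
predictor = LinkFaultPredictor()
for _ in range(20):
    predictor.observe(10.0)   # normal samples are not flagged
print(predictor.observe(100.0))  # sudden spike is flagged as a likely fault
```

A real deployment would of course use learned models and standardised fault taxonomies; the point here is only the architectural change the panel describes, with detection and reaction happening locally rather than at a central NOC.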


Ethical considerations were foregrounded throughout. Alok warned that every AI product must undergo red-team testing and containment, even suggesting that a technology should be “killed” if it behaves undesirably, and he referred to Mustafa Suleyman’s book The Coming Wave, cautioning that without robust safety and security mechanisms AI could become a “wave that drowns us” (see [186-188]). Devinder introduced quantitative bias and fairness indices (0-1 scale) and robustness metrics that can be used by developers, regulators and deployers to ensure trustworthy AI, emphasising that different applications tolerate different levels of bias (see [320-328]). Subodh reinforced the need for standards on fairness and robustness, noting that such guidelines are already publicly available (see [166-168][332]).
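The session does not specify which 0-1 indices TEC defines. Purely as an illustration of the shape of such a metric, the sketch below computes the demographic parity difference, one widely used fairness index; the function name and the choice of metric are assumptions, not taken from the TEC standards.

```python
def demographic_parity_difference(outcomes, groups):
    """One common fairness index on a 0-1 scale: the absolute gap in
    positive-outcome rates between the most and least favoured groups.
    0.0 means all groups receive positive outcomes at identical rates;
    1.0 means maximal disparity."""
    counts = {}  # group -> (positives, total)
    for y, g in zip(outcomes, groups):
        pos, n = counts.get(g, (0, 0))
        counts[g] = (pos + (1 if y else 0), n + 1)
    rates = [pos / n for pos, n in counts.values()]
    return max(rates) - min(rates)


# Group "a" is approved 75% of the time, group "b" only 25%:
gap = demographic_parity_difference(
    [1, 1, 0, 1, 0, 0, 1, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(gap)  # 0.5 — a large disparity on the 0-1 scale
```

As Devinder notes, the tolerable value of such an index is application-specific: a content recommender and a loan-approval system would set very different thresholds on the same number.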


The discussion concluded with a set of agreed-upon actions. The STPI “Skill-Up” programme will roll out regional hubs and expand its partner ecosystem beyond the current 18 trainers (see [9-12]). Academia is urged to de-bureaucratise curricula, grant greater autonomy to institutions of eminence, and develop large-scale faculty development programmes and industry-university MOUs (see [237-250][283-292]). The panel advocated broader adoption of AI-driven assessment tools to personalise learning pathways and improve employability (see [215-224][315-316]). Finally, industry mentors will be paired with students on passion projects that address social impact, thereby bridging the gap between theoretical knowledge and practical problem-solving (see [259-276]).


Several issues remained unresolved. An audience member asked for concrete AI tools suitable for district-panchayat governance and about the role of CSR funding, to which the panel did not give a concrete answer (see [319]). Detailed roadmaps for implementing AI standards in 6G, mechanisms for uniformly upgrading faculty across thousands of state institutions, and operational guidelines for continuous monitoring of bias, fairness and robustness indices were also left open (see [124-138][283-293][320-328]).


The discussion repeatedly emphasized four themes for closing India’s AI talent gap: (1) a T-shaped, interdisciplinary skill set that blends deep domain expertise, AI fluency, critical thinking, risk-taking and ethical judgement; (2) agile, autonomous curriculum reform supported by the NEP and de-bureaucratised processes; (3) sector-specific standards and hardware-aware training, especially for emerging 6G telecom networks; and (4) the deployment of AI-driven assessment and adaptive learning platforms at scale. These converging views underscore the need for coordinated action among government, academia and industry to transform AI from a set of tools into a national infrastructure of intelligence that can drive inclusive economic growth and social impact.


Session transcript: complete transcript of the session
Sh. Subodh Sachan

Where do we see a talent gap? What is the requirement in terms of growing this whole ecosystem? Because when we talk about today, this is the most exciting time in the industry, because AI is transforming everything. AI is transforming the way businesses are being conducted. AI is transforming the whole workforce also, because it’s not about what you are able to do, but about co-existing with the whole AI ecosystem together. So my name is Subodh and I’m Director of STPI headquarters. I’ve been part of the industry, I’ve been part of the government for almost 27 years. And being in the space of technology, working closely within the startup ecosystem and within academia, there’s always a gap in opportunity which we have witnessed.

And that’s why this particular topic today is very, very close to my heart, in terms of how we ensure the industry moves forward and how we ensure that AI as a technology can bring transformative changes overall. So I am happy to say, very briefly, that today’s discussion aligns very closely with the national efforts. I am sure, when you talk about the IndiaAI overall theme, some of you have witnessed already that there is a lot of activity around skilling: there is already a 10 lakh AI skilling drive which has been initiated, and there is already a Skill India Digital programme happening, a new version of Skill India altogether. Within STPI we have focused on a programme called STPI Skill-Up, and I am happy to announce here that we are going to soon start multiple regional hubs for training, ensuring that training across technologies can happen. We have been joined by a lot of our training partners; the current training partner ecosystem is around 18 training partners across India, and three of them are in fact here with us today. As we move forward, we’ll add more such training partners and collaborators.

We are calling them partners and collaborators because the aim and the objective are all aligned within the ecosystem of skilling up, right? STPI Skill-Up becomes that particular programme. Let me introduce our speakers; I’m not taking much time. It’s my privilege to introduce my first speaker, Professor Dr. Alok Pandey, a professor and dean at UP Jindal University, a very senior academic leader with almost three decades of experience focused across finance, governance, higher education, and multiple implementations within the financial technology space. He also comes with a great perspective on AI. So let me request Professor Dr. Alok Pandey to come on stage. Please welcome Professor Pandey with a big round of applause.

A limited audience, but ensure that your applause covers the whole hall also. I’d also like to introduce and welcome Professor Dr. Jawar Singh. Professor Dr. Jawar Singh is a professor from the Indian Institute of Technology Patna, and he is also the founder of Kuturna Labs. Just as we were chatting, he briefly told me about his successful exit, so he is not just a professor who is teaching but is also practising the same in the form of implementing his own ideas. We are proud to have you, Dr. Jawar Singh; please welcome him on the dais. Let me also introduce Dr. Devinder Singh, Deputy Director General of TEC, under the Department of Telecommunications in India. Dr.

Devinder Singh has spent multiple years in the standards formalization ecosystem, because, you understand, the telecom space especially is governed by standards, and these standards are very critical: the interoperable ecosystem can only work if each and every device, each and every node, can be standardized and has to be standardized, right? So Dr. Devinder Singh represents the government from the Department of Telecommunications. Let me welcome, with a warm applause from the audience, Dr. Devinder Singh on the dais, please. I’m also honored to be joined by Dr. Sarabjot. Dr. Sarabjot Singh Anand is a co-founder and chief data scientist of TATRAS, and also the co-founder of Sabudh Foundation.

I have known, you know, Sarabjot Singh for almost, if I’m not wrong, seven, eight years now. And I’ve seen his passion in the space of AI. It’s not just about what he wants to achieve through his, you know, TATRAS Data, but also about how, and I think his work in the space of growing AI talent is well recognized in some regions, especially in the region of Punjab, right? So, Dr. Sarabjot, thank you for being here; I request and welcome you, a pioneer of data science, on the dais. A big round of applause for him. He also has roots in academia at Warwick and Ulster. He has a very global perspective in this particular space altogether.

Let me introduce our next two speakers, or two panelists, on this agenda today. Vikas Srivastava is the Chief Growth Strategist of Vincis IT Services Private Limited. Vincis is one of our technology training collaborators and a partner of the STPI Skill-Up programme. Vikas has almost 16-plus years in enterprise consulting and cloud workforce upskilling, and I think he has a great perspective to share on what the real reskilling requirement is today within the whole ecosystem of the AI workforce. So, with a big round of applause, please welcome Vikas Srivastava. Last but not the least, let us also give a warm welcome to Kunal Gupta, Managing Director of Mount Talent Consulting. He has been doing talent advisory, he runs his own job search portal, he works very closely with industry, and he has a clear perspective on what the industry requirement is and where the gap is. So, with a round of applause, Kunal, welcome on the dais as well. Thank you, everyone. Let me probably switch my place as well, so it will be easier for us to start the whole discussion. Hello. Yes, so let me quickly start, and I will probably start from my immediate left. Dr.

Sarabjot, when we talk about next-gen AI as a space, next-gen AI as a whole, from the talent perspective, from the opportunity perspective, what is your perspective? Briefly, we’ll touch upon each one of you on defining next-gen AI, so that the audience understands very clearly what next-gen AI really means. So over to you.

Dr. Sarabjot Singh Anand

So to me, there are two camps here, right? One is the people who want to generate the next wave of AI, and then, of course, there are the ones that have to use AI to be more efficient in their jobs. Now, for both of them, I think what is very, very important is that they have to be critical thinkers more than any technology as such, because there is, you know, a great move towards outsourcing your thinking to AI, and that’s a problem. We need to recognize that AI is not perfect. We need to recognize that there are certain deficiencies in it, and therefore we have to question what we get from that AI. And if we can get people who can critically think about the problem they are trying to solve and then take risks, I think risk-taking is going to be another very, very important aspect, along with having a foundational understanding of what is possible today with AI and what is not possible today with AI.

Because if we don’t recognize the deficiencies and start to regard AI as an oracle that always tells us the truth, we are going to get into trouble. So these are very, very important aspects apart from of course technology. Thank you.

Sh. Subodh Sachan

To Devinder Singh: your perspective on next-gen AI technology, very briefly.

Dr. Devinder Singh

Hello. Next-gen AI: I feel he should have strong expertise in AI, and he should have skills to solve real-world problems also. And he should adapt to new technologies also. He should be able to work in research. He should be able to work in different sectors also. And above all, I feel he should be aware of the regulations in the sector and in AI also. Thank you.

Sh. Subodh Sachan

Thank you. Yes.

Professor Dr. Jawar Singh

Yeah, hello. So to me, actually, the next-gen AI talent should not only be aware of the AI algorithms, but basically should be able to make customer-facing products or solutions. And they should understand not only the algorithms, but the way those algorithms are mapped onto the hardware. To me, a solid grounding of hardware, a solid grounding of computer science, or even the engineering domain, is a must, actually. Thank you.

Sh. Subodh Sachan

Yes. Professor Alok, sorry for my mistake in pronouncing your name wrong. Yes.

Professor Dr. Alok Pandey

Thank you. I think the next-gen AI is largely a T-shaped thing. You need to be domain specialists, deep domain specialists. You need to be fluent in AI skills, whatever software, hardware, etc. you are looking at. And then you should be able to understand red teaming and containment. So, if you have these three, then probably we will be able to solve most of the problems we face in India today.

Sh. Subodh Sachan

Please, Kunal.

Kunal Gupta

I think your question is very important: what do I understand, or what do we understand, by next-gen AI? You know, next-gen AI is infrastructure, an infrastructure for intelligence. Like you currently have this infrastructure wherein we are able to express our views and they go out to the world, the next generation of AI is like this: infrastructure meant to multiply our intelligence, our reasoning, our research, our values, our creativity, our judgments, and what the future holds for us. You know, we are going to see a new wave of new materials, and for a very long time we haven’t seen any major materials coming, apart from the basic alloys that we have been using, and the process changes which are going to come about in the next generation with the use of next-generation AI and the generation of models. You know, we talk about many things about differentiation in society, from the digital divide to this new-age AI divide, but it could also at the same time help us reach an inclusive society in general with vernacular languages, multiplying and extrapolating the reach of what a common man can do. Earlier they were dependent on languages like English, but with the expansion of next-gen AI platforms and tools come local vernacular languages, wherein you can speak and give instructions to the computer in Hindi, in your local languages, and get access to data and knowledge.

Like I said, you know, you could just build anything. We have seen this with a tool called TikTok, you know, a tool which started about 10-12 years back. And it created a wave of influencers: what was otherwise a language or a platform meant only for the English-speaking and the literate, you know, went on to the masses. So I think next-gen AI, like I said, is just, in one word, an infrastructure of intelligence, multiplying our ability to think and, you know, make judgments in the future as well. Thank you.

Sh. Subodh Sachan

Very well said. It is the infrastructure-level intelligence which can be, which has to be, created, and which defines next-gen AI. And carrying forward the same thought, I’ll ask Vikas to share his opening remarks on next-gen AI.

Vikash Srivastava

Thank you. So I think most of the important aspects have been covered by the panel. What I wanted to add is that, for me, next-gen talent combines three important things. First is technical mastery. Second is ethical judgment. And third is real-world problem-solving capabilities. So we need people who understand, as I said, you know, people should know where AI fits and where AI doesn’t, right? So I think this is the most important thing which I wanted to add. Thank you.

Sh. Subodh Sachan

I think for the audience, it is important to understand that when we talk about next-gen AI, next-gen AI talent and the next-gen AI talent gap, we got a clear perspective: right from critical thinking, going to the level of not just, you know, opening up the layers of AI, but from the perspective that one has to start thinking about the new ways and new layers in which AI technology is having an impact. Whether it is, you know, the new materials, whether it is the infrastructure intelligence again which we talked about, or whether it is the foundational knowledge and the foundational, you know, algorithms which we talked about.

The next-gen AI talent gaps exist everywhere. And accordingly, you know, I will ask Sarabjotji now to talk on something specific: from your perspective of both TATRAS and Sabudh Foundation, you have seen the whole AI evolution, you have seen the gaps which have been there, and you have tried to fill the gaps already. So my question to you is, when you talk about the evaluation of fresh AI talent, what is your approach? Because that approach will lead us, you know, in terms of ensuring how this whole space will grow, right. So, your opening remarks on that here.

Dr. Sarabjot Singh Anand

Sure, thank you. So, you know, when we look at talent today, what we assess is their problem-solving skills. We look at, you know, how keen they are on learning themselves: have they taken control of their own agency in learning for the future? Because what’s happening today, and I’ve seen this over the years, right, is that a lot of students, because they want to get a job, are focused on learning libraries. You know, even in 2018, when we started Sabudh Foundation because we found there was a huge gap in AI skilling here in India, we found that till we got them to program a neural network they felt they weren’t doing anything, right? And now, of course, it’s LLMs; everybody wants to learn LangChain, and that’s about it. But they have to understand the foundations. If the foundations are weak, we are going to do interesting things but are not going to do amazing things. And so the focus has to be on building a strong foundation, increasing their curiosity in terms of what they are doing, and getting them to think about how they can be creative in the solutions that they are engineering for their customers.

Now, in TATRAS, we work with startups in the US and develop their AI for them. Now, to do that, somebody mentioned domain being very important. And what we are constantly training our folks to do is understand the problem from the customer’s perspective, right? It’s not just about algorithms. When you create a solution, a successful solution is going to be one that solves the problem; it doesn’t matter what technology you use. And that is a key differentiation between the training that we provide and what is available otherwise in terms of just skilling on libraries. Right.

Sh. Subodh Sachan

Thank you. And I think, you know, for all the people sitting out here, the most important part, as Sarabjotji said, and probably any one of you can add also, whenever you feel like: curiosity is one part, right? Because curiosity, to our human mind, adds that element of learning, and when the curiosity is there, there comes creativity. And once you have this curiosity combined with creativity, then only can you understand the customer problem, if I am not wrong, and understand the customer ecosystem. It’s not just about the customer ecosystem from the perspective where you make money: when you talk about social impact, even the people who are getting benefited from the technology might not be directly paying you, but you are creating a great amount of social value out there, so it becomes important from their perspective. And when we combine these three and map that with AI, which is such a powerful technology right now, I think the solutions which you see outside are just a few examples of what wonders really can be created when you bring these three elements together, right? So, in similar lines, I will ask Dr.

Devinder Singhji, because he comes from a background in the telecom space, and today we talk about AI-native telecom infrastructure. When we talk about native AI, networks need not just be AI-ready; they also need to bring AI into their own operations. When I say AI readiness, it's all about the scale, the kind of compute, technology and infrastructure they need to create. But how do they approach this from a standards perspective? Because you see the future; you are looking into 6G as a standard. What is the role of AI in standard creation, and what is the role of the technology when standards are getting defined?

So, from that angle, your thoughts on the same.

Dr. Devinder Singh

Present telecom engineers are very strong in networking. But the future network, 6G, will be much more dependent on AI. In the present technology, 5G, AI is an add-on; in 6G, each and every component has AI built in. At present, planning is done in a static way: components are selected and then the effect is seen. But 6G will be self-learning, so engineers will be required to know machine learning. In present systems, whenever there is some fault, an alarm is generated and an engineer is supposed to take corrective action. But in 6G, AI will predict what kind of fault can come and take corrective action on its own.

And at present, most decisions are taken at the central level only. But in 6G, intelligence will also be distributed at the edge, so decisions will be taken at a distributed level, and the engineer must be able to plan everything considering that distributed decision-making. As far as standards are concerned, standards for 6G are being finalized. They are not final, but it is already decided that each and every component will have AI. In addition, you were talking about standards: in TEC, the Telecom Engineering Centre, we have already published some standards on AI. So telecom engineers should also be aware of the standards they are supposed to follow when implementing AI.

At present, telecom engineers are using AI, but in the future they will design, operate and use it. Most decisions will be taken by AI; the human will only supervise. Thank you.

Sh. Subodh Sachan

Thank you. I think when we talk about the telecom space, there are two or three critical things of interest to the audience. As Devinderji talked about, there is agentic AI, and agentic AI taking care of the operations part. From a skill perspective, it is important that when you look into the agentic AI ecosystem, you need to go deeper into the particular technology or sector, because each industry brings its own new challenges and problems, even for the agentic AI ecosystem. And when you talk about infrastructure readiness, the telecom sector is one such sector, and I'll come back to you on how the telecom sector is creating robustness.

Right. Next, I think I'll touch upon, Professor Jawar, the layer of the hardware and below, right?

Because when we talk about AI and the six layers, as has been spoken about across the spectrum, the most promising and most important layer is not just applications but also the hardware, which is powering up the whole of AI, its need and speed.

Professor Dr. Jawar Singh

All right. So this is quite interesting, because very rarely do people talk about how those algorithms, those AI models, actually run. Honestly, these models are very expensive, expensive not so much in terms of cost but in terms of power, though the cost is obviously associated with it. If I take a simple example, a very basic NVIDIA processor consumes around 500 to 700 watts. But our human brain also has a very beautiful processor: it can compute a lot, and it consumes just 20 watts of power.

So you can see there is a huge gap between the processing capabilities of the processors that we have and the cognitive processing that we all have. That gap needs to be bridged, and there is a lot of research going on in this domain, usually called neuromorphic computing or brain-inspired computing, where these algorithms can be mapped onto hardware in an efficient manner. Another example I can give is DeepSeek: when it first surfaced in the market, NVIDIA's stock slumped quite severely, and the reason was that their model was quite efficient. That was the only thing; people thought, okay, we may do the same thing in a more efficient way. So we need people who think not only from the algorithm perspective but also from the hardware perspective. I will add one more term here, hardware security, because AI can be weaponized and can also be used for neutralization purposes.
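The power gap the professor describes can be made concrete with a rough back-of-the-envelope calculation; the figures are those quoted in the discussion, and this is only an illustrative sketch, not a benchmark:

```python
# Rough comparison of the power draw quoted in the discussion:
# a basic data-centre GPU at ~500-700 W versus the human brain at ~20 W.
gpu_watts = 700       # upper figure quoted for a basic NVIDIA processor
brain_watts = 20      # commonly cited estimate for the human brain

ratio = gpu_watts / brain_watts
print(f"GPU draws roughly {ratio:.0f}x the power of the brain")  # 35x

# Energy over a full day of continuous operation, in kWh
hours = 24
print(f"GPU: {gpu_watts * hours / 1000:.1f} kWh/day, "
      f"brain: {brain_watts * hours / 1000:.2f} kWh/day")
```

Even at the lower 500 W figure the ratio is 25x, which is why energy efficiency is a central argument for neuromorphic research.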

So the hardware plays a very crucial role. Algorithms are okay, but we need people who understand not just the algorithm but all the way down to the hardware implementation: how your implementation is secure, trusted and reliable. Thank you.

Sh. Subodh Sachan

You touched upon the element of security, and I think when AI comes into play, it is not just the security of the algorithms; the important issues that have popped up are the bias, robustness and fairness of the algorithms. I am sure some of you will talk about that as a gap from a talent perspective: how do we skill and reskill people who can fill these gaps across the layers, so that more of them come into the ecosystem. With that, I will ask Dr. Alok. When you look at it from a university and academia perspective, today we talk about population-scale AI implementation, and with that, the whole critical-thinking part also needs to change; hence academia needs to gear up to create that kind of curiosity and learning from the students' perspective. So what is your take on the gap you see between industry and academia?

How are you gearing up your students for AI at that scale?

Professor Dr. Alok Pandey

We have large contracts. Say I have to do an M&A valuation, an M&A due diligence, and the Competition Commission has asked me whether I should go for this merger or acquisition or not. I am a lawyer; how do I do it? I can use generative AI software for that. I can do money-laundering prevention, not just spam prevention, which is done very effectively by Airtel and others; I can do money-laundering calculations and identify which transactions work in which manner through generative AI. So we need to develop products along these lines. The second thing I would say is the safety and security of these products: how are we going to look at safe usage?

Now, there's a term which has come up: the coming wave. Mustafa Suleyman has written a book, The Coming Wave, and everybody uses the phrase. This is the coming wave, and the wave is going to drown all of us if safety and security are not there. Every young person who uses AI needs to understand what red teaming and containment are: I should kill my technology if it doesn't work in my favor. And finally, domain integration: AI healthcare, AI law, AI education, AI finance. All these levels basically need to be understood by educational institutions. If you ask me another question, how do we scale it up, then I'll of course speak on that later.

But I'll tell you that we really need to work out an infrastructure. We need to work on academic strength. We need a large number of trained faculty members. We need MOUs with Western countries; the major companies are based in China, Europe or America, and their universities are generating a lot of trained resources. Indian universities need to move forward in that direction. So I basically feel that, yes, there is a huge gap today, and we need to address these gaps through viable funding not just from government but also from industry.

Sh. Subodh Sachan

I tend to partly agree. The length and breadth of the AI ecosystem has changed dramatically everywhere. But when we look at Indian talent, and I strongly believe this because I have been in this industry for very long now, the kind of energy we are seeing in this arena hall, and probably in the conferences happening on the other side, shows a huge talent that has come up, and they are generating very good solutions. Today, from a solution-producer perspective, India is not just doing something at the application layer or the agentic layer; it is also looking into the foundational layer, and that's why we have seen the launch of the recent LLMs. When we look at the launch of the Sarvam LLM or other LLMs, it's very clear that people see there is a lot of data available in our country, and this data needs to be understood. As you talked about, take just one sector, law and justice: there is one company here, Lex Leges, and I was interacting with them yesterday. They have understood this problem. They have built not a general large language model but a model for this particular domain, approached with the same LLM mentality, and hence they have been able to address exactly what you described as the problem. It works on Indian data, on Indian contract law, on Indian past judgments. That is the need of the hour. If you have an entrepreneurial mindset, whether as part of the audience sitting here or as somebody who wants to enter this field as a workforce, you need to clearly understand each domain; whether it is health, as you talked about, or law and justice, each has its own set of challenges and problems.
And when there are challenges and problems, with the right skill and right talent you can approach them and be very successful, and we see this as a leapfrog moment for each one of you, from the industry perspective also. Taking that thought forward, I'll ask Kunal. Kunal, you have been talking about skill gaps, especially working with students and working professionals. From your platform's perspective, from your own job portal and job placements, what is your take on the abilities most commonly required in the short term? What are the skills they need to fill in, whether it is learning to coexist with the LLMs out there, learning to code, working towards AGI, as Professor Alok talked about, or creating new machine learning algorithms? What do you see as the typical short-term problem that talent has to be ready for?

Kunal Gupta

I see the problem as threefold when it comes to the skill gap, specifically in a dynamic country like India, where we are living across many generations: a generation which is far ahead in the future, and a generation which is far behind in terms of development, capability and education. The biggest skill gap that I see right now is application, and more importantly, how we define a problem. Out of whatever ecosystem we have built, we have this mentality of simply copying others: this is the trend, so we need to go for this trend, without really understanding how to define the problem first.

Defining the problem is about 50% of the solution in itself, in any sphere the person is in. As Dr. Saab said, there are different usages in different fields, whether it is healthcare, law, or agriculture for that matter, which caters to such a huge population in our country. Who would have thought of hydroponics producing such huge results without soil, with no dependency on weather, where you can create your own environment for absolutely green vegetation in the best of atmospheres, without germs and without the application of pesticides? So, coming back to your question, the skill gap is again going to be defined sector by sector.

Different sectors are going to have different gaps at different application levels. When it comes to industry, what is the solution that we are providing as a company? Our aim is to develop an employability intelligence layer: we define the skill gap based on what kind of jobs are coming from the market and, given those jobs, what the current skill set of the candidates is, and we run a scientific gap analysis of what is missing. It's not just that we have a very nice applicant tracking system; we run a recommendation algorithm using a lot of AI. In my view, the aim is not to exclude or reject people using AI when it comes to skill-gap analysis; the aim is to show them that this is what is missing and this is what has to be developed. It is not rocket science that can't be developed: you take a course of one, three or six months, or you do it while working in another job role on the way to your ambitious job role. It takes time; nothing is built in a day. But a bigger gap that I see right now is going to come as huge pressure on educational setups, whether at the university level or the school level. We keep talking about the fact that India's syllabus is not aligned to the industry; it is about 20 years old. We don't update our syllabuses; it takes six committees five or seven years to come up with new curriculums.

By the time the new curriculum is implemented, it has already become obsolete from an industry perspective. I think the speed of growth of AI we have seen in the last six months is going to put maximum pressure on the policymakers of the country, specifically those catering to core foundational education, higher skilling education and, more importantly, the industry skilling needed to ensure people understand why productivity is needed and how it is achieved. Students need to understand that industry needs output: we need production, we need results. Industry cannot always bridge the gap. And in India, I'll have to say it, whether it is MSMEs or large industries, everybody has done their bit in terms of scaling those whom they select.

And today, the success that we see in conferences like these comes from a lot of people who have grown through the industry and whom industry has scaled up. Colleges will need to ensure that AI education is for all: application of AI for all. Output will increase; output will lead to more analysis of how to improve it. Production leads to more research; research leads to more efficiency in production. It's a loop. Currently, application is going to lead to higher output, in my view: higher output in terms of what an engineer can do in eight hours of work.

What a company can do in terms of per-year revenues, what models can do, what processes can do. And based on that, it's a continuously running cycle. We can't sit in a relaxed manner right now, specifically in this changing world.

Professor Dr. Alok Pandey

I'll just add to what Kunal has said. We need to de-bureaucratize education today to a great extent. In fact, we brought in this concept of the institution of eminence, and I'm happy that I'm part of an institution of eminence, where we can create our own curriculum. Curriculum velocity is so high that you can't simply command faculty members to teach a particular course, especially in technology, and especially when you're talking about integrating with a particular domain, where the faculty has to work with other specialists, identify something, and the needs change frequently. And it's not just that AI technologies have changed; the consumer and the user have started demanding change.

For example, if you look at crop insurance, the idea basically means that I should have satellite pictures and an understanding of whether a crop failed or not, and this is done best using AI. If I need to train my agriculture college students, who study in large agronomy institutions, I need quick delivery of the curriculum. Sadly, we don't have that; we don't have expertise in those areas. So if you de-bureaucratize the curriculum and allow more autonomy to institutions that are into technology, or at least technology applications, we'll have a much bigger national good at hand.

Sh. Subodh Sachan

And I think the start has already happened with the NEP, if I'm not wrong, right? The whole focus of the National Education Policy and the initiatives around it has been to give more autonomy and speed in defining the curriculum. So I tend to see this as a problem that existed, on which a lot of work has already happened now. I think, from your Warwick experience at a global level, you would have seen these changes there. Do you see this coming back to India at a similar speed?

Dr. Sarabjot Singh Anand

I don't, unfortunately. At Warwick, we actually have the Jaguar Land Rover research labs on campus, and we were interacting with them. Even 14 years ago, we were looking at tracking the cognitive load on a driver as they drove a vehicle, to understand whether we need to take some preventative action before they cause an accident. Now, of course, we are saying we don't need a driver. So times are changing very quickly. When we started Sabudh, we realized that the curriculum is falling short. Academics are not equipped to deal with the change that's happening. Even HR folks, when we look at it from an industry perspective, are not evolving quickly enough to evaluate candidates the right way.

So what we did in Sabudh was make the centerpiece of our training what we call a passion project: we are training students in AI, machine learning and technology, but we are getting them to think about how to solve a problem of social impact. And then we are giving them mentors, from Tatras and from other organizations that are actually creating AI solutions for the global north, as they say. So now the students are getting mentorship. And I think the key thing we are missing today, which is shocking for a country our size: we have companies with lots of technologists that have no choice but to keep up with technology innovation.

At the same time, these people have to be trained to give back more. If we can get every person to be evaluated or valued based on how much they give back to others, then we can pair students with mentors in industry and get them the skills that no curriculum can give, because you really need problem-solving skills that exist outside of academia. Of course, academia has great depth, and therefore it has to be part of that. And so, as Subodhji was saying, we've got to bring academia and industry closer together and solve this problem. It's not going to happen from one side alone.

Sh. Subodh Sachan

No, thank you for sharing your thoughts. I'll just take, you know, Professor, sorry, Vikash first, and then you. Please go ahead.

Professor Dr. Jawar Singh

Specifically on this, from the curriculum point of view, I just want to make a small caveat related to curriculum updates: at least the centrally funded technical institutions are not a problem at all. They are free of all those restrictions. If I have to start a new course from the next semester, I'm free to run it. So such restrictions on curriculum updates, at least as far as I know, do not bind the CFTIs, and they are quite okay.

Professor Dr. Alok Pandey

It was not only about CFTIs, because India is 1.4 billion people, and the majority are in tier-2 cities. My basic concern is the state technical institutions. The talent which comes from state technological universities is the best talent, and these people need scholarships and multilingual support, and their teachers also need training. There is a very large layer of state institutions, because education is funded by both the centre and the states, and we are in a quagmire where new regulators are coming in, old regulators are falling away, and we need to identify how to do it. But my basic point was not the CFTIs; the centrally funded institutions are much better off.

But still, consider the amount of manpower you need for developing AGI-type systems. It remains to be seen, in just a matter of five years, how this hypothesis works and whether we are able to generate something in artificial general intelligence. I think all of us will have to contribute to this transformational change, from academia to industry to the policymakers. It becomes important to understand that haste is not required, but to develop the solutions, speed is required: speed in how solutions get developed by virtue of doing the right things.

Sh. Subodh Sachan

And I think, Vikash, you have seen, because you come from the AI learning space, the conventional way of doing AI education in the past and how that has changed today. Are we still looking at the conventional classroom mechanism for AI learning, or, as Sarab said, is it not about learning but about practicing while learning? What's your input on the same?

Vikash Srivastava

In my view, conventional or traditional trainings focus heavily on theory, mathematics and model architecture. Those foundations are important, but for industry readiness we require three additional layers. First is applied problem solving: learners must work on real data sets, focus on domain-specific knowledge and work with deployment scenarios. Second is production exposure: knowing how your model moves from your notebook environment to real, scalable, secure systems, and how production happens. And last is...

Sh. Subodh Sachan

So when we talk about classroom learning and learning the mathematics, how do you see new tools and technologies being used for training? For example, are there any examples you can quote where students are now able to go beyond the typical classroom learning? What other tools and technologies are they being exposed to, so that learning improves and the speed of learning becomes faster?

Vikash Srivastava

So basically, in our sector, we are utilizing AI to assess skill gaps. There are now tools which, based on a participant's profile, can assess the learning gaps and recommend adaptive learning, which eventually helps increase employability outcomes. That's how AI is helping today.
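The skill-gap analysis both speakers describe can be sketched in a few lines: compare a candidate's skills against the skills demanded by current job postings and surface what is missing, most-demanded first. All names and data here are illustrative assumptions, not any platform's actual API.

```python
# Hypothetical sketch of AI-assisted skill-gap analysis: rank the skills a
# candidate lacks by how often they appear in market job postings.
from collections import Counter

def skill_gap(candidate_skills, job_postings):
    """Return required skills the candidate lacks, ordered by market demand."""
    demand = Counter()
    for posting in job_postings:
        demand.update(posting)          # count how many postings want each skill
    missing = [s for s in demand if s not in candidate_skills]
    # Most-demanded first; alphabetical tie-break keeps the output stable.
    return sorted(missing, key=lambda s: (-demand[s], s))

candidate = {"python", "sql"}
postings = [
    {"python", "ml", "statistics"},
    {"python", "ml", "mlops"},
    {"sql", "ml"},
]
print(skill_gap(candidate, postings))   # ['ml', 'mlops', 'statistics']
```

A real system would layer recommendation models and learning-path data on top, but the core "gap, then ranked recommendation" loop is this simple.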

Sh. Subodh Sachan

Great. I think we are almost towards the end, but we have one more set of questions. Just to keep things interactive, does anybody in the audience want to ask a quick question? Can somebody please bring the mic to them? I wanted to go one more round of questions, but since the audience's time is also limited and I don't want you to get bored with what we are speaking, can anybody ask one or two questions?

Audience

Thank you. Hello everyone, namaskar. (Just quickly, you can speak in Hindi; please tell your name.) My name is Vikram Tripathi. I am from a village in Prayagraj, and the upcoming elections are the panchayat elections; I am going to participate in them. There is a district panchayat election, for a district panchayat member covering 25 villages. If I win the election, then in the first year, which are the three sectors where I should use the AI tools or software that are available? And secondly, is it possible that private companies, CSR funds... Thank you.

Dr. Devinder Singh

One bias index is produced depending upon metrics. For one bias, I can use a number of metrics; the results of all the metrics are clubbed to find the bias index for one particular parameter. Then, a system can have bias due to many things, and the different bias indices are clubbed to find one fairness index. The fairness index ranges from 0 to 1: if it is 1, the system is considered fair; if it is 0, it cannot be used. In practice, the fairness index will lie between 0 and 1. Then it also depends upon the user, how much fairness he wants in the system. If the system is used to suggest what song you would like to hear, then some bias may be accepted.

If a system is supposed to identify whether a soldier is an enemy or one of our own, then no bias can be accepted. So those metrics, or the framework we have suggested, can be used by the deployer and by developers alike: the engineers involved in developing those systems can test whether their models are fair or not. It can also be used by regulators: the regulator or the government may say that for such a sector the system should be tested and should have at least this much fairness. Similar to fairness, we have a standard for robustness as well, which can be used to check whether the system gives consistent results in different situations.
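The pipeline Dr. Singh describes, several metrics per bias parameter, clubbed into a bias index, then clubbed again into one fairness index in [0, 1], can be sketched as below. The TEC standard's actual combining formulas are not given in the discussion, so simple averaging is assumed here; the numbers and threshold names are illustrative.

```python
# Hedged sketch of a fairness-index pipeline: metrics -> per-parameter bias
# index -> single fairness index in [0, 1], where 1 means perfectly fair.
# Simple averaging is an assumption, not the TEC standard's formula.

def bias_index(metric_scores):
    """Club several metric results (each in [0,1], 0 = no bias) into one bias index."""
    return sum(metric_scores) / len(metric_scores)

def fairness_index(bias_indices):
    """Club per-parameter bias indices into a single fairness score in [0,1]."""
    return 1.0 - sum(bias_indices) / len(bias_indices)

# Per-parameter bias, each measured by multiple metrics (illustrative numbers)
gender_bias = bias_index([0.10, 0.20])
age_bias = bias_index([0.30, 0.10])

score = fairness_index([gender_bias, age_bias])
print(f"fairness index: {score:.2f}")

# The acceptable threshold depends on the deployment: a music recommender may
# tolerate some bias, a friend-or-foe identifier cannot (hypothetical values).
threshold = {"music_recommender": 0.6, "target_identification": 0.99}
print("ok for recommender:", score >= threshold["music_recommender"])
print("ok for targeting:", score >= threshold["target_identification"])
```

This matches the use cases he lists: deployers pick the threshold, developers test their models against it, and regulators can mandate a minimum fairness level per sector.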

Sh. Subodh Sachan

Great, and I am sure these standards are available in the public domain; they are not at draft stages.

Related Resources: knowledge base sources related to the discussion topics (18)
Factual Notes: claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Sh Subodh Sachan has 27 years of experience across industry and government and moderated the discussion.”

The knowledge base identifies Subodh Sachan as Director of SGPA headquarters with 27 years in industry and government and notes he moderated the panel discussion [S1].

Additional Context (medium)

“The STPI “Skill‑Up” programme partners with multiple training organisations across India and will launch several regional training hubs.”

S4 confirms the existence of the STPI Skill-Up programme and that it works with partners and collaborators; S93 adds that STPI already operates 70 centres (62 in tier-2/3 cities), which can serve as the regional hubs mentioned [S4] and [S93].

Additional Context (medium)

“STPI plans to expand the Skill‑Up network to reach a large number of Indians by 2030.”

S92 states that India has committed to skilling up 10 million people by 2030, providing quantitative context for the programme’s expansion goals [S92].

Confirmed (high)

“Critical thinking is more important than technology; practitioners must question AI outputs and avoid treating AI as an oracle.”

Both S101 and S102 stress the need to validate AI results and invest in critical-thinking skills, echoing the speaker’s warning about outsourcing thinking to AI [S101] and [S102].

Additional Context (medium)

“Future AI professionals need a solid grounding in hardware and should be aware of the large energy disparity between current GPUs (≈500‑700 W) and the human brain (≈20 W), prompting research into neuromorphic computing.”

S28 discusses AI-powered chips and the skills required for India’s next-gen workforce, highlighting hardware expertise and energy efficiency as key focus areas, which adds nuance to the speaker’s point [S28].

Confirmed (medium)

“Next‑gen AI talent must understand regulatory frameworks governing AI.”

S90 notes that AI is treated as critical infrastructure and emphasizes the need for capacities to articulate regulatory and standards issues, confirming the importance of regulatory awareness for talent [S90].

External Sources (104)
S1
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — – Dr. Sarabjot Singh Anand- Professor Dr. Jawar Singh – Professor Dr. Alok Pandey- Professor Dr. Jawar Singh
S2
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — – Dr. Sarabjot Singh Anand- Professor Dr. Alok Pandey – Dr. Sarabjot Singh Anand- Professor Dr. Alok Pandey- Vikash Sri…
S3
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — So, Dr. Sarabjot, thank you for being here. I request and welcome you on the dais of pioneer data science. A big round o…
S4
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — So, Dr. Sarabjot, thank you for being here. I request and welcome you on the dais of pioneer data science. A big round o…
S5
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — -Kunal Gupta- Managing Director of Mount Talent Consulting, runs talent advisory and job search portal, works closely wi…
S6
AI technology aims to detect emotional distress and depression sooner — A University of Auckland researcher is developing AI tools to identify early signs of depression in young men. The work fo…
S7
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — We are calling them partners and collaborators because the aim and the objective is all aligned within the ecosystem of …
S8
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — – Dr. Sarabjot Singh Anand- Professor Dr. Jawar Singh – Professor Dr. Alok Pandey- Professor Dr. Jawar Singh Professor…
S9
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — -Dr. Devinder Singh- Deputy Director General of TEC (Department of Telecommunications), expert in telecom standards form…
S10
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Jitendra Singh- Role/Title: Honorable Minister, Minister of State for Personnel, Minister of State for Personal Gri…
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S14
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — -Sh. Subodh Sachan- Director of SGPA headquarters, 27 years in industry and government, works in technology space and st…
S15
How AI Is Transforming Indias Workforce for Global Competitivene — -Pragya- (Role/title not specified, mentioned briefly at the beginning) -Sangeeta Gupta- Panel moderator (role/title no…
S16
Building the Workforce_ AI for Viksit Bharat 2047 — -Moderator- Role/Title: Event moderator, Area of expertise: Not specified -Shubhavi S. Radha Chauhan- Role/Title: Chair…
S17
From Innovation to Impact_ Bringing AI to the Public — Whilst maintaining an optimistic outlook, the discussion acknowledges important limitations and risks. Sharma emphasises…
S18
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — one of our keynote speakers, they said autonomous weapons are going to AI-based autonomous 
S19
Artificial intelligence as a driver of digital transformation in industries (HSE University) — The analysis offers a comprehensive examination of artificial intelligence (AI) and its impact on various sectors. One s…
S20
Designing Indias Digital Future AI at the Core 6G at the Edge — The panel discussion revealed that AI-driven applications will fundamentally change network traffic patterns, with uplin…
S21
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Another thing I would also request you to dwell on is that ultimately the AI is going to come on the handsets. So once A…
S22
Opening address of the co-chairs of the AI Governance Dialogue — Tomas Lamanauskas: Thank you, thank you very much Charlotte indeed, and thank you everyone coming here this morning to j…
S23
Brain-inspired networks boost AI performance and cut energy use — Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connect…
S24
Agenda item 5: Day 2 Morning session — Chile:Thank you very much, Chairman. I’d like to thank you and welcome all those present and wish us success or work in …
S25
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — “This will be powered by our Fujitsu Monaca chip, which is a two nanometer chip.”[28]. “In recent past, we’ve announced,…
S26
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — Critical thinking as essential human skill
S27
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future workforce needs different skills including critical thinking, judgment capabilities, and empathy when working wit…
S28
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — A critical million-person talent gap exists across the semiconductor ecosystem, spanning from field service engineers to…
S29
Connecting open code with policymakers to development | IGF 2023 WS #500 — The discussion also addresses the need for skilled individuals in government roles. It is argued that attracting talente…
S30
AI (and) education: Convergences between Chinese and European pedagogical practices — This observation prompted Hao Liu to share BIT’s flexible academic system and their consideration of competency-based ev…
S31
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — Economic | Development | Future of work Fink positions AI as having transitioned from a future concept to a present rea…
S32
Telecommunications infrastructure — Network operators increasingly rely on AI for a wide range of tasks, fromnetwork planning(e.g. using algorithms to ident…
S33
Artificial intelligence (AI) – UN Security Council — Furthermore, the discussions underscored the importance of establishing frameworks and infrastructures that support dist…
S34
Building the AI-Ready Future From Infrastructure to Skills — The emphasis on open ecosystems, linguistic diversity, human oversight, and broad adoption provides a framework balancin…
S35
UNESCO Recommendation on the ethics of artificial intelligence — 117. Member States should support collaboration agreements among governments, academic institutions,  vocational educati…
S36
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S37
Opening Ceremony — Bogdan-Martin emphasized the importance of establishing trustworthy technical standards to guide AI development and ensu…
S38
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S39
Artificial intelligence — Despite their technical nature – or rather because of that – standards have an important role to play in bridging techno…
S40
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — It was not only for CFTIs because India is 1.4 billion people, right, and majority of it are in tier 2. My basic proble…
S41
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — – S. Krishnan- Ashwini Vaishnaw- Rangesh Raghavan Focus should be on developing broad talent and understanding rather t…
S42
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — Thank you. So I think most of the important aspect has on you. I think we’ve covered the panel. What I wanted to add is …
S43
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — There is the challenge of the nature of the AI curriculum to develop. This is because the proposed Artificial Intelligen…
S44
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Finally, the analysis highlights the need for academics to propose alternatives to address biases in the digital medium….
S45
Introduction — | Term | EU definition …
S46
WS #205 Contextualising Fairness: AI Governance in Asia — – Nidhi Singh: Moderator – Tejaswita Kharel: Project Officer at the Center for Communication Governance at the National…
S47
Open Forum #30 High Level Review of AI Governance Including the Discussion — These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a r…
S48
Why science metters in global AI governance — The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts arising around im…
S49
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S50
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S51
UNGA/DAY 1/PART 2 — The advancement of AI is outpacing regulation and responsibility, with its control concentrated in a few hands. (UN Secr…
S52
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S53
AI and international peace and security: Key issues and relevance for Geneva — Title:Expert Consultation Report on AI and Related Technologies in the MilitaryDescription:This report compiles insights…
S54
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S55
How AI Is Transforming Indias Workforce for Global Competitivene — The conversation highlighted urgent needs for educational reform. Aurora emphasised that AI education cannot be confined…
S56
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S57
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Yeah, I think I just want to add some echo to Professor Gong’s comments. I think it’s not necessarily a negative effect,…
S58
DC-IoT Progressing Global Good Practice for the Internet of Things | IGF 2023 — Summary: The analysis of IoT security policies across different countries revealed some significant findings. Firstly, t…
S59
WSIS High-Level Dialogue: Multistakeholder Partnerships Driving Digital Transformation — Lastly, the analysis illuminates the need for legislation orientated toward ensuring the security and privacy of both so…
S60
Table of Contents — Security doctrine is often understood to refer mainly to administrative security, personnel security, and physical secur…
S61
LinkedIn unveils AI-driven features to enhance job hunting and recruitment — LinkedIn is using AI to streamline the job hunting process, aiming to alleviate the task of job searching for its users. T…
S62
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S63
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Artificial intelligence | Social and economic development | Capacity development Speaker 4 envisions AI as a tool to qu…
S64
Day 0 Event #183 What Mature Organizations Do Differently for AI Success — Abdullah Alshamrani: Thank you, doctor. So, hopefully, you’ve learned, the foundational AI techniques, which are not…
S65
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Because if we don’t recognize the deficiencies and start to regard AI as an oracle that always tells us the truth, we ar…
S66
Optimism for AI – Leading with empathy — Online education | Capacity development | Future of work Will.i.am believes these three qualities are essential for suc…
S67
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — A critical million-person talent gap exists across the semiconductor ecosystem, spanning from field service engineers to…
S68
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future workforce needs different skills including critical thinking, judgment capabilities, and empathy when working wit…
S69
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — Critical thinking as essential human skill
S70
AI (and) education: Convergences between Chinese and European pedagogical practices — This comment was insightful because it challenged one of the most fundamental structural assumptions of higher education…
S71
Telecommunications infrastructure — Network operators increasingly rely on AI for a wide range of tasks, fromnetwork planning(e.g. using algorithms to ident…
S72
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Artificial intelligence and telecommunications complement each other to form the backbone for the intelligence era. Tele…
S73
Building Indias Digital and Industrial Future with AI — Thank you, Devashish and GSMA for this particular session. It’s a session of particular interest to me as a user in the …
S74
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — Example of rural Indian farmer using early GPT models to reason over farm subsidies in local language and complete forms…
S75
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — Shekhar emphasised that this transformation necessitates three critical strategies for effective response. First, organi…
S76
Scaling Innovation Building a Robust AI Startup Ecosystem — -Collaborative Ecosystem Building: The event highlighted partnerships between STPI, National Productivity Council, and o…
S77
Artificial intelligence (AI) – UN Security Council — Furthermore, the discussions underscored the importance of establishing frameworks and infrastructures that support dist…
S78
Setting the Rules_ Global AI Standards for Growth and Governance — And before we go to Rebecca, just from an India perspective, PM Modiji talked about Manav yesterday and the AI vision. T…
S79
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Roy Jakobs argues that the healthcare industry must establish self-regulation standards for AI implementation since regu…
S80
AI promises, ethics, and human rights: Time to open Pandora’s box — Bias, discrimination, and fairness: Are biases being propagated with data sets used to train algorithms? How transparent…
S81
AI Governance Dialogue: Steering the future of AI — Infrastructure | Development | Legal and regulatory Technical Standards and Implementation
S82
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S83
Powering AI Global Leaders Session AI Impact Summit India — “And what that really means is the technology continues to accelerate.”[14]. “going to become even faster and faster.”[1…
S84
AI and the future of digital global supply chains (UNCTAD) — There is a skills gap in these countries
S85
AI is transforming businesses and industries — I am so excited because next week OpenAI is launching GPT-4 – the next-generation large language model! It is going to be …
S86
Sticking with Start-ups / DAVOS 2025 — Bhatnagar explains how AI is transforming content creation and enabling new business models. He highlights the reduced c…
S87
Powering the Technology Revolution / Davos 2025 — Dan Murphy: ♫ ♫ Welcome to Red Bee Media’s Live Remote Broadcasting Service. I’m from CNBC, I’m CNBC’s Middle E…
S88
The Global Economic Outlook — Panelists emphasized the need to rebuild optimism and trust among populations feeling economically insecure. They discus…
S89
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Armando José Manzueta-Peña:Well, thank you, Luca, for the presentation. I’m more than thrilled to be present here and to…
S90
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S91
Open Internet Inclusive AI Unlocking Innovation for All — With decades of experience across entrepreneurship, investing, and global technology leadership, Rajan has played a pivo…
S92
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And then we are coupling that with investments in skilling. So we have made some big -number commitments around how we a…
S93
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — STPI operates 70 centers across India with 62 in tier 2/3 cities and 24 domain-specific centers of entrepreneurship prov…
S94
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — He introduces a panel of experts from different fields
S95
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -Ashutosh Sharma- Investor in India’s fintech ecosystem, described as one of the leading deployers of finance in fintech…
S96
Launch of the eTrade Readiness Assessment of Peru (UNCTAD) — Another point raised by the speakers is the role of fintech in enhancing financial inclusion and trust in digital transa…
S97
DC-DH: Health Digital Health & Selfcare – Can we replace Doctors in PHCs — Debbie Rogers: I definitely am a proponent of bringing technology into the mix to relieve some of the burden on the he…
S98
Driving Indias AI Future Growth Innovation and Impact — And for this, I’m delighted to welcome two very eminent leaders who are instrumental in shaping the journey, both from p…
S99
Main Topic 3 –  Identification of AI generated content — Despite the difficulties posed by the enforcement of such regulations, the inflexibility of legislation regarding their …
S100
Experts urge broader values in AI development — Since the launch of ChatGPT in late 2022, the private sector has led AI innovation. Major players like Microsoft, Google,…
S101
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — He advocates for always validating everything AI produces and encourages experimental use of AI technology to understand…
S102
Artificial General Intelligence and the Future of Responsible Governance — He emphasized that investing in education and critical thinking is as important as investing in computing power, but the…
S103
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Both speakers stress the critical importance of ongoing skill development and adaptation to new technologies, though the…
S104
Keynote-Rishad Premji — From experimentation to adoption and from pilots to scaled impact. This shift matters and it matters tremendously becaus…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Sarabjot Singh Anand
3 arguments · 158 words per minute · 892 words · 336 seconds
Argument 1
Critical thinking and risk‑taking; recognizing AI limitations
EXPLANATION
Dr. Sarabjot emphasizes that next‑gen AI talent must be strong critical thinkers who can question AI outputs and take calculated risks. Recognizing AI’s imperfections prevents over‑reliance on it as an infallible oracle.
EVIDENCE
He explains that there are two camps – those who generate AI and those who use it – and stresses the need for critical thinking because people tend to outsource their thinking to AI, which is problematic. He notes that AI is not perfect, has deficiencies, and therefore must be questioned, and that risk-taking and a foundational understanding of AI capabilities are essential [40-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 notes that Dr. Sarabjot stresses the need for strong critical thinking, risk‑taking and questioning AI outputs because AI is imperfect and should not be treated as an infallible oracle.
MAJOR DISCUSSION POINT
Critical thinking and risk‑taking; recognizing AI limitations
DISAGREED WITH
Professor Dr. Alok Pandey, Vikash Srivastava, Kunal Gupta
Argument 2
Evaluate talent on problem‑solving, self‑directed learning, and foundational strength
EXPLANATION
He describes the criteria used to assess AI talent, focusing on problem‑solving ability, self‑initiative in learning, and a solid foundational knowledge base. These factors are seen as essential for producing capable AI professionals.
EVIDENCE
He states that talent is assessed on problem-solving skills, the eagerness to learn independently, and the strength of foundational knowledge, citing examples from the Sabudh Foundation’s experience with students struggling to program neural networks and later LLMs [103-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 describes his assessment framework that prioritises problem‑solving ability, self‑directed learning and a solid foundational knowledge base for AI talent.
MAJOR DISCUSSION POINT
Evaluate talent on problem‑solving, self‑directed learning, and foundational strength
Argument 3
Startup‑oriented AI solutions require understanding customer problems beyond algorithms
EXPLANATION
Dr. Sarabjot argues that AI solutions for startups must start with a deep understanding of the customer’s problem rather than focusing solely on algorithms. This customer‑centric approach leads to more effective and market‑relevant AI products.
EVIDENCE
He explains that at TATRAS they work with US startups, emphasizing that understanding the problem from the customer’s perspective is crucial, and that successful solutions solve the problem regardless of the technology used, distinguishing their training from generic library-focused skilling [106-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 reports his view that AI solutions for startups must start with a deep understanding of the customer’s problem rather than focusing solely on algorithms, emphasizing a customer‑centric approach.
MAJOR DISCUSSION POINT
Startup‑oriented AI solutions require understanding customer problems beyond algorithms
AGREED WITH
Kunal Gupta, Vikash Srivastava, Professor Dr. Alok Pandey
Dr. Devinder Singh
3 arguments · 155 words per minute · 661 words · 255 seconds
Argument 1
Strong AI expertise, real‑world problem solving, adaptability, regulatory awareness
EXPLANATION
He outlines the qualities needed for next‑gen AI talent: deep AI expertise, ability to solve real‑world problems, adaptability to new technologies, research capability, and awareness of regulations governing AI.
EVIDENCE
He lists that a next-gen AI professional should have strong AI expertise, solve real-world problems, adapt to new technologies, conduct research, work across sectors, and be aware of regulations [49-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 summarises Dr. Devinder Singh’s outlined qualities for next‑gen AI professionals: deep AI expertise, ability to solve real‑world problems, adaptability to new technologies, research capability and awareness of AI regulations.
MAJOR DISCUSSION POINT
Strong AI expertise, real‑world problem solving, adaptability, regulatory awareness
Argument 2
6G networks will embed AI in every component; engineers must master ML and new AI standards
EXPLANATION
Dr. Devinder explains that future 6G telecom networks will have AI built into every component, requiring engineers to acquire machine‑learning skills and understand emerging AI standards.
EVIDENCE
He describes how 5G treats AI as an add-on, whereas 6G will have AI in-built, with self-learning components, distributed edge intelligence, and the need for engineers to know ML and follow AI standards that are being finalized [124-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S4 discusses distributed decision‑making in 6G and the need for engineers to master ML; S19 and S20 highlight that AI will be embedded in all 6G components and will reshape network traffic; S21 points out security challenges when AI runs on handsets.
MAJOR DISCUSSION POINT
6G networks will embed AI in every component; engineers must master ML and new AI standards
AGREED WITH
Professor Dr. Jawar Singh, Sh. Subodh Sachan, Professor Dr. Alok Pandey
Argument 3
Fairness and bias indices; robustness standards for AI systems; role of regulators
EXPLANATION
He presents a quantitative approach to measuring AI bias and fairness, proposing indices that range from 0 to 1, and stresses that regulators should set minimum fairness thresholds and robustness standards.
EVIDENCE
He details the creation of a bias index using multiple metrics, aggregation into a fairness index (0-1 scale), and notes that different applications tolerate different bias levels; he also mentions a robustness standard for consistent results, suggesting regulators can mandate minimum levels [320-328].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 presents a bias index, a composite fairness index (0‑1 scale) and a robustness standard, and recommends that regulators set minimum thresholds for fairness and robustness.
MAJOR DISCUSSION POINT
Fairness and bias indices; robustness standards for AI systems; role of regulators
AGREED WITH
Professor Dr. Alok Pandey, Vikash Srivastava, Sh. Subodh Sachan
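The aggregation Dr. Devinder Singh describes could be sketched roughly as follows. This is a minimal illustration only: the metric names, equal weighting, and the 0.85 regulatory threshold are hypothetical assumptions, not values stated in the session.

```python
# Illustrative sketch: aggregating per-metric bias scores into a single
# fairness index on a 0-1 scale (1 = perfectly fair), with a
# regulator-set minimum threshold. All concrete values are hypothetical.

def fairness_index(bias_scores, weights=None):
    """Combine per-metric bias scores (each in [0, 1], 1 = most biased)
    into a fairness index in [0, 1], where 1 means perfectly fair."""
    metrics = list(bias_scores)
    if weights is None:
        weights = {m: 1.0 for m in metrics}  # equal weighting by default
    total = sum(weights[m] for m in metrics)
    weighted_bias = sum(bias_scores[m] * weights[m] for m in metrics) / total
    return 1.0 - weighted_bias

# Hypothetical per-metric bias measurements for one model
scores = {
    "demographic_parity_gap": 0.10,
    "equal_opportunity_gap": 0.05,
    "calibration_gap": 0.15,
}

index = fairness_index(scores)
threshold = 0.85  # illustrative regulator-mandated minimum
print(f"fairness index = {index:.2f}, passes = {index >= threshold}")
```

As the argument notes, different applications tolerate different bias levels, so in practice the weights and threshold would vary by domain rather than being fixed as above.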
Professor Dr. Jawar Singh
4 arguments · 141 words per minute · 539 words · 227 seconds
Argument 1
Deep understanding of algorithms together with hardware mapping and security
EXPLANATION
Prof. Jawar stresses that next‑gen AI talent must not only master algorithms but also understand how those algorithms map onto hardware and ensure hardware security, as AI can be weaponised.
EVIDENCE
He states that AI professionals should know algorithms, how they run on hardware, and that hardware security is critical to prevent weaponisation, highlighting the need for grounding in hardware, neuromorphic computing, and secure implementation [57-60][160-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 stresses the importance of knowing how AI algorithms map onto hardware and ensuring hardware security; S23 adds context on brain‑inspired hardware efficiency; S4 notes the high power consumption of AI models, underscoring the need for efficient hardware mapping.
MAJOR DISCUSSION POINT
Deep understanding of algorithms together with hardware mapping and security
DISAGREED WITH
Professor Dr. Alok Pandey, Dr. Sarabjot Singh Anand
Argument 2
Centrally funded technical institutes already enjoy curriculum flexibility
EXPLANATION
He points out that centrally funded technical institutions (CFTIs) are not bound by the bureaucratic curriculum approval process and can introduce new courses quickly.
EVIDENCE
He explains that CFTIs can start a new course from the next semester without restrictions, indicating they have curriculum autonomy [279-283].
MAJOR DISCUSSION POINT
Centrally funded technical institutes already enjoy curriculum flexibility
AGREED WITH
Sh. Subodh Sachan, Professor Dr. Alok Pandey
Argument 3
Neuromorphic/brain‑inspired computing and hardware efficiency are crucial for future AI models
EXPLANATION
Prof. Jawar highlights the importance of energy‑efficient, brain‑inspired hardware (neuromorphic computing) to bridge the gap between human brain power consumption and current AI processors.
EVIDENCE
He compares a basic NVIDIA processor (500-700 W) with the human brain (≈20 W), notes the large efficiency gap, and describes ongoing research in neuromorphic computing to map algorithms efficiently onto hardware, also mentioning hardware security concerns [153-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 details neuromorphic computing and topographical sparse mapping as approaches to achieve energy‑efficient AI, directly supporting his argument.
MAJOR DISCUSSION POINT
Neuromorphic/brain‑inspired computing and hardware efficiency are crucial for future AI models
AGREED WITH
Dr. Devinder Singh, Sh. Subodh Sachan, Professor Dr. Alok Pandey
Argument 4
Hardware security as a critical layer to prevent weaponisation of AI
EXPLANATION
He argues that securing the hardware layer is essential because AI can be weaponised if hardware implementations are not trustworthy.
EVIDENCE
He explicitly mentions that hardware security is crucial because AI can be weaponised and used for neutralisation, urging a focus on secure, trusted hardware implementations [160-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S18 warns about AI‑driven autonomous weapons and stresses the need for secure hardware to prevent weaponisation; S1 also highlights hardware security as essential for trusted AI implementations.
MAJOR DISCUSSION POINT
Hardware security as a critical layer to prevent weaponisation of AI
Professor Dr. Alok Pandey
4 arguments · 172 words per minute · 891 words · 310 seconds
Argument 1
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
EXPLANATION
Prof. Alok describes the ideal next‑gen AI professional as T‑shaped: deep expertise in a specific domain, fluency across AI tools and technologies, and the ability to conduct red‑team testing and containment.
EVIDENCE
He states that the next-gen AI talent should be deep domain specialists, fluent in AI software/hardware, and capable of red-team and containment activities [63-67].
MAJOR DISCUSSION POINT
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
AGREED WITH
Dr. Sarabjot Singh Anand, Kunal Gupta, Vikash Srivastava
DISAGREED WITH
Dr. Sarabjot Singh Anand, Vikash Srivastava, Kunal Gupta
Argument 2
Need for large‑scale faculty development, industry‑university MOUs, and funding support
EXPLANATION
He argues that scaling AI education requires expanding faculty numbers, establishing industry‑university collaborations, and securing both government and industry funding.
EVIDENCE
He mentions the need for large-scale faculty development, MOUs with Western countries, and funding from government and industry to bridge the talent gap [170-200].
MAJOR DISCUSSION POINT
Need for large‑scale faculty development, industry‑university MOUs, and funding support
Argument 3
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
EXPLANATION
Prof. Alok calls for reducing bureaucratic hurdles in curriculum design, allowing institutions of eminence to create and update courses swiftly to keep pace with AI advances.
EVIDENCE
He describes the concept of ‘institution of eminence’, the need for high curriculum velocity, and the inability to command faculty to teach specific courses due to rapid AI changes, urging de-bureaucratisation [237-244].
MAJOR DISCUSSION POINT
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
AGREED WITH
Sh. Subodh Sachan, Professor Dr. Jawar Singh
DISAGREED WITH
Kunal Gupta
Argument 4
Importance of red‑team testing, containment, and safety in AI product development
EXPLANATION
He stresses that AI products must undergo red‑team testing and have containment mechanisms to ensure safety, even suggesting that a technology should be killed if it behaves undesirably.
EVIDENCE
He notes that every young AI user must understand red-team and containment, and that one should ‘kill’ technology if it does not work in their favor, highlighting safety concerns [186-188].
MAJOR DISCUSSION POINT
Importance of red‑team testing, containment, and safety in AI product development
AGREED WITH
Dr. Devinder Singh, Vikash Srivastava, Sh. Subodh Sachan
Kunal Gupta
4 arguments · 181 words per minute · 1228 words · 406 seconds
Argument 1
Emphasis on application focus and clear problem definition, sector‑specific skills
EXPLANATION
Kunal argues that the biggest skill gap lies in applying AI, specifically in defining problems clearly, and that skill requirements differ across sectors.
EVIDENCE
He states that the biggest gap is application, that defining the problem accounts for 50 % of the solution, and that each sector (healthcare, law, agriculture) has specific gaps, citing examples like hydroponics and crop insurance [202-214].
MAJOR DISCUSSION POINT
Emphasis on application focus and clear problem definition, sector‑specific skills
AGREED WITH
Dr. Sarabjot Singh Anand, Vikash Srivastava, Professor Dr. Alok Pandey
Argument 2
Identify gaps in application skills; use AI‑driven gap analysis and personalized learning paths
EXPLANATION
He describes using AI to assess individual skill gaps, generate recommendations, and provide adaptive learning pathways to close those gaps.
EVIDENCE
He explains that their platform uses AI to assess participant profiles, identify missing skills, and recommend adaptive learning, thereby improving employability [214-224].
MAJOR DISCUSSION POINT
Identify gaps in application skills; use AI‑driven gap analysis and personalized learning paths
AGREED WITH
Vikash Srivastava, Sh. Subodh Sachan
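The gap-analysis-and-recommendation loop described in this argument can be sketched minimally as set arithmetic over skill profiles. The platform itself is not public, so the role catalogue, skill names and functions below are illustrative assumptions, not its actual API:

```python
# Hypothetical role catalogue mapping target roles to required skills.
ROLE_REQUIREMENTS = {
    "healthcare-ml": {"python", "medical-imaging", "model-evaluation", "privacy"},
    "agri-ml": {"python", "remote-sensing", "crop-models", "model-evaluation"},
}

def gap_analysis(profile_skills, role):
    """Return the skills a profile is missing for a target role."""
    required = ROLE_REQUIREMENTS[role]
    return sorted(required - set(profile_skills))

def learning_path(profile_skills, role):
    """Turn the missing skills into a simple recommended learning sequence."""
    return [f"course:{skill}" for skill in gap_analysis(profile_skills, role)]

gaps = gap_analysis({"python", "model-evaluation"}, "agri-ml")
```

A production system would rank recommendations adaptively rather than alphabetically, but the diagnose-then-recommend structure is the same.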
Argument 3
Current curricula are outdated and misaligned with industry; need faster, industry‑responsive revisions
EXPLANATION
Kunal points out that Indian curricula take years to update, rendering them obsolete by the time they are implemented, and calls for rapid, industry‑aligned curriculum reforms.
EVIDENCE
He notes that syllabus updates require multiple committees and take five to seven years, causing a mismatch with industry needs, especially given the rapid AI growth in the last six months [212-218].
MAJOR DISCUSSION POINT
Current curricula are outdated and misaligned with industry; need faster, industry‑responsive revisions
Argument 4
Sector‑specific AI use‑cases (e.g., hydroponics, crop insurance) illustrate need for domain‑focused talent
EXPLANATION
He provides concrete examples where AI application requires domain knowledge, such as hydroponic farming and crop insurance, underscoring the need for talent that understands specific industry contexts.
EVIDENCE
He cites hydroponics as a sector-specific AI application that can produce high yields without pesticides, and crop insurance that relies on satellite imagery and AI for assessment, illustrating domain-focused talent needs [210-214].
MAJOR DISCUSSION POINT
Sector‑specific AI use‑cases (e.g., hydroponics, crop insurance) illustrate need for domain‑focused talent
Vikash Srivastava
4 arguments, 132 words per minute, 262 words, 118 seconds
Argument 1
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
EXPLANATION
Vikash proposes that next‑gen AI talent should combine strong technical skills, ethical decision‑making, and the ability to solve real‑world problems.
EVIDENCE
He lists three important components: technical mastery, ethical judgment, and real-world problem-solving capabilities [83-88].
MAJOR DISCUSSION POINT
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
AGREED WITH
Professor Dr. Alok Pandey, Dr. Devinder Singh, Sh. Subodh Sachan
DISAGREED WITH
Dr. Sarabjot Singh Anand, Professor Dr. Alok Pandey, Kunal Gupta
Argument 2
Move beyond theory: applied problem solving, production exposure, deployment experience
EXPLANATION
He argues that traditional AI training focuses on theory, whereas industry‑ready talent needs hands‑on problem solving, exposure to production environments, and experience deploying models at scale.
EVIDENCE
He contrasts conventional training (theory, mathematics) with three additional layers: applied problem solving with real data, production exposure (moving models from notebooks to secure systems), and deployment scenarios [303-311].
MAJOR DISCUSSION POINT
Move beyond theory: applied problem solving, production exposure, deployment experience
AGREED WITH
Dr. Sarabjot Singh Anand, Kunal Gupta, Professor Dr. Alok Pandey
Argument 3
Production‑level exposure: moving models from notebooks to secure, scalable systems
EXPLANATION
He highlights the necessity for AI practitioners to understand how to transition models from development notebooks to secure, scalable production environments.
EVIDENCE
He specifically mentions the need to know how a model moves from a notebook environment to a real, scalable, secure system as part of production exposure [308-311].
MAJOR DISCUSSION POINT
Production‑level exposure: moving models from notebooks to secure, scalable systems
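One way to picture the notebook-to-production step he describes is a minimal serving wrapper that adds what notebooks usually skip: versioned serialization and input validation. The version tag, function names and toy "model" below are assumptions for illustration, not any specific stack:

```python
import json
import pickle

MODEL_VERSION = "2024-06-01"  # hypothetical version tag

def train_toy_model():
    # Stand-in for a model trained in a notebook: equal weights over features.
    return {"version": MODEL_VERSION, "weights": [1.0, 1.0, 1.0]}

def save_model(model):
    """Serialize the model together with its version tag."""
    return pickle.dumps(model)

def serve(model_bytes, payload_json):
    """Serving-side entry point: check the model version and validate the
    input before predicting, instead of trusting callers as a notebook would."""
    model = pickle.loads(model_bytes)
    if model["version"] != MODEL_VERSION:
        raise ValueError("model version mismatch")
    payload = json.loads(payload_json)
    features = payload.get("features")
    if not isinstance(features, list) or not all(
        isinstance(x, (int, float)) for x in features
    ):
        raise ValueError("features must be a list of numbers")
    # Toy linear "prediction" with the stored weights.
    return sum(w * x for w, x in zip(model["weights"], features))

blob = save_model(train_toy_model())
prediction = serve(blob, '{"features": [1, 2, 3]}')  # 6.0
```

Real deployments add authentication, monitoring and rollback on top, but version checks and input validation are the first habits that distinguish production code from notebook code.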
Argument 4
Adaptive learning tools driven by AI assess participant profiles and tailor curricula in real time
EXPLANATION
Vikash notes that AI can be used to evaluate learners’ profiles, identify gaps, and recommend adaptive learning paths, thereby enhancing employability outcomes.
EVIDENCE
He states that AI-based tools assess skill gaps based on participant profiles and recommend adaptive learning, improving employability [315].
MAJOR DISCUSSION POINT
Adaptive learning tools driven by AI assess participant profiles and tailor curricula in real time
AGREED WITH
Kunal Gupta, Sh. Subodh Sachan
Sh. Subodh Sachan
5 arguments, 177 words per minute, 3433 words, 1160 seconds
Argument 1
Infrastructure‑level AI mindset; curiosity and creativity as drivers
EXPLANATION
He frames next‑gen AI as an infrastructure that multiplies human intelligence, emphasizing that curiosity and creativity are essential to harness this infrastructure effectively.
EVIDENCE
He describes AI as infrastructure-level intelligence that multiplies reasoning, creativity, and judgment, and links curiosity and creativity to the ability to understand customer problems and create impactful solutions [90-94].
MAJOR DISCUSSION POINT
Infrastructure‑level AI mindset; curiosity and creativity as drivers
AGREED WITH
Dr. Devinder Singh, Professor Dr. Jawar Singh, Professor Dr. Alok Pandey
Argument 2
STPI “Skill‑Up” program, regional training hubs, partnership network of 18+ trainers
EXPLANATION
He outlines the STPI initiative that establishes regional AI training hubs and collaborates with over 18 training partners to upskill the workforce.
EVIDENCE
He announces the launch of multiple regional hubs for training, mentions a network of 18 training partners across India, and describes the STPI Skill-Up program as the vehicle for skilling up [9-12].
MAJOR DISCUSSION POINT
STPI “Skill‑Up” program, regional training hubs, partnership network of 18+ trainers
AGREED WITH
Vikash Srivastava, Kunal Gupta
Argument 3
National Education Policy (NEP) provides greater autonomy, supporting faster curriculum evolution
EXPLANATION
He notes that the NEP has already begun granting institutions more autonomy, which should accelerate curriculum updates to match AI advancements.
EVIDENCE
He references the NEP as having already given more autonomy to institutions, facilitating faster curriculum evolution, and asks whether global experiences are being reflected in India [252-258].
MAJOR DISCUSSION POINT
National Education Policy (NEP) provides greater autonomy, supporting faster curriculum evolution
AGREED WITH
Professor Dr. Alok Pandey, Professor Dr. Jawar Singh
Argument 4
Need for standards to ensure AI fairness and robustness across applications
EXPLANATION
He highlights that AI systems must be evaluated for bias, robustness, and fairness, and that standards are required to guide developers, regulators, and users.
EVIDENCE
He mentions the emerging gaps in cyber-security, bias, robustness, and fairness, and calls for standards to fill these talent gaps and guide ecosystem development [166-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 outlines fairness and robustness indices and calls for standards to guide developers and regulators, aligning with his call for standardized evaluation of AI systems.
MAJOR DISCUSSION POINT
Need for standards to ensure AI fairness and robustness across applications
AGREED WITH
Professor Dr. Alok Pandey, Dr. Devinder Singh, Vikash Srivastava
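A fairness index of the kind called for in this argument can be made concrete with one of the simplest standard metrics, a demographic-parity gap. The data and the threshold below are invented for illustration:

```python
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate across groups
    (0.0 means perfect demographic parity)."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# 1 = favourable decision, 0 = unfavourable (illustrative data)
decisions = {
    "group_a": [1, 1, 0, 1],  # 0.75 positive rate
    "group_b": [1, 0, 0, 1],  # 0.50 positive rate
}
gap = demographic_parity_gap(decisions)  # 0.25
FAIRNESS_THRESHOLD = 0.2  # hypothetical regulator-set limit
flagged = gap > FAIRNESS_THRESHOLD
```

A standard built around such an index would specify the metric, the grouping attributes, and the threshold beyond which a system needs review.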
Argument 5
STPI’s “Skill‑Up” initiative creates regional hubs and collaborates with multiple training partners
EXPLANATION
He reiterates the STPI Skill‑Up effort, emphasizing its role in building AI talent through regional hubs and a broad partner ecosystem.
EVIDENCE
He again references the STPI Skill-Up program, regional hubs, and the network of 18 training partners as a cornerstone of AI skilling in India [9-12].
MAJOR DISCUSSION POINT
STPI’s “Skill‑Up” initiative creates regional hubs and collaborates with multiple training partners
Audience
1 argument, 59 words per minute, 98 words, 98 seconds
Argument 1
Audience query on practical AI tools for local governance and CSR funding
EXPLANATION
An audience member asks which AI tools could be applied in three sectors for local governance and whether private companies or CSR funds could support such initiatives.
EVIDENCE
The audience member, Vikram Tripathi, asks for guidance on three sectors where AI tools can be used in district panchayat work and whether private companies or CSR funds can support these efforts [319].
MAJOR DISCUSSION POINT
Audience query on practical AI tools for local governance and CSR funding
Agreements
Agreement Points
AI talent must prioritize problem‑solving and domain/customer understanding over pure algorithmic focus
Speakers: Dr. Sarabjot Singh Anand, Kunal Gupta, Vikash Srivastava, Professor Dr. Alok Pandey
Startup‑oriented AI solutions require understanding customer problems beyond algorithms
Emphasis on application focus and clear problem definition, sector‑specific skills
Move beyond theory: applied problem solving, production exposure, deployment experience
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
All four speakers stress that next-gen AI professionals should first understand the real problem and the domain context before applying algorithms, emphasizing applied problem-solving, sector-specific knowledge and a T-shaped skill set [106-112][202-214][303-311][63-67].
POLICY CONTEXT (KNOWLEDGE BASE)
This emphasis mirrors calls for broader AI talent development that prioritize critical thinking and real-world problem solving over narrow technical skills, as highlighted in industry discussions on AI talent development [S41] and panel remarks stressing technical mastery combined with ethical judgment and problem-solving capabilities [S42].
Curricula and training programmes must become agile, with institutional autonomy to update rapidly
Speakers: Sh. Subodh Sachan, Professor Dr. Alok Pandey, Professor Dr. Jawar Singh
National Education Policy (NEP) provides greater autonomy, supporting faster curriculum evolution
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Centrally funded technical institutes already enjoy curriculum flexibility
Subodh highlights NEP-driven autonomy, Alok calls for de-bureaucratised curricula, and Jawar notes that CFTIs can introduce new courses without delay, showing a shared view that curriculum flexibility is essential [252-258][237-244][279-283].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for agile, de-bureaucratised curricula is echoed in recent policy analyses urging rapid curriculum redesign and stronger industry-academia collaboration to keep pace with AI advances [S55] and in recommendations for flexible AI education pathways [S41].
Ethical, safety and fairness standards are essential for responsible AI deployment
Speakers: Professor Dr. Alok Pandey, Dr. Devinder Singh, Vikash Srivastava, Sh. Subodh Sachan
Importance of red‑team testing, containment, and safety in AI product development
Fairness and bias indices; robustness standards for AI systems; role of regulators
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
Need for standards to ensure AI fairness and robustness across applications
Alok stresses red-team and safety, Devinder proposes bias/fairness indices and regulator roles, Vikash adds ethical judgment to the skill mix, and Subodh calls for standards on fairness and robustness, indicating consensus on ethical safeguards [186-188][320-328][83-88][166-168].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with global AI governance frameworks such as the EU Ethics Guidelines for Trustworthy AI and UN calls for universal guardrails on safety, fairness and accountability [S50][S51], as well as academic work highlighting the need to address bias and fairness in educational technologies [S44].
AI is becoming a foundational infrastructure, requiring hardware efficiency, standards and sector‑specific integration (e.g., 6G)
Speakers: Dr. Devinder Singh, Professor Dr. Jawar Singh, Sh. Subodh Sachan, Professor Dr. Alok Pandey
6G networks will embed AI in every component; engineers must master ML and new AI standards
Neuromorphic/brain‑inspired computing and hardware efficiency are crucial for future AI models
Infrastructure‑level AI mindset; curiosity and creativity as drivers
AI as infrastructure of intelligence multiplying our ability
Devinder describes AI-embedded 6G, Jawar highlights energy-efficient neuromorphic hardware, Subodh frames AI as an infrastructure that multiplies intelligence, and Alok refers to AI as an infrastructure-level intelligence, all agreeing on the infrastructural nature of next-gen AI [124-146][153-164][90-94][170-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions on AI-powered chips and the importance of hardware efficiency for next-gen AI workloads support this view [S41], while broader infrastructure security considerations stress the need for standards across hardware and software layers [S59].
AI‑driven tools can assess skill gaps and personalize learning, supporting upskilling at scale
Speakers: Vikash Srivastava, Kunal Gupta, Sh. Subodh Sachan
Adaptive learning tools driven by AI assess participant profiles and tailor curricula in real time
Identify gaps in application skills; use AI‑driven gap analysis and personalized learning paths
STPI “Skill‑Up” program, regional training hubs, partnership network of 18+ trainers
Vikash mentions AI-based skill-gap assessment, Kunal describes AI-driven gap analysis and personalized pathways, and Subodh outlines the STPI Skill-Up ecosystem, showing agreement on leveraging AI for scalable upskilling [315][214-224][9-12].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports on AI-enabled workforce development describe AI systems that evaluate individual backgrounds and tailor skill-building pathways, underscoring their role in large-scale upskilling initiatives [S63][S61].
Similar Viewpoints
All emphasize that AI expertise must be coupled with strong problem‑definition and domain knowledge, moving beyond pure technical theory to deliver real‑world solutions [106-112][202-214][303-311][63-67].
Speakers: Dr. Sarabjot Singh Anand, Kunal Gupta, Vikash Srivastava, Professor Dr. Alok Pandey
Startup‑oriented AI solutions require understanding customer problems beyond algorithms
Emphasis on application focus and clear problem definition, sector‑specific skills
Move beyond theory: applied problem solving, production exposure, deployment experience
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
All agree that institutional autonomy and reduced bureaucracy are needed for curricula to keep pace with rapid AI advances [252-258][237-244][279-283].
Speakers: Sh. Subodh Sachan, Professor Dr. Alok Pandey, Professor Dr. Jawar Singh
National Education Policy (NEP) provides greater autonomy, supporting faster curriculum evolution
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Centrally funded technical institutes already enjoy curriculum flexibility
Consensus that ethical safeguards, fairness metrics and formal standards are critical for trustworthy AI deployment [186-188][320-328][83-88][166-168].
Speakers: Professor Dr. Alok Pandey, Dr. Devinder Singh, Vikash Srivastava, Sh. Subodh Sachan
Importance of red‑team testing, containment, and safety in AI product development
Fairness and bias indices; robustness standards for AI systems; role of regulators
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
Need for standards to ensure AI fairness and robustness across applications
All view AI as a foundational infrastructure that requires efficient hardware, standards and a mindset of curiosity/creativity to harness its potential [124-146][153-164][90-94][170-176].
Speakers: Dr. Devinder Singh, Professor Dr. Jawar Singh, Sh. Subodh Sachan, Professor Dr. Alok Pandey
6G networks will embed AI in every component; engineers must master ML and new AI standards
Neuromorphic/brain‑inspired computing and hardware efficiency are crucial for future AI models
Infrastructure‑level AI mindset; curiosity and creativity as drivers
AI as infrastructure of intelligence multiplying our ability
Agreement that AI‑enabled assessment and regional training networks can efficiently close skill gaps at scale [315][214-224][9-12].
Speakers: Vikash Srivastava, Kunal Gupta, Sh. Subodh Sachan
Adaptive learning tools driven by AI assess participant profiles and tailor curricula in real time
Identify gaps in application skills; use AI‑driven gap analysis and personalized learning paths
STPI “Skill‑Up” program, regional training hubs, partnership network of 18+ trainers
Unexpected Consensus
Industry‑driven call for faster curriculum updates aligns with academic push for de‑bureaucratised curricula
Speakers: Kunal Gupta, Professor Dr. Alok Pandey
Current curricula are outdated and misaligned with industry; need faster, industry‑responsive revisions
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Kunal, representing a private skilling platform, stresses that curricula lag behind industry needs, while Alok, from academia, proposes de-bureaucratising curricula to allow rapid updates. Their convergence bridges the typical industry-academia divide, highlighting a shared urgency for curriculum agility [212-218][237-244].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry-academia partnership models advocating swift curriculum reforms have been highlighted as critical for integrating AI across disciplines and reducing bureaucratic lag [S55][S41].
Overall Assessment

The panel shows strong consensus on four pillars: (1) problem‑solving and domain‑centric AI skills; (2) the need for agile, autonomous curricula; (3) the imperative of ethical standards, fairness and safety; (4) viewing AI as core infrastructure requiring hardware efficiency and standards; plus a shared belief in AI‑driven skill‑gap assessment tools. These converging views suggest a coordinated path forward involving curriculum reform, industry‑academia collaboration, standards development, and investment in AI‑enabled training ecosystems.

High consensus – most speakers independently arrived at similar conclusions across technical, educational and ethical dimensions, indicating a solid foundation for policy and programmatic action.

Differences
Different Viewpoints
Where the bottleneck lies in curriculum reform and who should drive rapid AI curriculum updates
Speakers: Professor Dr. Alok Pandey, Professor Dr. Jawar Singh, Sh. Subodh Sachan
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Centrally funded technical institutes already enjoy curriculum flexibility and can start new courses without restrictions
NEP already provides greater autonomy, but question remains whether global experiences are being reflected in India
Alok argues that bureaucratic hurdles prevent fast curriculum changes and calls for de-bureaucratisation and autonomy for institutions of eminence [237-244]. Jawar counters that centrally funded institutes already have the freedom to introduce new courses quickly, implying no major bottleneck there [279-283]. Subodh notes that the National Education Policy has already granted more autonomy, asking if international best-practices are being adopted [252-258]. The speakers thus disagree on whether curriculum rigidity is a systemic problem and on which institutions should lead reform.
What core competency should define next‑gen AI talent
Speakers: Dr. Sarabjot Singh Anand, Professor Dr. Alok Pandey, Vikash Srivastava, Kunal Gupta
Critical thinking and risk‑taking; recognizing AI limitations
T‑shaped profile: deep domain expertise, AI fluency, red‑team/containment skills
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
Emphasis on clear problem definition and sector‑specific application skills
Sarabjot stresses critical thinking, risk-taking and awareness of AI’s imperfections as essential [40-46]. Alok proposes a T-shaped talent model combining deep domain knowledge, AI fluency and red-team/containment capabilities [63-67]. Vikash highlights the need for technical mastery together with ethical judgment and the ability to solve real-world problems [83-88]. Kunal argues that the biggest gap is the ability to define problems and apply AI in sector-specific contexts [202-214]. Each speaker foregrounds a different priority, revealing disagreement on which skill set should be the cornerstone of AI talent development.
POLICY CONTEXT (KNOWLEDGE BASE)
Panel insights identify three pillars as the defining competencies for future AI professionals: technical mastery, ethical judgment, and real-world problem-solving [S42].
Relative importance of hardware knowledge versus algorithmic/software focus in AI talent
Speakers: Professor Dr. Jawar Singh, Professor Dr. Alok Pandey, Dr. Sarabjot Singh Anand
Deep understanding of algorithms together with hardware mapping and security
Focus on domain integration, red‑team testing and safety rather than hardware specifics
Emphasis on critical thinking and problem‑solving over hardware considerations
Jawar stresses that next-gen AI professionals must understand how algorithms map onto hardware, energy-efficient neuromorphic computing and hardware security [57-60][153-164]. Alok’s view centres on domain expertise, AI fluency and red-team/containment without explicit hardware emphasis [63-67][186-188]. Sarabjot focuses on critical thinking and risk-taking, not mentioning hardware at all [40-46]. The speakers thus disagree on the priority of hardware knowledge in AI talent development.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on talent composition reference the growing significance of hardware efficiency and AI-specific chip design alongside traditional algorithmic expertise, as discussed in AI-powered chip strategy documents [S41] and security policy briefs on hardware/software integration [S59].
Approach to establishing AI curriculum standards and updates
Speakers: Professor Dr. Alok Pandey, Kunal Gupta
De‑bureaucratise curricula; grant autonomy to institutions of eminence for rapid updates
Current curricula are outdated and misaligned; need faster, industry‑responsive revisions and policy reforms
Alok calls for reducing bureaucratic constraints and giving institutions of eminence the freedom to swiftly redesign curricula [237-244]. Kunal points out that syllabus revisions take years, making them obsolete, and urges rapid, industry-aligned curriculum reforms [212-218]. Both agree on the need for faster curriculum change but propose different mechanisms: institutional autonomy versus systemic policy overhaul.
POLICY CONTEXT (KNOWLEDGE BASE)
Proposals for AI curriculum frameworks that balance agility with standardisation have been put forward by bodies such as AUDA-NEPAD and national AI strategy reports, emphasizing the need for coordinated standards and rapid updates [S43][S55].
Unexpected Differences
Different security focus: hardware security versus software/red‑team containment
Speakers: Professor Dr. Jawar Singh, Professor Dr. Alok Pandey
Hardware security as a critical layer to prevent weaponisation of AI
Importance of red‑team testing, containment, and safety in AI product development
Jawar highlights the need for secure, trusted hardware implementations to prevent AI weaponisation [160-164], whereas Alok concentrates on software-level safety measures such as red-team testing and containment, even suggesting killing a technology that misbehaves [186-188]. The disagreement is unexpected because both address security but focus on different layers (hardware vs software).
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses differentiate between hardware-centric security doctrines and software/red-team containment strategies, highlighting the necessity of addressing both layers in AI system protection [S59][S60].
Introduction of bias and fairness indices by Dr. Devinder Singh, not addressed by other panelists
Speakers: Dr. Devinder Singh, Other panelists
Fairness and bias indices; robustness standards for AI systems; role of regulators
Various other perspectives on talent, curriculum, hardware, etc., without explicit mention of bias/fairness metrics
Devinder proposes quantitative bias and fairness indices and calls for regulatory thresholds [320-328], a focus absent from the other speakers’ contributions, making this a surprising point of divergence within the discussion on AI standards and ethics.
POLICY CONTEXT (KNOWLEDGE BASE)
The call for bias and fairness metrics aligns with ongoing scholarly work on fairness indices in AI education and governance, as documented in fairness-focused workshops and reports [S44][S46][S48].
Overall Assessment

The panel exhibits moderate disagreement centered on the priorities for AI talent development (critical thinking vs domain expertise vs ethical and problem‑definition skills) and on the mechanisms for curriculum reform (institutional autonomy versus policy‑level overhaul). There is also a clear split on the importance of hardware knowledge and security versus software‑level safety measures. While all participants share the overarching goal of building a robust AI ecosystem in India, the divergent views on which competencies and institutional levers are most critical could slow coordinated action and policy implementation.

Moderate disagreement with implications for fragmented policy approaches; consensus on the need for AI skill development exists, but differing emphases may lead to parallel initiatives rather than a unified national strategy.

Partial Agreements
Both recognise a significant gap in AI education capacity and call for systemic changes—Alok emphasizes expanding faculty, MOUs and funding to scale up education [170-200], while Kunal highlights the need to overhaul curricula quickly to match industry demands [212-218]. They share the goal of strengthening AI education but differ on the primary lever (faculty/institutional capacity vs curriculum policy reform).
Speakers: Professor Dr. Alok Pandey, Kunal Gupta
Need for large‑scale faculty development, industry‑university MOUs, and funding support
Current curricula are outdated and misaligned; need faster, industry‑responsive revisions
Both agree that AI talent must go beyond pure technical knowledge. Sarabjot stresses critical thinking and awareness of AI’s limits [40-46], while Vikash adds that ethical judgment and real‑world problem‑solving are essential alongside technical mastery [83-88]. They share the goal of producing well‑rounded professionals but differ on the emphasis—cognitive/critical skills versus a triad of technical, ethical and applied abilities.
Speakers: Dr. Sarabjot Singh Anand, Vikash Srivastava
Critical thinking, risk‑taking and recognizing AI limitations
Blend of technical mastery, ethical judgment, and real‑world problem‑solving ability
Takeaways
Key takeaways
Next‑gen AI talent must combine deep domain expertise, AI fluency, critical thinking, risk‑taking, ethical judgment and the ability to solve real‑world problems.
A T‑shaped skill profile (deep domain knowledge + broad AI capabilities + red‑team/containment skills) is essential for addressing India’s AI challenges.
Curriculum and training must shift from theory‑heavy, static models to applied problem‑solving, production‑level deployment, and continuous, adaptive learning.
Current curricula, especially in state‑run institutions, are outdated and misaligned with industry needs; rapid, autonomous curriculum updates are required.
Sector‑specific AI adoption (e.g., telecom/6G, neuromorphic hardware, agriculture, law) demands tailored talent pipelines and standards.
Ethics, bias, fairness, robustness and hardware security are integral parts of AI development and must be embedded in training and standards.
STPI’s “Skill‑Up” programme will create regional training hubs, expand the partner network (currently 18+ partners), and coordinate large‑scale skilling initiatives.
AI‑driven skill‑gap analysis tools can personalize learning paths, improve employability, and support industry‑university collaboration.
Collaboration between academia, industry, and government (MOUs, funding, mentorship programmes) is critical to bridge the talent gap.
Resolutions and action items
Launch of multiple STPI regional training hubs under the “Skill‑Up” programme (announced by Sh. Subodh Sachan).
Expansion of the training‑partner ecosystem beyond the current 18 partners, with a call for additional collaborators.
Commitment to de‑bureaucratise curricula by granting greater autonomy to Institutions of Eminence and leveraging NEP provisions.
Proposal for large‑scale faculty development and industry‑university MOUs to accelerate curriculum velocity (raised by Prof. Dr. Alok Pandey).
Adoption of AI‑powered skill‑gap assessment and adaptive learning platforms for personalized upskilling (suggested by Kunal Gupta and Vikash Srivastava).
Encouragement for industry mentors to guide students on passion projects that address social impact (initiative described by Dr. Sarabjot Singh Anand).
Unresolved issues
Specific AI tools and platforms suitable for local governance (e.g., panchayat level) and mechanisms for CSR funding were raised by the audience but not answered.
Detailed implementation roadmap for AI standards in 6G telecom networks, including timelines and responsible bodies, remains unclear.
How to uniformly upgrade faculty capabilities across thousands of state technical institutions, given resource constraints, was not concretely addressed.
Mechanisms for continuous monitoring of bias, fairness and robustness indices in deployed AI systems were mentioned but lack actionable guidelines.
The extent to which existing regulatory frameworks will adapt to rapid AI advances, especially for hardware security, remains open.
Suggested compromises
Balance rapid curriculum updates with quality control by de‑bureaucratising processes while still using NEP’s autonomy framework.
Use AI‑driven skill‑gap analysis to guide learning without excluding candidates; the tool serves as a diagnostic, not a gatekeeper.
Combine human oversight with AI‑driven decision‑making in telecom (engineers supervise AI actions rather than replace them).
Blend academic depth with industry mentorship: students receive foundational knowledge from universities and practical problem‑solving experience from industry partners.
Thought Provoking Comments
AI is not perfect. We need to recognize its deficiencies, question its outputs, and cultivate critical thinking and risk‑taking rather than just learning the technology.
Highlights the essential mindset shift required for responsible AI use, emphasizing human judgment over blind reliance on AI.
Set the tone for the discussion on talent, prompting other speakers to stress critical thinking, problem‑solving, and ethical awareness in AI education and hiring.
Speaker: Dr. Sarabjot Singh Anand
The power gap between GPUs (500‑700 W) and the human brain (≈20 W) shows we need neuromorphic or brain‑inspired computing, and hardware security is crucial because AI can be weaponised.
Introduces a hardware‑centric perspective that many participants had not considered, linking energy efficiency, emerging neuromorphic research, and security.
Shifted the conversation from software‑only talent to a broader ecosystem that includes hardware expertise, leading to later remarks on standards and the need for interdisciplinary training.
Speaker: Professor Dr. Jawar Singh
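The arithmetic behind the power-gap comparison above is straightforward; the wattage figures are the ones quoted in the comment:

```python
BRAIN_WATTS = 20          # approximate human brain power draw (W)
GPU_WATTS_LOW = 500       # lower bound quoted for a training GPU (W)
GPU_WATTS_HIGH = 700      # upper bound quoted (W)

ratio_low = GPU_WATTS_LOW / BRAIN_WATTS    # 25.0
ratio_high = GPU_WATTS_HIGH / BRAIN_WATTS  # 35.0
```

That is, a single GPU draws roughly 25 to 35 times the power of a human brain, which is the efficiency gap motivating the neuromorphic-computing argument.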
Next‑gen AI talent should be T‑shaped: deep domain expertise, fluency in AI tools, and ability to perform red‑team testing and containment.
Combines technical depth, cross‑domain breadth, and security testing into a concise talent model, foregrounding safety and governance.
Prompted other panelists to discuss ethical judgment, red‑team practices, and the importance of integrating security into curricula and industry training.
Speaker: Professor Dr. Alok Pandey
Next‑gen AI is an infrastructure of intelligence that multiplies our reasoning, creativity and values, enabling vernacular language interaction and democratizing access—much like TikTok did for content creation.
Frames AI as a societal infrastructure rather than a product, linking technology to inclusion, language diversity, and economic empowerment.
Expanded the dialogue from skill gaps to broader social impact, influencing later comments on curriculum relevance, multilingual support, and the role of AI in public services.
Speaker: Kunal Gupta
When evaluating fresh AI talent we look at problem‑solving ability, self‑directed learning, strong foundations, and the capacity to understand the customer’s problem—not just library knowledge.
Provides a concrete, practice‑oriented assessment framework that moves beyond theoretical knowledge to real‑world applicability.
Guided the discussion toward practical training methods, such as passion projects and mentorship, and reinforced the need for industry‑academia collaboration.
Speaker: Dr. Sarabjot Singh Anand
The next‑gen talent mix must include technical mastery, ethical judgment, and real‑world problem‑solving capabilities; people need to know where AI fits and where it doesn’t.
Distills the talent requirement into three actionable pillars, emphasizing ethical awareness alongside technical skills.
Reinforced earlier points about critical thinking and ethics, and led to deeper conversation about curriculum design that embeds these three dimensions.
Speaker: Vikash Srivastava
The biggest skill gap is the ability to define a problem; many copy trends without understanding the underlying need. Curriculum is bureaucratic and outdated, needing rapid, sector‑specific updates and multilingual support.
Diagnoses a root cause of talent mismatch—problem definition—and critiques systemic educational inertia, calling for agile, inclusive curricula.
Triggered a series of responses about de‑bureaucratising education, the role of the National Education Policy, and the need for state‑level institutional reforms.
Speaker: Kunal Gupta
In 6G every component will have AI built‑in; decisions will be distributed to the edge, requiring engineers to know machine learning and to follow new AI standards. Human role will shift to supervision.
Projects a concrete future telecom architecture where AI is integral, highlighting new skill sets and standardisation challenges.
Steered the conversation toward sector‑specific talent needs, prompting further discussion on standards, robustness, and the evolving role of telecom engineers.
Speaker: Dr. Devinder Singh
We can compute a fairness index (0‑1) to quantify bias; different sectors tolerate different bias levels. Similar metrics exist for robustness, and they can guide regulators, developers, and deployers.
Offers a practical metric‑based approach to bias and fairness, linking technical evaluation to policy and regulatory decisions.
Added a measurable dimension to earlier abstract discussions on ethics and bias, leading to acknowledgement of standards availability and encouraging concrete implementation.
Speaker: Dr. Devinder Singh (audience response)
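Dr. Singh's 0-1 fairness index is cited without a formula. One common construction, shown here purely as an illustrative sketch (the function names and the choice of a demographic-parity ratio are assumptions, not the method described in the session), is the ratio of favourable-outcome rates across groups:

```python
# Hypothetical sketch of a 0-1 "fairness index" as mentioned in the session.
# The panel did not specify a formula; this uses the demographic-parity ratio
# (min positive rate / max positive rate across groups) only as an illustration.

def positive_rate(outcomes):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def fairness_index(groups):
    """Return a value in [0, 1]; 1.0 means every group receives
    favourable outcomes at the same rate (no measured bias)."""
    rates = [positive_rate(g) for g in groups.values()]
    return min(rates) / max(rates) if max(rates) > 0 else 1.0

# Example: loan approvals for two demographic groups
approvals = {
    "group_a": [1, 1, 0, 1, 0],  # 60% approved
    "group_b": [1, 0, 0, 0, 1],  # 40% approved
}
print(round(fairness_index(approvals), 2))  # 0.4 / 0.6 ≈ 0.67
```

The sector-specific tolerances the comment mentions could then be expressed as thresholds on such an index, e.g. a regulator accepting deployments in high-stakes lending only above some agreed value.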
Overall Assessment

The discussion was shaped by a handful of pivotal insights that moved it beyond a generic talk on AI talent gaps. Early remarks on critical thinking and the hardware‑energy gap broadened the talent definition to include mindset and interdisciplinary expertise. The T‑shaped model and the three‑pillar framework (technical, ethical, problem‑solving) provided concrete structures that participants repeatedly referenced. Kunal Gupta’s view of AI as societal infrastructure and his critique of curriculum bureaucracy introduced a socio‑economic dimension, prompting calls for rapid, multilingual, and industry‑aligned education reforms. Sector‑specific forecasts, especially the 6G AI‑embedded network, anchored the conversation in concrete future skill requirements. Finally, the introduction of fairness and robustness indices gave the debate a measurable, policy‑oriented anchor. Collectively, these comments redirected the dialogue from abstract skill shortages to a nuanced, multi‑layered roadmap encompassing mindset, interdisciplinary knowledge, ethical safeguards, curriculum agility, and sector‑specific standards.

Follow-up Questions
Which three sectors should AI tools be applied to in a district panchayat, and can private companies’ CSR funds support this?
Understanding practical AI use‑cases at the local governance level and funding mechanisms is essential for early adoption and impact.
Speaker: Vikram Tripathi (audience)
What standards are needed for AI fairness, robustness, and bias, especially in telecom and other sectors, and how should they be implemented?
Clear, publicly available standards are required to ensure trustworthy AI systems and to guide developers, regulators, and users.
Speaker: Dr. Devinder Singh, Sh. Subodh Sachan
How can curriculum development be de‑bureaucratized and accelerated, particularly in state technical institutions, to keep pace with rapid AI advances?
Slow syllabus updates hinder the ability to produce AI‑ready graduates; faster, more autonomous curriculum processes are needed.
Speaker: Prof. Alok Pandey, Prof. Jawar Singh, Kunal Gupta
What is an effective approach to evaluate fresh AI talent, focusing on problem‑solving, curiosity, and domain understanding?
A robust assessment framework will help identify and nurture talent that can address real‑world AI challenges.
Speaker: Sh. Subodh Sachan, Dr. Sarabjot Singh Anand
How should AI education incorporate hardware awareness, neuromorphic computing, and hardware security to bridge the algorithm‑hardware gap?
Future AI solutions depend on efficient hardware; training must cover hardware‑software co‑design and security.
Speaker: Prof. Jawar Singh
How can concepts of AI safety, red‑teaming, and containment be integrated into university curricula across domains?
Embedding safety and containment practices in education will prepare graduates to develop responsible AI systems.
Speaker: Prof. Alok Pandey
What mechanisms can align AI education with industry skill needs given the lag in syllabus updates, especially in state institutions?
Bridging the gap between academia and industry ensures graduates possess relevant, employable skills.
Speaker: Kunal Gupta, Prof. Alok Pandey
How can AI education be made inclusive for all languages and regions, supporting vernacular language capabilities?
Inclusive AI tools in local languages broaden access and reduce the digital divide.
Speaker: Kunal Gupta
What are the requirements for AI‑native 6G standards, and how should engineers be trained to develop and operate such systems?
AI‑driven 6G networks will need new standards and a workforce skilled in machine learning integrated with telecom engineering.
Speaker: Dr. Devinder Singh, Sh. Subodh Sachan
What models of industry‑academia mentorship can effectively develop AI talent with real‑world problem‑solving skills?
Mentorship bridges theoretical knowledge and practical application, accelerating talent readiness.
Speaker: Dr. Sarabjot Singh Anand, Sh. Subodh Sachan
How can AI‑driven tools be used for skill‑gap analysis and adaptive learning recommendations to improve employability?
AI can personalize training pathways, identifying gaps and recommending targeted upskilling.
Speaker: Vikash Srivastava
How should AI safety and security be ensured in domain‑specific applications such as healthcare, law, and finance?
Domain‑specific risks require tailored safety frameworks and governance.
Speaker: Prof. Alok Pandey
How can Indian‑specific large language models be developed for sectors like law, agriculture, and others, using local data?
Tailored LLMs can address unique regulatory, linguistic, and data characteristics of Indian domains, enhancing relevance and adoption.
Speaker: Sh. Subodh Sachan, Dr. Sarabjot Singh Anand

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How the Global South Is Accelerating AI Adoption_ Finance Sector Insights


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened with John Tass-Parker framing the shift from “frontier AI” to “institutional AI,” emphasizing that in finance the critical challenge is establishing legitimacy and trust rather than raw model capability [4-10][18-19]. He argued that trust is the business model of financial institutions, and that systems demonstrating reliability, auditability and resilience will be rewarded [5-8][10-13].


Bharat then introduced Suvendu K. Pati of the RBI to discuss India’s regulatory stance on AI in finance [24-27]. Pati explained that the RBI’s approach is to enable responsible AI adoption through a technology-neutral, principle-based framework that focuses on innovation, risk mitigation and enhancing trust [30-36][38-41]. He stressed that liability rests with the regulated entities deploying AI, requiring a “glass-box” transparency where customers are informed of AI interaction and institutions must audit bias, drift and degradation [46-52][184-188]. To support the ecosystem, the RBI runs regular FinQuery/Finteract engagements, conducts surveys, and is developing an AI sandbox to provide data and compute resources to smaller fintechs [280-292].


Terah Lyons highlighted JPMorgan’s long-standing AI deployment across fraud detection, payments, markets and compliance, noting that the sector’s principle-based, technology-agnostic regulation has enabled extensive experimentation [73-80][82-84]. Ashutosh Sharma described AI’s strategic value for India’s fintechs through improved unit economics, the ability to thicken thin credit files using unstructured data, and expanded reach via conversational interfaces, while recommending human-in-the-loop controls and strict data-privacy safeguards [106-118][119-126][127-133]. Harshil Mathur added that AI accelerates processing of large data volumes and that “agentic commerce” – voice-first, multilingual conversational purchasing – can unlock the hundreds of millions of Indians currently excluded from online shopping [134-141][143-166].


Both regulators and industry agreed that deployers must act as custodians of trust, ensuring transparency, auditability and governance, with the RBI focusing guidance on regulated entities rather than model developers [171-188][196-199]. Participants identified key regulatory challenges such as data-residency requirements, limited access to cutting-edge models in Indian data centres, and the risk of LLM hallucinations, which constrain deployment in financial services [300-319][322-329]. The discussion concluded with a shared vision that AI can dramatically enhance financial inclusion by providing language- and voice-based banking, assistive technologies for the disabled, and personalized advisory services to all citizens [340-347][368-371][373-376].


Overall, the panel underscored that building trustworthy, transparent AI infrastructures and collaborative regulator-industry engagement are essential to realizing AI’s potential for inclusive finance in the Global South [11-13][20-21][376-380].


Keypoints


Major discussion points


Trust and legitimacy are the core “currency” for institutional AI in finance.


John highlighted that while model performance is important, the scarce attribute in the sector is legitimacy – the ability to demonstrate reliability, auditability and resilience, which determines whether regulators, boards and customers will adopt AI systems [4-10][12-19].


The RBI’s regulatory philosophy is to enable responsible AI adoption rather than prescribe technology-specific rules.


Suvendu explained a tech-neutral, principles-based approach that encourages innovation, mandates lifecycle governance, and introduces new tools such as an AI sandbox and the “seven sutras” to guide banks and fintechs [28-40][45-48][56-63][65-68][171-188][280-295].


Key AI use-cases in finance are already delivering value: fraud & scams remediation, payments, compliance, underwriting and new “agentic” commerce.


Terah listed fraud detection, payments and compliance as high-impact areas, while Ashutosh and Harshil described how AI can thicken thin credit files, enable voice-first conversational commerce, and automate large-scale data analysis [73-80][84-86][108-118][124-166][260-268][269-274].


Best-practice challenges for fintechs revolve around human-in-the-loop oversight, data privacy, residency requirements and model reliability (e.g., hallucinations).


Ashutosh stressed keeping a human in the loop and adhering to data-privacy guardrails [127-133]; Harshil added that Indian data-residency rules, lack of local compute, and the risk of erroneous LLM outputs are major operational hurdles [300-311][314-322].


A shared vision for the next five years is AI-driven financial inclusion through conversational, multilingual and assistive technologies.


Panelists repeatedly spoke of AI lowering service costs, delivering “voice-first” banking, expanding credit to the un-banked, and embedding language-aware assistants in every person’s pocket [338-346][350-357][368-374].


Overall purpose / goal of the discussion


The session was convened to explore how the financial sector-particularly in the Global South-can transition from “frontier AI” to “institutional AI” by building trustworthy, auditable systems, aligning regulatory frameworks (exemplified by the RBI’s approach), sharing practical use-cases, and charting a collaborative roadmap that enables responsible, scalable AI adoption across banks, fintechs and regulators.


Overall tone and its evolution


The conversation began with a formal, measured tone focused on the challenges of legitimacy and governance. As regulators and industry leaders presented concrete initiatives (AI sandbox, principles, use-cases), the tone shifted to optimistic and forward-looking, emphasizing innovation, partnership and the societal benefits of AI. Throughout, the dialogue remained collaborative and constructive, with occasional reiteration for emphasis but no overt conflict.


Speakers

John Tass-Parker


Role/Title: Lead, Policy Partnerships at JPMorgan Chase


Area of Expertise: Policy partnerships, AI governance in finance [S1]


Bharat


Role/Title: Moderator (affiliated with JPMorgan Chase)


Area of Expertise: Finance and AI moderation (not explicitly stated)


Harshil Mathur


Role/Title: Executive at Razorpay (company referenced in his remarks)


Area of Expertise: Fintech product development, AI-driven payments


Terah Lyons


Role/Title: JPMorgan Chase representative (speaker on trusted AI)


Area of Expertise: Trusted AI deployment, risk management in financial services


Ashutosh Sharma


Role/Title: Investor in India’s fintech ecosystem


Area of Expertise: Fintech investment, AI applications in finance [S9]


Suvendu K. Pati


Role/Title: Chief General Manager and Head of FinTech, Reserve Bank of India [S10][S11]


Area of Expertise: FinTech regulation, AI policy and governance in banking


Additional speakers:


– None identified beyond the listed speakers.


Full session report: comprehensive analysis and detailed insights

John Tass-Parker opened the session by noting that public conversation on artificial intelligence often centres on model breakthroughs, speed and capability, but that finance is now moving from a “frontier AI” era to an “institutional AI” era where the hard problem is not raw capability but legitimacy and trust [4-10]. He argued that in financial services trust is not merely a feature but the very business model [5-7]; institutions will only adopt systems they can govern, boards can scale and regulators can supervise [8-9]. Consequently, the sector rewards AI that is reliable, auditable and resilient, and the emerging focus is on the infrastructure that enables model risk management, oversight, explainability, cyber-security and regulatory engagement [12-13][14-19].


Moderator Bharat then introduced the panel and turned to Suvendu K. Pati of the Reserve Bank of India (RBI), asking how India is approaching AI regulation in the highly regulated financial sector [21-27].


Suvendu K. Pati explained that the RBI’s stance is to enable responsible AI adoption rather than impose prescriptive, technology-specific rules. The regulator adopts a technology-neutral, principle-based framework that encourages innovation while managing risk, emphasising “innovation versus restraint” as a regulatory nudge [30-36][38-41][56-63]. A lifecycle-management mindset is required, with liability and accountability placed on the regulated entities that deploy AI, not on the model developers [46-52]. The RBI calls for a “glass-box” approach: customers must be informed when they are interacting with an AI system and must be able to opt for a non-AI alternative [181-188]; institutions must embed audit mechanisms to monitor bias, model drift and degradation [190-193].
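The session does not specify how institutions should implement the audit mechanisms for drift and degradation mentioned above. As one hedged illustration (the Population Stability Index method, its thresholds, and all names here are assumptions, not RBI guidance), a deployer might monitor model-input drift like this:

```python
import math

# Hypothetical sketch of one common drift check (Population Stability Index).
# The RBI guidance summarised above does not prescribe a method; PSI is shown
# only as an illustrative way an institution might monitor model-input drift.

def psi(expected, actual):
    """PSI over pre-binned distributions (lists of bin proportions).
    Rule of thumb often cited: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift warranting investigation."""
    eps = 1e-6  # clamp to avoid log(0) on empty bins
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Example: model-score distribution at training time vs in production
baseline = [0.25, 0.25, 0.25, 0.25]
current  = [0.10, 0.20, 0.30, 0.40]
print(round(psi(baseline, current), 4))  # ≈ 0.2282, moderate-to-significant drift
```

A periodic check of this kind, logged and reviewed by internal audit, is one way the "glass-box" monitoring obligations could be made concrete.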


To operationalise this philosophy, the RBI has instituted regular industry-engagement programmes such as FinQuery/Finteract, which convene over 2,000 entities and include a survey covering close to 600 entities, complemented by one-hour deep-dive engagements with about 75 entities [280-284][285-286]. Building on the “seven sutras” that have been adopted nationally [56-59] and on its regulatory sandbox, in operation since 2019, the RBI is developing an AI sandbox that will provide access to curated data sets and affordable compute resources for smaller fintechs, thereby democratising AI capability [287-295][295-298]. The RBI also highlighted its own AI model, MuleHunter.ai, which is already deployed in 26 banks and is being rolled out to other entities [300-303]. It encourages self-regulatory organisations to create toolkits and benchmarking services for bias-free, transparent models [294-295].


Terah Lyons, representing JPMorgan Chase, highlighted that the bank has been deploying AI for over a decade across fraud and scams remediation, payments, market analytics and compliance [73-80]. She credited the sector’s principle-based, technology-agnostic regulation for allowing extensive experimentation while maintaining proportionate risk management [82-86]. This risk-aware governance, she argued, can serve as a template for other industries seeking trustworthy AI lifecycle oversight [82-86].


Ashutosh Sharma, a leading fintech investor, described AI’s strategic importance for India’s financial ecosystem. He noted that the Indian credit market, valued at US$2 trillion, incurs 3-5 % OPEX (≈US$60-100 billion annually), and AI can dramatically improve productivity and unit economics [106-112]. By leveraging unstructured data, AI can “thicken” thin credit files for the large un-formalised segment, enabling more inclusive underwriting [115-118]. He also stressed that conversational interfaces can broaden reach, but best practice demands a human-in-the-loop and strict adherence to data-privacy guardrails [127-136]. AI-led collection agents were cited as a way to enhance outreach while preserving oversight [260-265], and biometric payments in UPI were highlighted as an emerging use case [250-255].


Harshil Mathur expanded on the data-processing advantage, explaining that AI can analyse data at a thousand-fold speed compared with traditional tools, facilitating underwriting, risk management and fraud detection [134-141]. He introduced the concept of agentic commerce – voice-first, multilingual, conversational purchasing – as the next wave that could unlock the 300-400 million Indian UPI users who currently do not shop online [143-166]. Harshil emphasized that AI can dramatically lower the cost of servicing and enable voice-first, conversational experiences for villagers, while still recognising a role for human oversight in many processes [260-275].


Across the discussion, there was strong agreement that AI deployers must act as custodians of trust, providing glass-box transparency, auditability and board-level governance [171-188][196-199][10-13]. All speakers underscored that legitimacy, not merely model performance, is the scarce attribute for AI adoption in finance [4-10][12-19].


Disagreements were mild. Ashutosh stressed keeping a human in the loop and strict data-privacy compliance [127-136], whereas Harshil highlighted the potential of highly automated, voice-first agents to serve underserved villagers, while still acknowledging the need for safeguards [260-275]. A second tension concerned regulatory scope versus practical constraints: Suvendu stressed tech-neutral, deployer-focused guidance [190-196][184-188], while Harshil pointed to India’s stringent data-residency rules and the limited availability of cutting-edge compute in domestic data centres, which impede the use of foreign large language models [300-307][310-311].


Both regulators and industry converged on a vision for financial inclusion. Suvendu envisaged AI-driven alternate-data underwriting and language-aware conversational banking to bring the unbanked into the formal credit system [340-347]; Ashutosh spoke of “Viksit Bharat”, an AI-led financial ecosystem reaching every citizen [364]; and Terah reiterated that AI could place a personal financial advisor in every pocket, extending services to the poorest and to people with disabilities [368-374][373-376].


The panel identified several action items. The RBI will operationalise the AI sandbox and continue monthly FinQuery/Finteract sessions [280-295][295-298]; regulated entities are urged to embed board-level AI policies, audit frameworks and glass-box disclosures [190-196][181-188]; fintechs should adopt human-in-the-loop designs and comply with DPDP data-privacy guardrails [127-136]; industry bodies are called upon to develop bias-assessment toolkits [294-295]; and JPMorgan’s risk-aware governance was highlighted as a model for scaling trusted AI deployments [82-86].


Unresolved challenges remain. Mitigating LLM hallucinations to meet the sector’s near-zero tolerance for erroneous outputs is an open research problem [317-329]; clarifying liability when AI models produce faulty decisions, especially for third-party developers, requires further legal framing [190-193]; reconciling data-residency requirements with the need for cutting-edge models calls for domestic model development or policy adjustments [300-307][310-311]; and defining concrete metrics for fraud-reduction, mis-selling and inclusion impact is still pending [317-329][260-268]. Suvendu also noted that AI is a probabilistic technology, so regulators should adopt a tolerant and differentiated approach when embedding it in financial services [310-313].


In closing, Bharat thanked the distinguished panel, reaffirmed the focus on super-charging AI adoption in the Global South, and highlighted the consensus that trustworthy, transparent AI infrastructure-supported by collaborative regulator-industry engagement-will be pivotal for inclusive finance [376-380]. The discussion left participants optimistic that, within the next five years, AI will substantially lower service costs, deliver personalised “N-of-1” experiences, and unlock a multilingual, voice-first financial ecosystem for billions of underserved users [338-346][350-357][368-371].


Session transcript: complete transcript of the session
John Tass-Parker

Hello everyone, my name is, oh sorry, we’ve got a photographer here now, so we’re going to take our photo. False start, sorry, bear with us. Well, now that we’ve got the most important thing out of the way, we’ll get started. Hello everyone, my name is John Tass-Parker, I lead policy partnerships at JPMorgan Chase, and I just wanted to firstly thank everyone for being here for this very important conversation. When people talk about AI, the conversation tends to focus on model breakthroughs, speed, capability, but in finance, which our wonderful panellists here represent, that’s never been the real question. We’re really moving from this era of frontier AI, in our world certainly, to an era of institutional AI, and in this phase the hard problem is not actually the capability itself, it’s legitimacy and trust. Financial services is one of the most regulated sectors in the global economy, and yet it’s consistently been one of the earliest to be a part of the global economy and one of the first adopters of AI and all…

technologies. Why? Because in finance, trust is not a feature. It’s actually the business model. Institutions only absorb systems they trust. The C-suite can only scale what their boards can govern. Regulators can only enable what they can supervise. And increasingly, those that can demonstrate reliability, auditability, resilience, not just model performance, will be the ones that are rewarded. The more important story is coming into focus in rooms like this. It’s the infrastructure enabling institutional AI, model risk management, oversight, explainability, cyber security, regulatory engagement. Finance has had to learn how to deploy these incredibly powerful systems inside real world guardrails. And that’s why conversations matters beyond and beyond the door. And that’s why this conversation, frankly, not only matters for our financial and banking sectors, but also beyond that.

If we want AI to drive productivity for small business, for farmers, for teachers, for local government, for state government, for international, across the global south, then trusted deployment is what unlocks it. Capability is increasingly being commoditized. It’s the legitimacy that is the scarce attribute here. Today’s discussion is about how we build systems that institutions will actually absorb and how finance can help shape a framework for responsible, scalable adoption. With that, I’m delighted to hand it over to Bharat to set the broader context for how we think about safe and trusted AI globally.

Bharat

Thank you, John. It is my honor to moderate this discussion with a truly distinguished panel. So without further ado, let me just jump straight into it. Capitalizing the artificial intelligence moment for finance. The financial sector, as we all know, is one of the most regulated sectors in our country, India, and in most parts of the globe. So I think it’s appropriate to turn to the regulator from India, Mr. Suvendu Pati from RBI, who’s to my right. Suvendu ji, the financial sector has been one of the earliest adopters of AI, despite being one of the most regulated sectors, as I mentioned. Given this dichotomy, how is India approaching AI regulation in finance?

Suvendu K. Pati

Yeah, thank you, Bharat, and thank you, everyone, for having me here. I would begin by saying we are not exactly the phrasing I would entirely agree with, that regulating AI, but I would say that we are here to sort of enable responsible adoption of AI in the financial sector. That would be the overall approach to this technology, I would say, what Reserve Bank of India, you know, understand. and why I would say that it is clearly we recognizing the potential of this new technology, although it’s not very new in that sense, but it has really come to a limelight over the past five years. And that’s because, you know, data is one of the key ingredients which it thrives on.

And we had constituted an external expert committee of which I was a member to look at this sector and look at this technology, how it can be embedded into the financial services segment. So our approach when we looked at, you know, we wanted to be slightly more nudging towards enabling innovation in some sense. And unless we play around with this technology experiment enough, you would not ever utilize the full potential of it. So basically it is concentrated towards… you know, innovation, enablement, as well as risk mitigation. The risks that have been talked about, bias, accountability, auditability, explainability, these are pretty well known. And this needs to be managed in a way so that we ultimately we come out with the principles of enhancing trust, which was also a fundamental attribute of the financial sector.

And in terms of regulation, Reserve Bank’s approach has been largely tech neutral. It’s tech agnostic in some sense, because most of the times you would, you know, new technologies, new things would keep evolving. But for example, the safety or the consumer protection, not doing consumer harm, is a good stated objective to pursue irrespective of what technology you adopt. Similarly, on IT services, outsourcing guidelines, on, you know, managing concentration risk, there are already existing guidelines. which do provide the guidance to the regulated entities like banks and NBFCs, how do they manage their affairs. So in some sense, the consumer protection guidelines also do cover some of the safety aspects that we would generally talk about. So in some sense, there is a regulation which is in place.

There is guidance which is already in place. It’s only that because of this transformational technology, if there is a need to look at it from a new technology lens, any additional guidance that needs to be incremental guidance that needs to be provided. And that’s a precise point we have come out with in this report. And one of the things that we expect institutions to go forward is with the entire lifecycle management of AI, should be a thought process. The institutions need to look at… the liability and accountability framework in a much different way. Our expectation is that customers need to be protected in all cases. So it’s not a question, it’s about the model deployed by the entities rather than the model developers.

The responsibility should rest with the model deployers and which are the regulated entities in this case. And therefore, there are three or four additional dimensions which need to be looked at in terms of supervision, in terms of the internal audit assurance framework. How do you audit or how do you validate or improve your product approval process to capture the additional incremental risks on account of AI? So these are some of the additional things that we are looking at to provide some nudge. And one principle that we had come out. Within the report, there are seven principles or sutras that the report talks about. and these have been adopted. I’m happy to report that these have been adopted by the government of India for implementation across sectors.

So these are generic principles and they have found acceptance. So one of the principles that we have talked about there is innovation versus restraint. Everything else remaining constant, entity should prioritize innovation rather than restraint. So that is a nudge. That is an innovation enablement or a nudge that we are trying to give to the sector. They should feel comfortable with this. So our whole approach is optimistic. We want people to experiment, adopt it responsibly, but think creatively in terms of liability framework, revisiting the accountability framework, have a board governance policy in place, and improve their internal systems and processes to give the comfort to not only the people, not only their own set of employees, but to other stakeholders.

about this new technology. All said and done, this is a probabilistic technology. There are bound to be some mistakes here and there. So we need to have a very tolerant and differentiated approach when we embed this into the financial services where people’s money is involved. I will stop here, but we’ll talk something more later.

Bharat

Thank you, Suvendu ji, for that insight. If I could now turn towards the global view, our employer J.P. Morgan Chase is one of the world’s largest deployers of artificial intelligence. Terah, in terms of trust, what are some of the most impactful use cases trusted AI is being leveraged for in finance in your purview?

Terah Lyons

We joke that we shouldn't worry about AI until we figure out AV, so I guess this is a perfect example of that. Thanks for the question, Bharat. Maybe the first thing to say about this, and this probably isn't news to this room especially, is that AI has been used in finance in deployed settings for over a decade. At JPMorgan Chase, we've been using it across use cases spanning our bank: starting first with the era of analytic tools, moving into machine learning capabilities, now in the direction of large language model deployment, and looking directionally towards the era of agentic capabilities and beyond. Spanning all of those, the most impactful use cases that we have seen are certainly in fraud and scams remediation, which is just a huge priority for the entire sector.

In payments there are some really exciting applications, and in markets as well. And honestly, in compliance use cases for us too, just given the focus that we have on ensuring that we're compliant with our regulatory requirements. I also want to pick up on a couple of things that were previously mentioned that are worth underscoring. One was the point that you made, Mr. Pati, about one of the strengths of the financial sector regulatory approach being the principles-based, technology-neutral approach that our regulators have taken. I think it has allowed banks to experiment to a wide degree with the types of techniques that I just talked about.

All while thinking about the proportionate risk of each one of those use cases as we are deploying. So I think that's been really key. The second point to underscore, which you had mentioned previously and which is a really good one for us to address, is that because of the strength of the financial sector's approach to AI governance, there are really useful lessons that can be exported from this sector when considering questions of oversight and regulatory control.

That speaks to the sutras from the RBI report that you mentioned being adopted more widely across the economy, which I think are really well aligned to wider consideration beyond just the banking sector.

Bharat

Thank you, Tara. And now we move to the more important issue of putting money into this industry. Ashutosh is one of the leading deployers of finance in India's fintech ecosystem. What makes AI so strategic, in your view, for the sector? And what are some of the best practices you see fintechs adopting to build trust in AI in particular?

Ashutosh Sharma

Super. Thank you so much for having me here. Over the last two, three, four days, folks in the room have probably attended 5, 10, 15 such sessions, maybe more. And if there's one takeaway that you have taken with you, it is that AI is going to change almost everything, and so it will the financial services sector. I think this is equally, in fact more, applicable to India, in a bigger measure than anywhere else in the world. And the reason, I would say, is threefold. The first is unit economics. Let's take an example: the Indian credit market is $2 trillion in value. We spend anywhere from 3% to 5% on OPEX; just on OPEX, we invest $60 to $100 billion a year.

So what AI can do strategically to improve productivity, and therefore make these businesses much healthier, is only the beginning of the journey we are on. The second point of strategic importance is risk. A large section of our economy in India is informal. What I mean is that, in credit parlance, this is called a thin-file issue: for a large section of society, we don't have enough data points, enough metrics, to make the file thick enough for you to underwrite them. Now with AI, because of the technology's ability to use unstructured data, you can actually, very quickly and in a very cost-effective way, turn that thin file into a thick file.
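As a purely illustrative sketch of the thin-file-to-thick-file idea (all field names here are hypothetical, not from any real bureau or underwriting system): sparse bureau fields are enriched with features derived from unstructured signals, such as categorized transaction text, to give an underwriting model something to score.

```python
def thicken(thin_file: dict, signals: list) -> dict:
    """Enrich a sparse ('thin') applicant record with features derived
    from unstructured signals, e.g. categorized transaction lines.
    Returns a new 'thick' record; the input is left untouched."""
    thick = dict(thin_file)  # shallow copy: do not mutate the original
    inflow = sum(s["amount"] for s in signals if s["kind"] == "credit")
    outflow = sum(s["amount"] for s in signals if s["kind"] == "debit")
    thick["monthly_inflow"] = inflow
    thick["monthly_outflow"] = outflow
    # A simple derived metric an underwriter might look at.
    thick["savings_ratio"] = round((inflow - outflow) / inflow, 2) if inflow else 0.0
    thick["n_signals"] = len(signals)
    return thick
```

The point is not the specific features but the mechanism: unstructured data, once categorized, yields additional metrics that make a previously unscoreable file scoreable.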

So underwriting risk for a large section of society in India will now be possible with this. And one of the other more important points is reach. Buying a financial product is not like buying a shirt on Myntra, ordering food on Swiggy, or ordering a saree on Meesho. This is a complex product; it needs engagement. The app, or whatever platform you're using, asks you a bunch of questions before you even decide. Today, for a large section of Indian society, it's very hard to engage with that app; it's complex. Now imagine a world tomorrow where you can speak to that app. That enables reach of financial products and financial services to, again, a very large section of society.

So I think it's extremely strategic from that standpoint. As for best practices: look, we are too early. We can only talk about practices; whether they are the best or not, only time will tell. Because we are early, and because of what sir said, a financial services transaction is a high-impact transaction for anyone, and therefore having a bot run a bank is not advisable. So one of the practices that good fintech companies are using is keeping a human in the loop: the technology can prepare a file, but in the end it's a human who takes the call. The second thing, again, is data. While data is of primary importance in the AI world, as a fintech or financial services provider you hold a lot of sensitive data, so ensuring at all times that you are following the DPDP guardrails is again important. This is just a start and it will evolve, but I think it's a good thing that we're following them.
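The human-in-the-loop practice described here can be sketched as a tiny state machine (a minimal illustration; the class and function names are hypothetical, not any real fintech's workflow): the model may only move a file into review, and only a named human reviewer can move it to a terminal state.

```python
from dataclasses import dataclass, field

@dataclass
class LoanFile:
    applicant: str
    model_score: float               # model's risk estimate, 0..1
    notes: dict = field(default_factory=dict)
    status: str = "pending_review"   # only a human moves it past this

def prepare_file(applicant: str, model_score: float) -> LoanFile:
    """The AI assembles and scores the file; it never approves anything."""
    return LoanFile(applicant=applicant, model_score=model_score)

def human_decision(file: LoanFile, approve: bool, reviewer: str) -> LoanFile:
    """Only a named human reviewer can approve or reject, leaving an audit trail."""
    if file.status != "pending_review":
        raise ValueError("file is not awaiting review")
    file.status = "approved" if approve else "rejected"
    file.notes["reviewed_by"] = reviewer
    return file
```

The design point is structural: there is no code path from model output to "approved" that bypasses `human_decision`, and every terminal decision records who made it.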

Bharat

Thank you, Ashutosh. Turning now to the person who's actually deploying the money, which is Harshil, at the pointy edge. Do you really believe that this is AI's big moment in finance? I gather at Razorpay you are rolling out AI-based payment solutions. How do you think this will transform the payments landscape?

Harshil Mathur

First of all, just from a back-end usage perspective, like my colleague spoke about, finance typically deals with large volumes of data, and large volumes of data are generally hard for humans to really skim through; we always have to use machines and software to run through it. AI makes that job much, much easier. Anywhere large volumes of data have to be interpreted and inferences drawn, you need systems to do that, and AI is a system that lets you work across far more data points than was possible with older systems. You can do only so much analysis in Excel sheets or other software, but with AI you can do 1000x more.

So this advantage applies to underwriting, risk management, identifying fraud, and the multiple other things the finance ecosystem has to do, and it becomes increasingly important. That's why finance has been one of the earliest adopters: it's just natural, because the system is so much better than the previous systems. Coming to payments, one of the things we've done is take a very early bet on agentic commerce, and the reason is fairly simple: there are 300 to 400 million Indian consumers who are on UPI, on digital payments, today. Fewer than 200 million of those actually shop online. But if you peel it even further, and this is based on data that we see at Razorpay, fewer than 10 million of those users do 70% of all commerce in India.

Just 10 million, in a country of a billion and a half, do 70% of all commerce online. And that's because, like he said, the commerce systems we have built so far are not natural to most people in India. We've built apps, we've made all the access available, but while the access is there, the accessibility is missing, because Indians don't buy stuff the way Americans do. The way we have built our apps is the American way of shopping: it's like a supermarket, everything is available, you pick and choose yourself. Indians shop at retailers, where you go and talk. You say, hey, I want to buy this; he tells you, hey, why don't you buy this instead, and so on.

We are conversational in commerce. And that's why the app ecosystem we have so far has only penetrated 10 million, maybe 15 million, people; the rest of India needs conversations. Take the example of travel: there are OTAs available everywhere, yet $50 billion of travel is purchased through agents on the ground, because people want to talk before they make a booking. 95% of insurance in India is sold through offline brokers. There's PolicyBazaar and so many online brokers available which will give you far cheaper insurance and will not mis-sell to you, yet people still trust their local insurance broker, because Indians want to converse before they buy. They want to ask 20 questions about what they're buying, and that's hard to do in the apps we have so far.

And I think agentic commerce is that next wave, which will unlock the next form of commerce for the next billion people who have not really come online, in spite of all the apps being available; who are not really shopping online, not really consuming online. They may be paying their bills online, but that's it, just because they don't want to stand in line. Everything else they're still doing through offline channels, and if we can bridge that gap through agentic commerce, which is voice-first, multilingual, and conversational, I think we can unlock commerce for a large volume of Indians who have not come online properly.

Bharat

Thank you, Harshil. The next angle I'd like to touch upon is elevating deployers as key custodians of trust. So, Suvenduji, the RBI has traditionally been ahead of the curve in comparison to some other sectors, due to key initiatives you've promulgated such as the FREE-AI Committee and its very progressive policy recommendations. If I may ask, is there a distinction in your approach for regulating AI developers and…

Suvendu K. Pati

See, under the mandate given to the Reserve Bank of India, under the Reserve Bank of India Act or the Banking Regulation Act, we can regulate only the regulated entities: the banks, non-banking financial companies, fintechs, and so forth. Model developers would strictly fit into the IT or technology companies, so within our remit, or the official mandate that we have, we really cannot regulate or prescribe rules for them. What we are looking at is the deployment point of view. So our guidance (I would refrain from using the word "regulation" in this context) would be towards the deployers, which are the regulated entities.

Some of these are, as I would say, already in place through various guidelines; I have talked about IT outsourcing, third-party dependencies, and also customer engagement and things like that. So what we are looking at is how the regulated entity can be accountable. Once the regulated entity is providing a service to a customer, it is the complete responsibility of the entity to ensure the transparency and accountability of the way the customer engages with an AI system or service. If I may loosely put it, a black box is something typically associated with AI systems: you really do not know what happens inside, and the result is produced.

But as far as the regulated entity's dealings with customers are concerned, we would like this to be not a black box but a glass box. The customer should know completely what they are getting. When they are engaging, they should be clearly told upfront that they are engaging with an AI system, and if they choose, they should have the freedom to opt for a non-AI-based engagement. That is transparency. Similarly, for accountability, the institution should devise its audit systems to capture the incremental risks arising out of AI. How does bias get removed? Is there model drift? Is there model degradation? Does it get addressed periodically?

Those kinds of checks and balances the regulated entities need to put in place as part of their board policy and its implementation, along with things like understandability by design: the process itself should ensure implementation. These are some of the things we have talked about, and over a period of time we would like this to get addressed, refined, and embedded across their processes.

Bharat

Thank you, Suvenduji. Deployers are also fast emerging as key custodians of trust in the AI ecosystem, and frankly, for large financial services firms such as J.P. Morgan, there is a responsibility to the global economy to get AI integration right. So how is J.P. Morgan positioning itself in this debate?

Terah Lyons

Well, AI is not made useful unless it's deployed, and it can't be deployed at scale without trust and transparency. So the way we're thinking about these questions really rests, again, on the strengths of the culture of risk management and oversight that we have grown into in financial services, deploying technology of all sorts, not just AI, but certainly AI more recently, as I mentioned, and on having a use-case focus on the risks entailed in every single one of our deployments. A lot of the lessons learned in financial services risk management are applicable widely to other sectors, as we've talked a little bit about this afternoon: in AI lifecycle oversight and management, in model risk management guidelines and principles, in the principles and practices of real transparency and auditability that we've spoken to up here, and many, many others.

And so, as we've spoken to, banks and financial service organizations are uniquely positioned in many ways: given the nature of the data estates that we sit on top of, given the necessity of the business model, given customer demand and market demand, and a host of other issues that surround the innovation envelope here. But the risk management practices that we have are a huge strength there too. So, yeah, I would say that's all really key to engendering trust with customers and making sure that we're doing right by the products and services that we're delivering to them.

Suvendu K. Pati

Providing what information they need to fill in while opening an account, those kinds of summarization use cases may not need to be subjected to a very elaborate degree of scrutiny, risk testing, or templated processes. This is what I would feel personally. And just to make this more of a conversation, I'll add one additional point: it's not just about the data, it's about the information that we're getting from the data.

Harshil Mathur

If a large company is competing with a small retailer, say, and they've opened a new supermarket opposite them, it's hard for the small retailer to compete, because they don't have the intelligence available to the large supermarket in terms of what products to stock, what things to deploy, what marketing ideas to use, and so on. But now you can literally open the ChatGPT app, ask it to prepare a business plan for you, tell it, how do I fight this, and it can really help you compete. So the advantage of having intelligence on demand really balances the scales compared to what was available before, because it reduces the cost of intelligence.

A large company could always afford that intelligence, but now it's available to everyone. A similar example: let's say it's a farmer on the ground who is unable to figure out which crop he should purchase this season. I met companies recently who are essentially doing that: they're deploying AI models to be available to farmers on the ground, so that, hey, you can ask it, and it can tell you information that is generally not available to you. So that's the general side of things. Now, if you come to the finance side, one of the biggest problems in finance is mis-selling, or fraud. For example, my dad is 70 years old, and I've told him, hey, if you're making any expensive purchase decision, just give me a call.

I don't know if he's getting defrauded, if he's facing a digital-arrest scam, if he's being sold insurance he doesn't need, or a financial instrument he doesn't need. But AI allows me to put something smarter than me in his own pocket, so he doesn't need to call me now. I taught my dad how to use ChatGPT; now he opens it up and asks it, in his own voice, I'm going to buy this; should I buy it or should I not? I can imagine that a year or two from now, all of us will have an AI agent who is essentially your assistant.

So when you're shopping for something, it's searching for the best prices online. When you're buying something, it's checking for the best features: is this the best product, or is something else? It's doing research on Reddit, doing research on Twitter, telling you, hey, don't buy this, buy that. Or: you're on this website which is clearly mis-selling, which is fraudulent, the price looks too good to be true, so don't buy from here. Having that intelligence available to every person on demand is a massive advantage, and I think the impact of it on society will be fairly positive. People are worried about frauds happening because of AI.

And that's true in the short term, while the ecosystem is getting prepared. But in the longer term, frauds and mis-selling and all of that will go down significantly, because everyone will have an intelligent agent who is extremely smart and who can tell things far better than a human can. So I think that can really bring a massive positive change.

Suvendu K. Pati

Just in case, Harshil, do ask your dad to be aware of hallucinations…

Bharat

Thank you, Harshil. So, innovation and commitment are key in any new technology, as we all know. Ashutosh, what are some of the promising business models you are excited about in fintech within the AI space? Which ones do you see gaining more traction in the Global South? And in your view as an investor, do some key gaps still exist which are currently unaddressed and could benefit in some way from an AI solution?

Ashutosh Sharma

I'm always excited about interesting ideas, Bharat. With that said, I think the adoption is happening all across the subsectors of financial services. In subsectors where India has naturally been at the forefront of innovation, payments come to mind; UPI is a very good example, and I think India is leading the innovation wave even with the advent of AI. Right about the time the Indian e-commerce platforms were getting connected with the large foundational models, about the same evening, Indian payments companies were launching products, as Harshil said, that could enable you to buy from within the model, or even within the chat experience you are having in the e-commerce app, Swiggy or Flipkart or whichever.

So within payments, we are at the forefront. Talking generally, the most use of AI I see today is in two areas. One is productivity; this relates to the unit economics point I made previously, and that's happening. But more important is customer experience, and I'll give you two examples. One is in UPI: we are now moving from this kind of OTP world to a biometric world, wherein, just using your biometrics, you can make a payment. In part, that is enabled by AI. And imagine how nice the customer experience will be with this, rather than waiting for something to come to you.

In lending, almost 60 to 70% of collections for the first 30 days have now moved to an AI-led agent. As humans, we get irritated calling 20 people all day, and by the end of the day the human agent is upset, the customer is upset, and the collection is not happening. Whereas an AI agent can be empathetic; the agent can remember: this is the time when Ashutosh is free, let me call him then. So we are seeing a lot of movement in the customer experience domain as well. As for gaps, one thing where I feel India is slightly behind is that the West has probably 50 to 60 years of customer data, whereas in India, UPI, credit cards, all of that is a new phenomenon. There is no easy answer for getting to levels of underwriting closer to what the West may enable with AI; that availability of multi-cyclical, deep data is something we lack. We have a lot of data, on hundreds of millions of customers.

But the depth of that data is something I think we need to consider as well.

Bharat

Well, as they say, data is the new gold, so you need to keep it with you as much as possible, and that's going to be challenging for a country of a billion and four hundred million. So, Suvenduji, are there any engagement pathways the RBI is using to engage and partner with the industry to promote AI adoption in finance? And in the Indian startup ecosystem, are there any specific initiatives to promote AI adoption, and how can the banking sector support this diffusion?

Suvendu K. Pati

Yeah, good point. First of all, during the last couple of years we have had multiple engagements. In fact, we have scheduled monthly engagements with fintechs, titled FinQuery and Finteract. These events take place at very regular intervals, across cities and through a hybrid channel as well, and roughly 2,000-plus entities have engaged with us in the last one and a half years. Specifically on AI, we did a dipstick survey across close to 600 entities, including banks and NBFCs, and a deep engagement of about one hour each with more than 75 entities, to understand their adoption, the areas where they see potential implementation, and the challenges they are witnessing.

So there is a constant engagement. After the FREE-AI Committee report was released on our website in August, we have had around three rounds of consultations with various stakeholders, including fintechs, to take their inputs on board. It's a continuous process. I would also like to draw attention to the regulatory sandbox framework, which has been in place since 2019; entities are welcome to partner with us and experiment under the regulatory sandbox whenever they require any regulatory dispensation or relaxation. And, as articulated in our recommendations, one of the key constraints that we see, especially for the smaller fintechs, is the lack of access to affordable compute infra, as well as the lack of access to data on the basis of which they can innovate and build models. So this is top of mind: we are committed to designing and operationalizing what we would call an AI sandbox. It's not exactly a regulatory sandbox, but it will provide access to data and compute, with the overall aim of democratizing AI across smaller institutions. A bank like JP Morgan or State Bank or HDFC may have enough data, bandwidth, and resources to build their models, but what about the smaller fintechs and other entities?

So with that vision, we will be operationalizing the AI sandbox, which would give these players access to those resources to innovate. On top of that, we ourselves are building models like MuleHunter.AI, which is already implemented across 26 banks and is getting implemented across other entities as well. This engagement is a continuous process, and we would like them to partner with us, submit proposals, and work with us. We also expect the industry bodies, like the self-regulatory organizations (one has already been recognized), to come up with toolkits or benchmarking services against which the AI models can test themselves and see whether they are bias-free and meet the expected transparency standards.

So it is to be expected that the fintech industry itself comes up with those kinds of standards, benchmarks, and toolkits to support innovation.

Bharat

Thank you. As we all know, regulatory engagement is critical to promoting innovation. So, Harshil, for a company such as yours, what are the key regulatory challenges you are facing in the deployment of AI in finance? How does your engagement with government and regulatory bodies actually address these? And do you see any public-private partnership model which could be helpful in taking the industry to the next level?

Harshil Mathur

See, the core aspects of regulation, as sir said, generally don't go into technology or which technology to use; there are general principles of regulation, and you can use any technology to apply the same principles. In most cases we have been fairly successful in deploying AI models while meeting the requirements of regulators. The few areas where it sometimes becomes a challenge: first, we have a very strong data residency requirement in India, which is rightly so, and a lot of AI models coming from the West don't meet the data residency requirements for India today. And the open-source models are largely coming from China, which makes them harder to deploy.

So we don't have enough deployment of the cutting-edge models in India data centers today, and that sometimes delays deployments, because as a regulated company we can't really use them. The good part is that three language models from India were announced at the AI summit today. That can be a good way forward for financial companies in India who want to deploy models within India data centers and within Indian boundaries: at least those models are available, and that can be a starting point. And we are hoping that the global companies will bring some of the cutting-edge models to India data centers as well, so they can be deployed.

I think that's one challenge, just on the infrastructure itself: the cutting-edge model infrastructure is not available. We can use it for coding, for multiple internal purposes, but we can't really use it for anything that touches customer data or PII; we can't use those models till they're deployed in India data centers, and hopefully that is going to change. The second aspect, related to it, is that as a financial company, as sir said, the biggest challenge for you is controlling where the data goes and where it flows out. AI models, as somebody said earlier, are a black box.

Once the data enters, you don't know where it comes out and when, and drawing clear boundaries on that is hard. That is one big challenge specifically with LLMs; there are other forms of AI, specific targeted models, where those guardrails are available and that works fine. LLMs just don't have guardrails on where data goes in and where it comes out. The third aspect is hallucinations. With anything to do with financial data, trust is very, very critical. I'm okay if the system fails 10% of the time, but it should not be wrong 10% of the time. It's okay if the system says, hey, I can't do this analysis.

But if it gives the wrong analysis, you use it as a source of truth and act on it, and then you tell the customer a commitment was successful when it actually wasn't, even if that happens 1% of the time, it creates a massive issue for you. So that's the third piece, and it's less to do with regulation than with what is expected of financial players: you can't be saying something that is not true. And LLM models by default can say things that are not true; even if it happens in 1% or 2% of cases, it can become a massive liability risk for financial companies.

So those are the three big aspects, and there are solutions available to some of them. The first is fairly easily solvable: global companies will probably solve it, or Indian sovereign models will get there. The second is partly solvable, because you can put guardrails around it and use the right kind of AI models where that is possible. The third is a fundamental problem of how LLM models work, so that part is going to be harder to solve. Yes, there are newer models which hallucinate less, but as I said, even if it hallucinates less than 0.1% of the time, I still can't deploy an LLM model till I'm certain about it.

And I think that part will require us to either use alternate means or wait for LLM models that can solve that problem.
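The "fail, but don't be wrong" requirement maps to a selective-prediction, or abstention, pattern. As a minimal sketch (the interface is entirely hypothetical, and it assumes a calibrated confidence score, which is itself nontrivial to obtain from an LLM), a deployer answers only above a strict threshold and abstains otherwise:

```python
def answer_or_abstain(prediction: str, confidence: float,
                      threshold: float = 0.999) -> dict:
    """Return the model's answer only when confidence clears a strict bar;
    otherwise abstain explicitly ('I can't do this analysis') rather than
    risk emitting a wrong answer."""
    if confidence >= threshold:
        return {"status": "answered", "value": prediction}
    return {"status": "abstained", "value": None}
```

The threshold trades coverage (how often the system answers) against the wrong-answer rate: in the panel's terms, raising it makes the system "fail" more often so that it is "wrong" less often.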

Bharat

Ashutosh, you're looking at investee companies across the spectrum, not only in finance but in other areas that use artificial intelligence. In your view, what are some of the key regulatory gaps highlighted by your investee companies in the fintech sector? And going forward, what progressive regulatory measures can the government consider to make this smoother?

Ashutosh Sharma

I think the RBI has in general been very progressive, not as a regulator but as a guide in this situation. The seven sutras have been really helpful for people to understand at least what the direction of travel is. Also, one acceptance we need to make is that just as we are all learning about AI and its use cases, the regulators are also learning, and things are changing fast; therefore, the end state of what the regulation looks like may be very different. I have a slightly different policy ask, adding to what Harshil was saying: compute is a bigger problem for my companies than regulation, and so are researchers. I don't think we can solve these through regulation or policy, but since you were asking what companies struggle with, those two things are the bigger problems at this time.

Bharat

Thanks. I'm conscious of the time, so I'll use my moderator's prerogative to ask one final round of questions to all our distinguished panelists. What is one big bet you would like to take on how AI will transform finance in the next five years? We start with Suvenduji.

Suvendu K. Pati

Okay. I know that time is really up, but yes, it’s not a bet, it’s a wish list rather. I’m glad that Ashutosh has already covered some of it in a very elaborate way. One thing I would like to see is how AI can bring about substantive improvement in financial inclusion: bringing people into formal institutional credit through alternate data analytics and new underwriting models. Getting them on board would be a big unlock for a country like India. The second aspect I would like to emphasize, which Harshil has also touched upon, is that all our fintech apps are currently designed for very digitally savvy people.

How do we use AI to bring language, voice-based banking, conversational banking and payments? I shouldn’t have to fill a form; I just need to instruct, and that translates into action. So people who are uneducated but literate, or simply logically minded, who already use WhatsApp voice messages, should be able to come on board and, using AI, enter the financial fold. We should also focus more research on assistive technologies. For example, how do we use AI to help a disabled person, someone who can’t see or can’t hear, access information and financial services in a more efficient manner? These are the areas where this technology is going to play a role, and we would like to see it reach the point where it really bridges the so-called digital divide, which is at risk of widening.

We should bring it back to that, and AI can prove its point there. I am very optimistic about this, but a lot of work needs to be done in these areas.

Bharat

Thank you. Harshil?

Harshil Mathur

I completely agree. I think the ability to bring the cost of servicing down significantly, so that you can deliver personalization at an N of 1, at an individual level, can have varied impacts. Like I said, typically in India, when HNIs open a bank account they don’t fill a form: a person comes to you, fills the form for you, asks for the five documents and a signature, and it’s done. But the person who needs this most is the villager, because he really can’t fill a form, yet he’s the one asked to stand in line and fill one. AI can allow us to deliver that experience to the villager on the ground. And I think that is going to be the biggest change AI can bring to finance: allowing the cost of servicing to come down drastically, personalization to happen at an individual level, and voice-based interactions to drive it.

And as somebody said earlier, that’s what’s natural to us. That’s what’s natural to Indians, if we can make it all voice-based.

Bharat

Ashutosh?

Ashutosh Sharma

I think AI-led financial services leading us to Viksit Bharat would be my bet.

Bharat

I think that’s an aspiration for all of us. And Terah, as the one lady on the panel, the last word is yours.

Terah Lyons

I would underscore all the answers already provided. I think the financial inclusion potential, the accessibility potential here is massive. Imagine a world in which we can not just expand the credit envelope, but put a financial advisor in every single person’s pocket, something that today only the wealthiest in society can afford. So I look forward to that world coming into being.

Bharat

Thank you.

Suvendu K. Pati

And the last word, if I may slip it in: language. India is a country with diverse languages. We can leverage our languages, and AI can play to that strength.

Bharat

Well, I’d like to thank our distinguished panel for a truly enlightening discussion. The topic was supercharging AI adoption in the Global South, and I think many of the thoughts from this panel will go a long way towards achieving that goal. Thank you very much once again.

Related Resources
Knowledge base sources related to the discussion topics (13)

Factual Notes
Claims verified against the Diplo knowledge base (4)

Confirmed (high confidence)

“John Tass‑Parker leads policy partnerships at JPMorgan Chase and is a key voice in discussions about AI trust in finance.”

The knowledge base lists John Tass-Parker as leading policy partnerships at JPMorgan Chase, confirming his role in AI-related finance discussions [S1].

Additional Context (medium confidence)

“In financial services, trust is becoming a measurable and central component of business models.”

A World Economic Forum source notes that trust is now measurable through provenance, authenticity and verification, emphasizing that “it’s going to be about trust” in finance [S84].

Confirmed (high confidence)

“The RBI adopts a technology‑neutral, principle‑based framework for AI regulation, emphasizing responsible adoption over prescriptive rules.”

A session summary highlights that regulatory approaches are leaning toward a technology-neutral legislative stance, matching the RBI’s principle-based framework description [S88].

Confirmed (medium confidence)

“The RBI warns that widespread AI use in banking could create financial stability risks, prompting a responsible‑AI stance.”

The RBI Governor has publicly highlighted AI-related risks to financial stability in the banking and private-credit markets, supporting the claim of a risk-aware, responsible-AI approach [S87].

External Sources (92)
S1
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -John Tass-Parker- Leads policy partnerships at JPMorgan Chase
S2
[Online Event] Cables, Novels and Nobels: The Journey of Diplomacy and Literature  — Paolo Trichilo:Yes. Well, indeed, this book is a special biography of each of these writers and claims with their diplom…
S3
Announcement of New Delhi Frontier AI Commitments — -Bharat: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified …
S4
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S5
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — -Amish Devagon: Role/Title not explicitly mentioned, appears to be an interviewer or journalist conducting the discussio…
S6
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — – Ashutosh Sharma- Harshil Mathur – Suvendu K. Pati- Ashutosh Sharma- Harshil Mathur – Terah Lyons- Harshil Mathur
S7
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — – John Tass-Parker- Terah Lyons – Terah Lyons- Harshil Mathur
S8
The Power of Satellites in Emergency Alerting and Protecting Lives — Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for this introductory remark. I will…
S9
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — -Ashutosh Sharma- Investor in India’s fintech ecosystem, described as one of the leading deployers of finance in fintech
S10
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — -Suvendu Pati- Chief General Manager and Head of FinTech at the Reserve Bank of India
S11
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — – Suvendu K. Pati- Bharat – Suvendu K. Pati- Harshil Mathur
S12
Bottom-up AI and the right to be humanly imperfect (DiploFoundation) — From the analysis of these arguments, it can be inferred that while third-party tools offer convenience and efficiency i…
S13
Advancing Scientific AI with Safety Ethics and Responsibility — All of these things don’t respect national borders, right? So, how it’s going to spread. If people using VPN or other th…
S14
GOVERNING AI FOR HUMANITY — – 190 Discussions about AI often resolve into extremes. In our consultations around the world, we engaged with those who…
S15
AI Meets Agriculture Building Food Security and Climate Resilien — And this is not proprietary. It is being designed as a replicable public infrastructure model for India and the entire g…
S17
Multistakeholder platform regulation and the Global South | IGF 2023 Town Hall #170 — In conclusion, the analysis indicates a negative sentiment towards cooperation between authorities, pointing out the pot…
S18
PART II — – A data protection impact assessment referred to in subsection (1) shall in particular be required in the case of: (4) …
S19
Annex 1 — 2The existence of a high risk, in particular when using new technologies, depends on the nature, extent, circumstances a…
S20
https://dig.watch/event/india-ai-impact-summit-2026/how-the-global-south-is-accelerating-ai-adoption_-finance-sector-insights — A large company could always afford that intelligence, but now it’s available. Similar examples will be available later….
S21
Conversational AI in low income & resource settings | IGF 2023 — Digital patient engagement is crucial for maintaining relationships with patients even after they leave the hospital. Pl…
S22
WS #262 Innovative Financing Mechanisms to Bridge the Digital Divide — – Knowledge sharing between community networks – Lowering costs through community involvement 3. An interactive Q&A se…
S23
How to make AI governance fit for purpose? — Shan emphasized international collaboration through the ITU and global standards development, expressing concern about p…
S24
Agents of Change AI for Government Services & Climate Resilience — “…they can hallucinate it can have bias, it can have toxicity, avoid all of that and they are unpredictable ultimately…
S25
Secure Finance Risk-Based AI Policy for the Banking Sector — It may be appreciated that an India first approach is not inward looking. It is context aware. It ensures that governanc…
S26
Agentic AI in Focus Opportunities Risks and Governance — Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have be…
S27
The International Observatory on Information and Democracy | IGF 2023 Town Hall #128 — Additionally, Nnenna Nwakanma’s perspective on regulation is explored further. She emphasises the significance of regula…
S28
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Overall, the sentiment towards implementing principles and regulation in AI is positive. Although the analysis does not …
S29
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S30
EUROPEAN COMMISSION — The security of retail payments is a crucial prerequisite for payment users and merchants alike. Consumers…
S31
https://dig.watch/event/india-ai-impact-summit-2026/building-inclusive-societies-with-ai — I think actually his productivity is quite high. The problem is his realizations are not that high. What he’s able to re…
S32
tABle of Contents — These productivity gains benefit the entire economy. Investment in information and communications technologies accounted…
S33
AI as critical infrastructure for continuity in public services — Inclusivity of all affected stakeholders creates legitimacy and trust. Transparency, public comment periods and accounta…
S34
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innova…
S35
Conversation: 01 — This articulates a fundamentally different regulatory philosophy – starting with adoption and gradually adding restricti…
S36
Discussion Summary: US AI Governance Strategy Under the Trump Administration — Regarding US-China competition, Ball emphasized that America should win through superior adoption and development of AI …
S37
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S38
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — Legal and regulatory | Economic Online moderator 1 seeks identification of exemplary AI policies that successfully bala…
S39
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S40
Press Conference: The Future of Global Fintech — Another area of concern is the coordination among regulators. The analysis reveals that 27% of fintechs rated the coordi…
S41
A Fintech future for all? (SOMO) — Another challenge highlighted in the analysis is the aggressive data gathering by fintech companies for profit-making se…
S42
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Key to this trajectory are collaborat…
S43
Artificial intelligence — Inclusive finance Multilingualism
S44
How to make AI governance fit for purpose? — The Vice Minister highlighted this as a critical governance challenge, noting the urgency of coordinated development whi…
S45
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S46
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S47
The rise of AI in financial services: balancing opportunities and challenges — According to industry executives, AIis increasingly seenas a game-changer in the financial services sector, offering sig…
S48
Embracing the future of e-commerce and AI now (WEF) — In conclusion, the implementation of advanced technology, particularly AI, in Cambodia’s customs system brings numerous …
S49
European Central Bank advocates monitoring and regulation of AI in finance — The European Central Bank(ECB) has issued a call for increased vigilance and potential regulation regarding the use of A…
S50
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Compute infrastructure and research talent shortages present bigger obstacles than regulatory constraints Data residenc…
S51
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Examples include network slicing implementation, software-defined networks, and qualities of service like reliability, r…
S52
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — Effective regulation requires acknowledging uncertainty about future technological developments while maintaining framew…
S53
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort: thank you Isadora yeah and thanks for giving me the opportunity to say a few things I there’s a little bit …
S54
FOREWORD — Sweden has taken a different approach and has created a virtual embassy, the Second House of Sweden, in Second Life. But…
S55
Open Forum #60 Cooperating for Digital Resilience and Prosperity — Development | Cybersecurity Effectiveness of existing versus new frameworks Luca expresses frustration that many excel…
S56
Diplomatic policy analysis — Digital divides:Not all countries have equal access to advanced analytical tools, perpetuating inequalities in diplomati…
S57
Report outlines risks and benefits of AI for financial institutions — The Financial Stability Board (FSB), an international institution that makes recommendations concerning the global finan…
S58
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to …
S59
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Different sectors show varying risk tolerance levels, with Ekudden noting that enterprise risk assessment has become “qu…
S60
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S61
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — It is human in the lead, not human in the loop. Well, I think Julie talked about must-wins for Visa. Agentic commerce i…
S62
Agentic AI in Focus Opportunities Risks and Governance — -Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety m…
S63
Building the Next Wave of AI_ Responsible Frameworks & Standards — “human in the loop is a first class feature not a failure point … design the system … transition … to a human”[79]…
S64
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Human-in-the-loop governance is essential – accountability cannot be outsourced to algorithms
S65
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — By fostering interaction, investigation, and the exchange of ideas, sandboxes serve as a stepping stone towards implemen…
S66
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — In conclusion, sandboxes are valuable tools for testing and implementing regulatory policies. The Brazil case highlights…
S67
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — “It’s the legitimacy that is the scarce attribute here.”[5]”It’s actually the business model.”[1]”Because in finance, tr…
S68
Secure Finance Risk-Based AI Policy for the Banking Sector — Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to …
S69
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innova…
S70
Conversation: 01 — This articulates a fundamentally different regulatory philosophy – starting with adoption and gradually adding restricti…
S71
Generative AI: Steam Engine of the Fourth Industrial Revolution? — In terms of regulating technology, it is suggested that focus should be placed on regulating use cases rather than the t…
S72
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S73
Agentic AI in Focus Opportunities Risks and Governance — Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have be…
S74
Banks and insurers pivot to AI agents at scale, Capgemini finds — Agentic AI is expected todeliver up to $450 billion in valueby 2028, as financial institutions shift frontline processes…
S75
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S76
A Fintech future for all? (SOMO) — Another challenge highlighted in the analysis is the aggressive data gathering by fintech companies for profit-making se…
S77
Press Conference: The Future of Global Fintech — Another area of concern is the coordination among regulators. The analysis reveals that 27% of fintechs rated the coordi…
S78
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — For example,sustainable financemodels, such as green bonds and ESG-linked financial products, are expected to grow signi…
S79
WS #462 Bridging the Compute Divide a Global Alliance for AI — Ivy Lau-Schindewolf: Sure. Yeah, it’s kind of hard to go after, you know, Elena. And that was a very, very good point an…
S80
World Economic Forum 2025 at Davos — During the Davos 2025 discussions, the topic of governance mechanisms for AI, including monitoring, reporting, verificat…
S81
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — The shift in conversation has coincided with advancements in Artificial Intelligence
S82
Technology Rewiring Global Finance: A Panel Discussion Summary — Forbes opened by identifying a disconnect at Davos: whilst some conversations focused on concerning geopolitical and mac…
S83
Shaping the Future AI Strategies for Jobs and Economic Development — The emphasis on collaboration over displacement provides a framework for managing workforce transitions while capturing …
S84
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S85
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S86
https://dig.watch/event/india-ai-impact-summit-2026/ai-collaboration-across-borders_-india-israel-innovation-roundtable — But just to make my point. So my first question to you is like very, very, and the foundational level is that is like, i…
S87
RBI highlights risks of AI in banking and private credit markets — The increasing use of AI and machine learning in financial services globally could lead to financial stabilityrisks, acc…
S88
Session — Taking into account technological evolutions like artificial intelligence and immersive virtual environments such as the…
S89
Global Enterprises Show How to Scale Responsible AI — Technology regulation should focus on technical standards agreed upon by technologists rather than geography-specific ru…
S90
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances commu…
S91
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic Regulation and innovation must work together, not in opposition Regulation vs Innovati…
S92
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — ## Lifecycle Management and User Behavior Matthias Hudobnik: Thanks, Maartin. Hello, everyone. Yeah, it’s a pleasure to…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
John Tass-Parker
1 argument, 122 words per minute, 407 words, 199 seconds
Argument 1
Institutional AI focus
EXPLANATION
John argues that the finance sector is moving from a frontier AI era to an institutional AI era, where the key challenge is not model capability but legitimacy and trust. He emphasizes that trust is the business model for financial institutions and will determine which AI systems are adopted at scale.
EVIDENCE
He explains that while AI breakthroughs are often highlighted, finance needs to focus on legitimacy, auditability, resilience, and governance to earn trust, noting that institutions only adopt systems they trust and that boards and regulators can only scale what they can govern and supervise [4-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on trust, auditability and governance matches calls for auditability and principle-based regulation in the financial sector [S24] and the principle-based regulatory approach highlighted in [S27]; the context-aware, India-first governance model also aligns with the need for legitimacy and trust [S25].
MAJOR DISCUSSION POINT
Institutional AI focus
AGREED WITH
Suvendu K. Pati, Terah Lyons
Bharat
1 argument, 142 words per minute, 869 words, 366 seconds
Argument 1
Call for global‑south focus and regulatory‑industry partnership
EXPLANATION
Bharat frames the discussion by urging a focus on how AI can be responsibly adopted in the global south, especially through collaboration between regulators and industry. He highlights the need for partnerships that enable safe AI deployment in finance across emerging economies.
EVIDENCE
He opens the session by noting the importance of the financial sector’s regulation in India and asks the regulator about India’s AI approach, then later calls for a global-south perspective and regulatory-industry cooperation [21-27][96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A strengthened role for the Global South in AI decision-making is advocated in the summit call [S16]; multistakeholder regulation perspectives and concerns about cooperation are discussed in [S17]; bridging digital divides through knowledge sharing is highlighted in [S22]; and the Global South finance AI adoption insights directly echo this focus [S1].
MAJOR DISCUSSION POINT
Call for global‑south focus and regulatory‑industry partnership
AGREED WITH
Suvendu K. Pati, John Tass-Parker
Harshil Mathur
4 arguments, 216 words per minute, 2189 words, 606 seconds
Argument 1
Large‑scale data processing for underwriting, risk, fraud
EXPLANATION
Harshil points out that finance deals with massive data volumes that are difficult for humans to process, and AI dramatically speeds up analysis, enabling better underwriting, risk management, and fraud detection.
EVIDENCE
He describes how AI can handle large data sets far more efficiently than manual methods, allowing 1000× more analysis and improving underwriting, risk management, and fraud detection tasks [134-142].
MAJOR DISCUSSION POINT
Large‑scale data processing for underwriting, risk, fraud
AGREED WITH
Ashutosh Sharma, Harshil Mathur, John Tass-Parker
Argument 2
Voice‑first, multilingual “agentic commerce” to unlock mass market
EXPLANATION
Harshil argues that India’s payment landscape can be transformed by agentic, voice‑first, multilingual commerce, which would make online shopping accessible to the majority of consumers who prefer conversational interactions.
EVIDENCE
He explains that only 10 million of 300-400 million UPI users conduct most commerce, and that building conversational, voice-first experiences can bridge the gap for the billions who still shop offline, citing examples from travel and insurance where people rely on agents [143-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of voice-first, multilingual, agentic commerce in India is documented in the Global South finance report [S1] and the shift from recommendation to action in agentic AI is described in [S26].
MAJOR DISCUSSION POINT
Voice‑first, multilingual “agentic commerce” to unlock mass market
AGREED WITH
John Tass-Parker, Ashutosh Sharma, Suvendu K. Pati, Bharat
DISAGREED WITH
Ashutosh Sharma
Argument 3
Conversational banking as a bridge for low‑digital‑savvy users
EXPLANATION
Harshil emphasizes that Indian consumers prefer talking to a person rather than using app‑based interfaces, so conversational banking is essential to increase adoption among low‑digital‑savvy users.
EVIDENCE
He notes that Indian shopping habits differ from Western ones, describing how people rely on personal brokers and agents, and that current apps are “American-style” supermarkets that lack the conversational element needed for broader adoption [149-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for conversational interfaces for low-digital-savvy users is supported by low-income conversational AI use cases [S21] and the same Global South finance insights on conversational commerce [S1].
MAJOR DISCUSSION POINT
Conversational banking as a bridge for low‑digital‑savvy users
Argument 4
Data residency, compute infrastructure, and model hallucination risks
EXPLANATION
Harshil outlines three major technical challenges: strict Indian data‑residency rules, limited access to cutting‑edge compute infrastructure, and the risk of large language models hallucinating or providing incorrect outputs.
EVIDENCE
He details how foreign AI models often violate data-residency requirements, the scarcity of high-performance models in Indian data centres, and the difficulty of guaranteeing that LLMs do not produce false answers, which could create liability for financial firms [300-329].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks of model hallucination and the need for governance are highlighted in AI governance fit-for-purpose analysis [S23] and auditability concerns [S24]; data-protection impact assessment requirements for high-risk processing are outlined in [S18] and [S19]; data residency challenges are mentioned in the Global South finance report [S1].
MAJOR DISCUSSION POINT
Data residency, compute infrastructure, and model hallucination risks
AGREED WITH
Suvendu K. Pati, John Tass-Parker
DISAGREED WITH
Suvendu K. Pati
Terah Lyons
3 arguments, 171 words per minute, 796 words, 277 seconds
Argument 1
Risk‑aware governance and auditability
EXPLANATION
Terah stresses that the financial sector’s risk‑aware governance framework, built on principles‑based regulation, enables trustworthy AI deployment through auditability and oversight.
EVIDENCE
She highlights that regulators’ principles-based, technology-neutral approach allows banks to experiment while managing proportional risk, and that this governance supports auditability and transparency of AI systems [82-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for risk-aware governance and auditability aligns with the auditability and governance recommendations in [S24] and the principle-based regulatory framework discussed in [S27] and [S28].
MAJOR DISCUSSION POINT
Risk‑aware governance and auditability
AGREED WITH
Ashutosh Sharma, Suvendu K. Pati, John Tass-Parker
Argument 2
Importance of principles‑based regulation for experimentation
EXPLANATION
Terah notes that the principle‑based, tech‑neutral regulatory stance in finance encourages experimentation with AI while keeping risk proportional to use‑case importance.
EVIDENCE
She references the regulator’s safety and consumer-protection principles that apply regardless of technology, enabling banks to test AI under a proportionate risk framework [82-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on principle-based, technology-neutral regulation matches the discussions of principle-based regulatory approaches in [S27] and [S28].
MAJOR DISCUSSION POINT
Importance of principles‑based regulation for experimentation
AGREED WITH
Suvendu K. Pati
Argument 3
Fraud detection, payments, markets, compliance
EXPLANATION
Terah lists the most impactful AI use cases at JPMorgan Chase, emphasizing fraud and scams remediation, payments, market applications, and compliance as priority areas.
EVIDENCE
She explicitly mentions fraud and scams remediation, exciting payment applications, market use cases, and compliance workloads as the key domains where AI adds value [78-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The focus on fraud detection and secure payments corresponds with the importance of retail payment security in the financial sector [S30] and the broader auditability requirements for AI systems [S24].
MAJOR DISCUSSION POINT
Fraud detection, payments, markets, compliance
Ashutosh Sharma
5 arguments, 138 words per minute, 1228 words, 530 seconds
Argument 1
Productivity gains and underwriting risk reduction
EXPLANATION
Ashutosh argues that AI can dramatically improve productivity in finance and enable better underwriting, especially for thin‑file borrowers, by leveraging unstructured data.
EVIDENCE
He cites India’s $2 trillion credit market, high OPEX spending, and explains that AI can boost productivity and allow rapid, cost-effective thickening of thin credit files, improving underwriting for large population segments [106-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Productivity gains from AI in finance are quantified in ICT-driven growth studies [S32]; AI’s role in fraud detection and risk management is also noted in payment security literature [S30]; the Global South finance insights reinforce the productivity narrative [S1].
MAJOR DISCUSSION POINT
Productivity gains and underwriting risk reduction
AGREED WITH
Harshil Mathur, John Tass-Parker
Argument 2
Unit‑economics boost and cost reduction
EXPLANATION
Ashutosh highlights that AI can lower operating expenses, thereby improving unit economics for financial firms and making them healthier.
EVIDENCE
He notes that Indian credit OPEX is 3-5 % of a $2 trillion market, amounting to $60-100 billion annually, and that AI-driven productivity is only the beginning of cost reduction efforts [106-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cost reduction and improved unit-economics from AI adoption are discussed in productivity impact analyses for the economy [S32].
MAJOR DISCUSSION POINT
Unit‑economics boost and cost reduction
AGREED WITH
Harshil Mathur, John Tass-Parker
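The OPEX figures quoted above can be sanity-checked directly. A minimal sketch, assuming the $2 trillion market size and the 3-5% OPEX range cited in the talk (the variable names are illustrative, not from the discussion):

```python
# Sanity-check of the OPEX estimate cited in the evidence above.
market_size = 2_000_000_000_000   # $2 trillion Indian credit market (as quoted)
opex_low, opex_high = 0.03, 0.05  # 3-5% of the market spent on OPEX (as quoted)

low = market_size * opex_low      # lower bound of annual OPEX
high = market_size * opex_high    # upper bound of annual OPEX
print(f"Annual OPEX: ${low / 1e9:.0f}B - ${high / 1e9:.0f}B")  # prints "Annual OPEX: $60B - $100B"
```

The result matches the $60-100 billion annual figure reported in the evidence.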
Argument 3
Solving thin‑file credit challenges to expand inclusion
EXPLANATION
Ashutosh explains that AI’s ability to process unstructured data can convert thin credit files into thick ones, enabling underwriting for large unbanked populations.
EVIDENCE
He describes the “thin-file” problem in India and how AI can quickly and cheaply enrich data to create robust credit profiles for previously underserved borrowers [115-118].
MAJOR DISCUSSION POINT
Solving thin‑file credit challenges to expand inclusion
AGREED WITH
John Tass-Parker, Harshil Mathur, Suvendu K. Pati, Bharat
Argument 4
Conversational interfaces to reach unbanked/underbanked
EXPLANATION
Ashutosh envisions voice‑enabled conversational apps that let users interact with financial services without navigating complex forms, thereby extending reach to low‑digital‑savvy users.
EVIDENCE
He paints a scenario where users can simply speak to an app to obtain financial products, turning a complex, multi-question onboarding process into a conversational experience [124-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voice-enabled conversational apps for unbanked users are highlighted in the Global South finance report [S1] and in low-income conversational AI case studies [S21].
MAJOR DISCUSSION POINT
Conversational interfaces to reach unbanked/underbanked
AGREED WITH
John Tass-Parker, Harshil Mathur, Suvendu K. Pati, Bharat
Argument 5
Need for human‑in‑the‑loop and data‑privacy safeguards
EXPLANATION
Ashutosh stresses that, despite AI’s capabilities, human oversight remains essential and that fintechs must adhere to data‑privacy regulations such as India’s DPDP framework.
EVIDENCE
He mentions best practices like keeping a human in the loop for final decisions and ensuring compliance with data-privacy guardrails throughout AI development and deployment [127-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Consumer-centric safeguards, transparent disclosure and human oversight are advocated in [S29]; data-protection impact assessment guidance for high-risk processing is provided in [S18] and [S19]; auditability and governance concerns are also raised in [S24].
MAJOR DISCUSSION POINT
Need for human‑in‑the‑loop and data‑privacy safeguards
Suvendu K. Pati
5 arguments · 149 words per minute · 2325 words · 933 seconds
Argument 1
Deployers as custodians of trust, need for glass‑box transparency
EXPLANATION
Suvendu asserts that regulated entities, not model developers, must ensure AI systems are transparent (“glass‑box”) so customers know when they are interacting with AI and can opt out if desired.
EVIDENCE
He explains that while AI can be a black box, regulated entities should provide clear disclosure to customers and maintain transparency, accountability, and audit mechanisms for AI-driven services [181-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The requirement for transparent, glass-box AI systems and clear disclosure aligns with auditability and governance recommendations in [S24] and consumer-centric transparency guidelines in [S29].
MAJOR DISCUSSION POINT
Deployers as custodians of trust, need for glass‑box transparency
AGREED WITH
Harshil Mathur, John Tass-Parker
Argument 2
Enablement‑first, tech‑neutral principles
EXPLANATION
Suvendu describes RBI’s approach as technology‑neutral, focusing on enabling innovation while safeguarding consumer protection and existing IT‑outsourcing guidelines.
EVIDENCE
He notes that RBI’s policy is tech-agnostic, emphasizing safety and consumer protection irrespective of the underlying technology, and that existing guidelines already cover many AI-related risks [38-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RBI’s tech-neutral, enablement-first approach is described in the policy analysis [S25] and reinforced by principle-based regulatory discussions in [S27] and [S28].
MAJOR DISCUSSION POINT
Enablement‑first, tech‑neutral principles
DISAGREED WITH
Harshil Mathur
Argument 3
Seven “sutras” adopted as national AI policy
EXPLANATION
Suvendu reports that RBI’s seven AI principles (“sutras”) have been formally adopted by the Indian government for cross‑sector implementation, providing a generic yet accepted framework.
EVIDENCE
He states that the seven principles were included in the RBI report and have been adopted by the Government of India for implementation across sectors [56-59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The adoption of AI principles (the “sutras”) across sectors is documented in the India-first, context-aware governance report [S25].
MAJOR DISCUSSION POINT
Seven “sutras” adopted as national AI policy
Argument 4
Ongoing industry engagement and AI sandbox initiative
EXPLANATION
Suvendu outlines RBI’s continuous engagement with fintechs through monthly forums, surveys, and the development of an AI sandbox that offers data and compute resources to smaller players.
EVIDENCE
He details monthly FinQuery/Finteract events, a survey of 600 entities, three rounds of stakeholder consultations, and plans to operationalise an AI sandbox providing data and compute access, as well as building models like MuleHunter.ai for banks [280-294].
MAJOR DISCUSSION POINT
Ongoing industry engagement and AI sandbox initiative
AGREED WITH
Bharat, John Tass-Parker
Argument 5
Regulatory clarity on liability and accountability
EXPLANATION
Suvendu emphasizes that responsibility for AI outcomes lies with the deploying financial institution, requiring clear liability and accountability frameworks and internal audit processes.
EVIDENCE
He explains that the regulator expects the model deployer (the regulated entity) to be accountable, and that institutions must develop liability, accountability, and audit mechanisms to capture AI-related risks [48-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for clear liability, accountability and audit mechanisms echo the auditability and governance concerns in [S24] and the consumer-centric safeguards on transparent disclosure and accountability in [S29].
MAJOR DISCUSSION POINT
Regulatory clarity on liability and accountability
AGREED WITH
Bharat, John Tass-Parker
Agreements
Agreement Points
Trust, legitimacy and auditability are essential for AI adoption in finance
Speakers: John Tass-Parker, Suvendu K. Pati, Terah Lyons
Institutional AI focus · Deployers as custodians of trust, need for glass-box transparency · Risk-aware governance and auditability
All three speakers stress that beyond model performance, finance needs trustworthy, auditable AI systems; John highlights legitimacy as the scarce attribute [4-19], Suvendu calls for a glass-box approach and clear disclosure to customers [181-188], and Terah points to risk-aware, principle-based governance that ensures auditability [82-86].
POLICY CONTEXT (KNOWLEDGE BASE)
Consumer-centric safeguards such as transparent disclosure and auditability are highlighted as essential to maintain public trust in AI-driven finance, echoing emerging governance guidelines [S46] and the European Central Bank’s call for risk mitigation to protect market stability [S49].
Principle‑based, technology‑neutral regulatory frameworks enable responsible AI innovation
Speakers: Suvendu K. Pati, Terah Lyons
Enablement-first, tech-neutral principles · Importance of principles‑based regulation for experimentation
Both speakers note that a tech-agnostic, principle-based stance lets regulators safeguard consumers while allowing banks to experiment; Suvendu describes RBI’s tech-neutral, consumer-protection focus [38-41] and Terah emphasizes how principle-based, proportionate risk management supports AI experimentation [82-86].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulators are urged to adopt flexible, technology-agnostic frameworks that can accommodate future advances, a stance articulated in recent digital-governance discussions [S52] and reflected in the ECB’s preference for principle-based oversight of AI in finance [S49].
AI can drive financial inclusion and reach underserved populations
Speakers: John Tass-Parker, Ashutosh Sharma, Harshil Mathur, Suvendu K. Pati, Bharat
Productivity gains and underwriting risk reduction · Conversational interfaces to reach unbanked/underbanked · Voice‑first, multilingual “agentic commerce” to unlock mass market · Solving thin‑file credit challenges to expand inclusion
The panel agrees AI will expand access: John links trusted AI to productivity for small businesses and the Global South [16-18]; Ashutosh describes AI thickening thin credit files and improving underwriting for the unbanked [115-118]; Harshil proposes voice-first, multilingual agentic commerce to bring billions online [143-166]; Suvendu envisions AI unlocking financial inclusion through alternate data and language support [340-342]; Bharat frames the discussion around a Global-South focus [96-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry analyses note AI’s potential to expand services to underserved clients and accelerate adoption in the Global South, despite infrastructure gaps, underscoring its role in financial inclusion [S47][S50].
Human‑in‑the‑loop and strong governance safeguards are required
Speakers: Ashutosh Sharma, Suvendu K. Pati, Terah Lyons, John Tass-Parker
Need for human‑in‑the‑loop and data privacy safeguards · Deployers as custodians of trust, need for glass‑box transparency · Risk‑aware governance and auditability
All agree that AI systems must be overseen by humans and governed rigorously: Ashutosh stresses keeping a human in the loop and respecting data-privacy guardrails [127-136]; Suvendu calls for glass-box transparency and accountability [181-188]; Terah highlights auditability and risk-aware governance [82-86]; John also mentions the need for reliability and auditability [10-13].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple governance frameworks place human oversight at the core of AI safety, emphasizing human-in-the-loop mechanisms as a primary safeguard for finance [S46][S64][S63].
AI improves productivity and reduces operating costs in finance
Speakers: Ashutosh Sharma, Harshil Mathur, John Tass-Parker
Productivity gains and underwriting risk reduction · Unit‑economics boost and cost reduction · Large‑scale data processing for underwriting, risk, fraud
The speakers concur that AI drives efficiency: Ashutosh cites OPEX savings and productivity gains in the $2 trillion credit market [106-112]; Harshil notes AI can analyse data 1000× faster, cutting costs in underwriting, risk and fraud detection [134-142]; John links trusted AI to productivity for small businesses and broader economies [16-18].
POLICY CONTEXT (KNOWLEDGE BASE)
Sector reports document efficiency gains and cost reductions from AI-driven automation across banking and financial services [S47][S57][S58].
Regulators and industry must collaborate through ongoing engagement and sandbox initiatives
Speakers: Suvendu K. Pati, Bharat, John Tass-Parker
Ongoing industry engagement and AI sandbox initiative · Regulatory clarity on liability and accountability · Call for global‑south focus and regulatory‑industry partnership
There is consensus on partnership: Suvendu details monthly FinQuery/Finteract forums, surveys and an upcoming AI sandbox to support fintechs [280-294]; Bharat explicitly asks for regulator-industry cooperation and a Global-South perspective [21-27][96-99]; John notes that boards and regulators can only scale what they can govern and supervise [8-9].
POLICY CONTEXT (KNOWLEDGE BASE)
Sandboxes are promoted as multi-stakeholder tools for testing regulatory approaches and fostering responsible innovation, with calls for continuous regulator-industry dialogue [S65][S66][S52].
Model reliability, drift and hallucination are critical risk areas that must be managed
Speakers: Harshil Mathur, Suvendu K. Pati, John Tass-Parker
Data residency, compute infrastructure, and model hallucination risks · Deployers as custodians of trust, need for glass‑box transparency
All three flag reliability concerns: Harshil warns that LLM hallucinations can cause liability even at 0.1 % error rates [317-329]; Suvendu mentions the need to monitor model drift, degradation and bias [190-193]; John stresses that institutions must demonstrate reliability and resilience to be rewarded [10-13].
POLICY CONTEXT (KNOWLEDGE BASE)
Hallucination risk is identified as a major trust issue, prompting regulators to require robust monitoring of model drift and reliability in financial AI systems [S45][S46][S57].
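Harshil's warning that "even 0.1% is unacceptable" becomes concrete once scaled against transaction volume. A back-of-the-envelope sketch (the daily decision volume below is a hypothetical assumption for illustration, not a figure from the talk):

```python
# Illustrative scale of a 0.1% hallucination rate in financial AI.
error_rate = 0.001            # 0.1% hallucination rate, as cited by Harshil
daily_decisions = 1_000_000   # hypothetical daily AI-driven decisions at a large fintech

expected_errors = error_rate * daily_decisions
print(f"Expected erroneous outputs per day: {expected_errors:.0f}")  # prints "Expected erroneous outputs per day: 1000"
```

At high volumes, a seemingly small error rate still yields hundreds or thousands of faulty outputs per day, which is why monitoring for drift and reliability is treated as a core liability concern.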
Similar Viewpoints
Both argue that a technology‑neutral, principle‑based regulatory stance enables innovation while protecting consumers [38-41][82-86].
Speakers: Suvendu K. Pati, Terah Lyons
Enablement-first, tech-neutral principles · Importance of principles‑based regulation for experimentation
Both see conversational, voice‑first AI as the key to bring financial services to the large, low‑digital‑savvy Indian population [124-130][143-166].
Speakers: Ashutosh Sharma, Harshil Mathur
Conversational interfaces to reach unbanked/underbanked · Voice‑first, multilingual “agentic commerce” to unlock mass market
Both highlight AI’s potential to boost productivity and transform finance, especially for small businesses and the broader economy [16-18][106-112].
Speakers: John Tass-Parker, Ashutosh Sharma
Institutional AI focus · Productivity gains and underwriting risk reduction
Both stress that transparency, accountability and model reliability (including hallucination risk) are essential for safe AI deployment in finance [317-329][181-188].
Speakers: Harshil Mathur, Suvendu K. Pati
Data residency, compute infrastructure, and model hallucination risks · Deployers as custodians of trust, need for glass‑box transparency
Unexpected Consensus
Regulators and fintechs both treat AI hallucination risk as a core liability concern
Speakers: Harshil Mathur, Suvendu K. Pati
Data residency, compute infrastructure, and model hallucination risks · Deployers as custodians of trust, need for glass‑box transparency
While regulators usually focus on consumer protection, Suvendu explicitly calls for monitoring model drift and degradation, aligning with Harshil’s detailed warning about LLM hallucinations and their liability impact [317-329][190-193]. This convergence of technical risk focus between regulator and industry was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Both policymakers and industry cite AI hallucinations as a liability exposure, reflected in governance discussions and risk-based policy drafts [S45][S46].
Overall Assessment

The panel shows strong convergence on the need for trustworthy, auditable AI governed by principle‑based, tech‑neutral regulation; on AI’s role in expanding financial inclusion through conversational and voice‑first interfaces; on productivity gains; and on the importance of regulator‑industry collaboration via sandboxes and ongoing engagement.

High consensus – the speakers from regulators, fintechs, and large banks largely agree on the same strategic priorities, suggesting that coordinated policy actions and industry initiatives are feasible and likely to accelerate responsible AI adoption in the Global South finance sector.

Differences
Different Viewpoints
Degree of automation versus human oversight in AI‑driven financial services
Speakers: Ashutosh Sharma, Harshil Mathur
Need for human‑in‑the‑loop and data privacy safeguards · Voice‑first, multilingual “agentic commerce” to unlock mass market
Ashutosh stresses that fintechs should keep a human in the loop and follow data-privacy guardrails before AI-generated decisions are final [127-136]. Harshil, by contrast, envisions AI agents replacing human staff – e.g., AI-led collection agents and voice-first conversational commerce that can serve villagers without any human mediation [269-273][354-362].
POLICY CONTEXT (KNOWLEDGE BASE)
The balance between autonomous systems and human oversight is debated, with panels highlighting human-in-the-loop versus human-on-the-loop models and concerns about fading agency in automated finance [S60][S61][S62].
Regulatory focus: tech‑neutral guidance for deployers versus strict data‑residency and infrastructure constraints
Speakers: Suvendu K. Pati, Harshil Mathur
Enablement‑first, tech‑neutral principles · Data residency, compute infrastructure, and model hallucination risks
Suvendu argues that RBI’s approach is technology-neutral, providing guidance to regulated entities (deployers) while leaving model developers outside its remit, and stresses a “glass-box” transparency model [171-178][184-188]. Harshil points to India’s stringent data-residency rules and the lack of cutting-edge compute in Indian data centres, which he says block the deployment of many foreign AI models and create practical regulatory bottlenecks [300-307][310-311].
POLICY CONTEXT (KNOWLEDGE BASE)
Tension exists between calls for technology-neutral frameworks and national data-residency requirements that limit deployment, especially in emerging markets facing compute constraints [S50][S52].
Acceptable level of AI error (risk tolerance) in financial applications
Speakers: Suvendu K. Pati, Harshil Mathur
All said and done, this is a probabilistic technology · Data residency, compute infrastructure, and model hallucination risks
Suvendu acknowledges that AI is probabilistic and calls for a tolerant, differentiated regulatory approach that accepts occasional mistakes [66-68]. Harshil counters that even a 0.1 % hallucination rate is unacceptable for financial services and that models must be virtually error-free before deployment [327-329].
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-tolerance thresholds vary across sectors; regulators are working to define acceptable error margins for AI in finance, as discussed in risk-assessment literature [S59][S57].
Unexpected Differences
Voice‑first agentic commerce versus human‑in‑the‑loop best practice
Speakers: Harshil Mathur, Ashutosh Sharma
Voice‑first, multilingual “agentic commerce” to unlock mass market · Need for human‑in‑the‑loop and data privacy safeguards
Harshil argues that a voice-first, conversational AI layer will replace the need for human agents and unlock the majority of Indian consumers who currently shop offline [143-166]. Ashutosh, while also promoting conversational interfaces, insists that a human must ultimately validate AI-generated decisions and that data-privacy guardrails are mandatory [127-136]. The contrast between a fully automated agentic vision and a human-oversight-centric approach was not anticipated given the overall consensus on trust.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on agentic AI in e-commerce stress the need for safeguards despite a push for voice-first autonomous transactions, highlighting the human-in-the-loop versus agentic debate [S61][S62][S60].
Perceived sufficiency of policy versus practical compute constraints
Speakers: Suvendu K. Pati, Ashutosh Sharma
Seven “sutras” adopted as national AI policy · Call for global‑south focus and regulatory‑industry partnership
Suvendu highlights that RBI’s seven AI principles have been formally adopted by the Indian government, suggesting a robust policy foundation [56-59]. Ashutosh, however, points out that for fintechs the biggest hurdle is access to compute infrastructure rather than regulation, indicating that policy adoption has not yet translated into practical capability for many players [333-334]. This gap between policy confidence and on-the-ground resource constraints was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Practitioners report that compute and talent shortages pose larger barriers than existing policy frameworks, echoing observations from Global South AI adoption studies and critiques of policy-practice gaps [S50][S55].
Overall Assessment

The panel largely agrees that trust, governance and regulator‑industry collaboration are essential for AI in finance. However, clear fault lines emerge around (i) the extent of automation versus human oversight, (ii) how strictly data‑residency and infrastructure constraints should shape regulation, and (iii) the acceptable level of AI error. These disagreements reflect a tension between a rapid, innovation‑first agenda and a risk‑averse, compliance‑driven stance.

Moderate to high – while the shared goal of trustworthy AI is strong, divergent views on risk tolerance, regulatory scope and automation depth could slow consensus on concrete policy measures, potentially leading to fragmented implementation across the Global South.

Partial Agreements
All speakers concur that legitimacy, auditability and trustworthy governance are the core prerequisites for AI adoption in finance, even though they differ on implementation details. John frames trust as the business model of finance [4-19]; Suvendu stresses that regulated entities must provide glass‑box transparency and accountability [181-188]; Terah highlights the importance of principle‑based, risk‑aware governance that enables auditability [82-86]; Ashutosh adds that human oversight and data‑privacy safeguards are essential best practices [127-136].
Speakers: John Tass‑Parker, Suvendu K. Pati, Terah Lyons, Ashutosh Sharma
Institutional AI focus · Deployers as custodians of trust, need for glass‑box transparency · Risk‑aware governance and auditability · Need for human‑in‑the‑loop and data privacy safeguards
There is shared agreement that close collaboration between regulators and industry is vital for responsible AI rollout. Bharat frames the need for global‑south‑focused partnership [21-27][96-99]; Suvendu describes RBI’s regular FinQuery/Finteract forums, surveys and the planned AI sandbox to foster industry dialogue [280-294]; Harshil notes that his firm works with regulators to address data‑residency and compliance challenges [300-303].
Speakers: Bharat, Suvendu K. Pati, Harshil Mathur
Call for global‑south focus and regulatory‑industry partnership · Ongoing industry engagement and AI sandbox initiative · Regulatory challenges and engagement with government
Takeaways
Key takeaways
In finance, AI success hinges on trust, legitimacy and governance rather than raw capability; institutions act as custodians of that trust.
RBI’s approach is principle‑based and technology‑neutral, focusing on enabling innovation while managing risk through seven “sutras” now adopted nationally.
RBI will deepen industry engagement via regular FinQuery/Finteract sessions and an AI sandbox that provides data and compute resources to smaller fintechs.
Key AI use cases in finance include fraud detection, payments, market analytics, compliance, underwriting of thin‑file customers, and productivity gains.
Conversational, voice‑first and multilingual “agentic commerce” is seen as a major lever to reach the unbanked and under‑banked in India.
Challenges identified are data‑residency requirements, limited affordable compute, model hallucinations, and the need for human‑in‑the‑loop and privacy safeguards.
Regulators expect a “glass‑box” transparency model where AI deployment is accountable to the institution, not the model developer.
Financial institutions can export their risk‑management and model‑risk‑governance practices as templates for broader AI oversight.
Resolutions and action items
RBI to operationalize an AI sandbox that offers access to data sets and compute infrastructure for fintechs and smaller entities.
RBI to continue monthly FinQuery/Finteract engagements and conduct periodic surveys to monitor AI adoption and challenges.
Regulated entities (banks, NBFCs, fintechs) to embed board‑level AI governance policies, audit frameworks, and glass‑box transparency disclosures.
Fintechs to adopt human‑in‑the‑loop designs and ensure compliance with data‑privacy (DPDP) guardrails.
Industry bodies (self‑regulatory organisations) to develop toolkits, benchmarking services and standards for bias‑free, transparent AI models.
JPMorgan Chase to leverage its existing risk‑management culture to scale trusted AI deployments and share best practices with the sector.
Unresolved issues
How to reliably mitigate LLM hallucinations and ensure zero‑tolerance for incorrect outputs in high‑risk financial decisions.
Clarification of liability and accountability when AI models produce erroneous results, especially for third‑party model developers.
Scalable solutions for data residency and compute constraints that prevent use of cutting‑edge foreign models in India.
Defining concrete metrics and standards for AI explainability and auditability that satisfy both regulators and innovators.
Pathways to rapidly build deep, multi‑cyclical credit data in India to match the richness of western datasets for advanced underwriting.
Suggested compromises
RBI’s principle‑based, tech‑neutral framework, which encourages innovation while nudging firms toward responsible AI (innovation vs restraint).
Allowing experimentation within a regulated sandbox environment rather than imposing prescriptive rules upfront.
Requiring glass‑box transparency and human‑in‑the‑loop controls as a middle ground between full automation and manual processes.
Balancing data‑residency requirements with the development of domestic AI models and compute resources to reduce reliance on foreign LLMs.
Thought Provoking Comments
We’re moving from an era of frontier AI to an era of institutional AI – the hard problem is not capability but legitimacy and trust. In finance, trust is the business model, and institutions will only absorb systems they can trust, audit, and govern.
Sets a paradigm shift from focusing on model performance to emphasizing legitimacy, framing the entire discussion around trust as the scarce resource in financial AI adoption.
Established the central theme, prompting all subsequent speakers to address trust, governance, and regulatory frameworks rather than just technical breakthroughs.
Speaker: John Tass-Parker
Our approach is to enable responsible adoption of AI – we are tech‑neutral, focus on innovation versus restraint, and place the responsibility on the model deployers (the regulated entities) rather than the developers.
Introduces a nuanced regulatory stance that balances encouragement of innovation with risk mitigation, and clarifies the locus of accountability.
Shifted the conversation toward practical governance, leading panelists to discuss audit frameworks, liability, and the need for ‘glass‑box’ transparency.
Speaker: Suvendu K. Pati
The principles‑based, technology‑neutral regulatory approach has allowed banks to experiment widely while managing proportional risk for each use case.
Highlights how a flexible regulatory philosophy can foster innovation without stifling it, reinforcing the regulator’s earlier point and providing a concrete example from JPMorgan.
Validated the regulator’s stance, encouraging other participants to discuss how similar frameworks can be exported globally and applied to AI governance.
Speaker: Terah Lyons
AI can turn thin‑file credit data into thick files by leveraging unstructured data, dramatically improving underwriting for the large unformalized segment of India’s economy.
Identifies a specific, high‑impact application of AI that addresses a core financial inclusion challenge in India, linking technology to socioeconomic outcomes.
Steered the discussion toward concrete use‑cases in credit risk, prompting further dialogue on unit economics and the strategic value of AI for fintechs.
Speaker: Ashutosh Sharma
Agentic commerce – voice‑first, multilingual, conversational interfaces – is the next wave that will unlock online commerce for the 300‑400 million Indian UPI users who currently don’t shop online.
Introduces a transformative vision for payments that goes beyond traditional UI/UX, tying AI to mass adoption and cultural buying habits.
Shifted the focus from backend data processing to customer‑facing experiences, leading to discussions on accessibility, language diversity, and the role of AI in bridging the digital divide.
Speaker: Harshil Mathur
We need a ‘glass‑box’ approach: customers must be told they are interacting with an AI system and should have the option to opt for a non‑AI interaction; institutions must embed auditability, bias mitigation, and model‑drift monitoring into board policies.
Moves the abstract notion of trust into concrete operational requirements, emphasizing transparency and continuous governance.
Prompted panelists to discuss practical implementation steps such as human‑in‑the‑loop, audit frameworks, and the challenges of LLM hallucinations.
Speaker: Suvendu K. Pati
Regulatory sandbox and the upcoming AI sandbox will democratize access to data and compute for smaller fintechs, enabling them to innovate without prohibitive resource constraints.
Proposes a concrete mechanism to level the playing field, addressing a major barrier for fintech innovation in the Global South.
Generated interest from other participants about infrastructure challenges and led to Harshil’s remarks on data residency and compute availability.
Speaker: Suvendu K. Pati
AI can act as a personal financial assistant that reduces fraud and mis‑selling for everyday consumers – even my 70‑year‑old father could get real‑time advice before making a purchase.
Personalizes the broader societal benefit of AI, illustrating a tangible consumer‑level impact beyond institutional efficiency.
Humanized the discussion, reinforcing the earlier points about trust and prompting further reflection on user‑centric design and risk of hallucinations.
Speaker: Harshil Mathur
Financial inclusion can be accelerated by AI‑driven language and voice‑based banking, making services accessible to illiterate or differently‑abled users and bridging the digital divide.
Links AI capabilities directly to inclusive policy goals, expanding the conversation from technical adoption to societal transformation.
Served as a concluding thematic thread, influencing the final “big bet” round where multiple panelists echoed the vision of AI‑enabled inclusive finance.
Speaker: Suvendu K. Pati (later reiterated)
My bet: AI‑led financial services will create a ‘Viksit Bharat’ – a fully AI‑driven financial ecosystem that reaches every citizen.
Summarizes the aspirational potential of AI for the nation, tying together earlier points on inclusion, language, and accessibility.
Provided a rallying statement that encapsulated the discussion’s optimism, reinforcing the forward‑looking tone of the closing remarks.
Speaker: Ashutosh Sharma
Overall Assessment

The discussion was anchored by John Tass‑Parker’s framing of legitimacy over capability, which set a trust‑centric agenda. The regulator’s tech‑neutral, innovation‑first stance (Suvendu) and its concrete proposals (glass‑box transparency, AI sandbox) acted as pivotal turning points, steering the conversation from abstract policy to actionable governance. Panelists then layered depth by presenting high‑impact use‑cases—credit underwriting for thin‑file borrowers, agentic commerce for mass‑market payments, and personal AI assistants—to illustrate how trust translates into real‑world value. Each of these insights sparked new sub‑threads (risk management, infrastructure constraints, inclusion) and collectively shaped a narrative that moved from regulatory philosophy to concrete pathways for AI‑driven financial inclusion in the Global South.

Follow-up Questions
How can regulators ensure AI model transparency to customers (glass‑box approach) and what mechanisms are needed?
Suvendu emphasized the need for customers to know they are interacting with AI and to have the option for non‑AI engagement, highlighting a gap in current practice.
Speaker: Suvendu K. Pati
What frameworks are required to audit AI model drift, bias, and degradation over time within financial institutions?
He mentioned the importance of periodic checks on model performance and incremental risks, indicating a need for systematic audit processes.
Speaker: Suvendu K. Pati
How effective will the proposed AI sandbox be in democratizing access to data and compute resources for smaller fintechs?
Suvendu described plans for an AI sandbox to address compute and data access constraints, but its impact remains to be evaluated.
Speaker: Suvendu K. Pati
What standards, toolkits, or benchmarking services should industry bodies develop to assess AI model bias, transparency, and compliance?
He called on self‑regulatory organizations to create such tools, suggesting a research gap in practical implementation.
Speaker: Suvendu K. Pati
How can data residency requirements be reconciled with the deployment of cutting‑edge large language models that are hosted abroad?
Harshil highlighted regulatory data‑locality rules that block use of foreign LLMs, indicating a need for solutions or policy adjustments.
Speaker: Harshil Mathur
What techniques can mitigate hallucinations in LLMs to meet the financial sector’s low tolerance for erroneous outputs?
He expressed concern that even a 0.1% hallucination rate is unacceptable for finance, pointing to a research need for more reliable models or guardrails.
Speaker: Harshil Mathur
What AI‑driven methods can improve underwriting for thin‑file customers in India?
Ashutosh noted AI’s ability to use unstructured data to thicken thin files, but practical approaches and validation remain open questions.
Speaker: Ashutosh Sharma
How can AI enable language‑ and voice‑based conversational banking for illiterate, low‑literacy, or differently‑abled users?
He stressed the potential of assistive AI to bridge the digital divide, requiring research into inclusive interfaces.
Speaker: Suvendu K. Pati
How can lessons from financial sector AI governance be transferred to other industries?
Tara suggested that the sector’s strong risk‑management practices could be exported, but concrete pathways need exploration.
Speaker: Tara Lyons
How can the principle of ‘innovation versus restraint’ be operationalized in real‑world AI deployments?
He cited this principle as a regulatory nudge, yet practical implementation guidelines are lacking.
Speaker: Suvendu K. Pati
What metrics should be used to evaluate AI’s impact on fraud reduction and mis‑selling in finance?
Harshil discussed AI’s potential to curb fraud and mis‑selling but did not specify measurement frameworks.
Speaker: Harshil Mathur
Beyond regulatory sandboxes, how can compute and data access constraints for fintechs be addressed?
He identified compute scarcity as a major hurdle for fintech innovation, indicating a need for broader infrastructure solutions.
Speaker: Ashutosh Sharma
What public‑private partnership models could accelerate AI adoption in the financial sector?
He asked whether such collaborations could help overcome regulatory and technical challenges, suggesting a research avenue.
Speaker: Harshil Mathur
How can AI‑driven personalized ‘N of 1’ services be scaled cost‑effectively for rural and low‑income populations?
He highlighted the potential of AI to lower service costs and personalize experiences, but scalability and affordability need study.
Speaker: Harshil Mathur
How should accountability and liability be structured for AI models deployed by regulated entities?
He noted that responsibility rests with model deployers, raising questions about legal and governance frameworks.
Speaker: Suvendu K. Pati

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Multistakeholder Partnerships for Thriving AI Ecosystems

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how artificial intelligence can accelerate sustainable development but must be deployed responsibly to avoid deepening existing inequities [1-5]. UNDP warned that AI’s rapid evolution is currently inequitable and, without firm responsible-use commitments, the AI equity gap could worsen [3-5]. To confront this, the Hamburg Sustainability Conference introduced the Hamburg Declaration on AI for Sustainable Development, establishing principles and seeking concrete commitments from governments, businesses and civil society [10-12].


Bärbel Kofler emphasized that governments need legal frameworks and global coordination so AI benefits all citizens, highlighting a “power gap” evident in the concentration of venture capital (≈17 % in the Global South) and data-center capacity (≈0.1 % in the Global South) [38-44][49-58][61-63]. She called for investment in open-source tools, vocational training and university curricula to bring small- and medium-sized enterprises into AI development [75-80]. Arundhati Bhattacharya argued that technology must be democratized, describing Salesforce’s 1 % profit, product and time pledge and its large-scale skilling programme that has created 3.9 million certified “Trailblazers” in India [92-99][100-103]. She cited India’s financial-inclusion journey, built on UIDAI’s biometric IDs and the UPI system, as an example of AI-enabled digital tools that channel subsidies directly to citizens and lower borrowing costs [108-118][119-126].


Nakul Jain positioned Wadhwani AI Global as a convener that bridges technology providers with governments, pointing to successful reading-fluency pilots in Gujarat schools and tuberculosis-focused health projects that required early-stage policy and evaluation partnerships [146-152][155-168][170-176]. Under the Hamburg Declaration, Germany’s ministry reported exceeding its target of training 160 000 people (reaching 190 000), releasing 12 AI building blocks (now 15), delivering 30 AI diagnostics (now 55) and supporting projects in Kenya, Cambodia and India on satellite data, cervical-cancer detection and multilingual datasets [236-244][245-254]. Tata Consultancy Services added that India’s responsible-AI task force has spawned AI Centres of Excellence, a “Trusty Platform” for responsible-AI evaluation, and initiatives to green AI through resource-aware computing and a quantum-valley partnership [199-207][210-218][270-274].


Looking ahead, panelists identified gaps such as a global repository or marketplace for AI solutions and regional evaluation hubs, while urging private firms to adopt self-regulation and expand skilling missions to sustain collaboration [292-300][280-288][276-279]. All participants agreed that AI progress cannot be achieved in silos; multistakeholder collective action, anchored by the Hamburg Declaration, is essential to translate high-level principles into measurable impact [376-379].


Keypoints


Major discussion points


Responsible AI governance is essential to avoid widening equity gaps.


The UNDP opener stresses AI’s development is “not equitable” and warns of an “AI equity gap” that could worsen inequality if not responsibly managed [1-5]. Bärbel Kofler expands on this, describing a “power gap” manifested in venture-capital flows (≈ 17 % to the Global South) and data-center capacity (≈ 0.1 % in the Global South) that must be closed through government-led frameworks, energy policies, and open-source investment [49-60][61-63].


Multi-stakeholder partnerships are the backbone of inclusive AI ecosystems.


Kofler emphasizes that AI “depends on multistakeholder engagement” and must involve civil society, governments, and the private sector [38-40]. Nakul Jain notes that technology is the easy part; the real challenge is “institutional mechanisms” and embedding solutions from day one through government, technical partners, and field-level capacity-building [146-152][158-166]. The moderator’s question to Arundhati also frames the issue as “how do we distribute responsibility between different stakeholders?” [84-87].


The Hamburg AI Declaration provides concrete, measurable commitments.


Since its adoption, the German ministry has “trained 160 000 people” (actually 190 000) and delivered AI building blocks, diagnostics, and datasets, exceeding targets such as 12 building blocks (now 15) and 30 diagnostics (now 55) [238-246]. Ongoing projects illustrate impact: satellite-data services for Kenyan farmers, cervical-cancer detection in Cambodia, and multilingual datasets for India [250-257].


Private-sector actors contribute by democratizing technology, skilling, and scaling solutions.


Arundhati describes Salesforce’s “1 % profit, 1 % product, 1 % time” model, massive up-skilling of 3.9 million “Trailblazers,” and the role of digital ID and UPI in India’s financial-inclusion success [92-99][110-118]. Wadhwani AI positions itself as a “convener of technology for social good,” bridging startups, governments, and NGOs to move pilots into field deployment [173-175].


Future challenges and opportunities: shared repositories, AI assurance, greening AI, and appropriate model choices.


Jain calls for a global marketplace of solutions and regional evaluation hubs to standardise “AI assurance” across countries [292-301]. Speaker 1 highlights the need for sensing infrastructure, compute equity, and “greening of AI” through resource-aware designs [210-218]. In low-resource settings, traditional ML often outperforms large language models, prompting a shift toward small, task-specific models [363-365].


Overall purpose / goal


The panel convened to examine how AI can be harnessed responsibly for sustainable development, identify governance gaps, showcase concrete actions (e.g., the Hamburg Declaration), and chart collaborative pathways that involve governments, international bodies, the private sector, academia, and civil society.


Overall tone


The discussion begins with a formal, cautionary tone about risks and inequities, moves into a collaborative and solution-focused mood as participants share concrete examples and successes, and ends on a pragmatic, forward-looking note emphasizing collective action, concrete commitments, and the need for new partnership mechanisms. Throughout, the tone remains constructive and optimistic, shifting from problem-identification to actionable recommendations.


Speakers

Robert Opp


– Role/Title: Representative and Chief Digital Officer, United Nations Development Programme (UNDP)[S10][S11]


– Area of Expertise: AI for sustainable development, digital transformation, multistakeholder partnerships


Bärbel Kofler


– Role/Title: Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development; Member of the Bundestag[S13][S14][S15]


– Area of Expertise: Government policy, international development, AI governance, sustainable development


Arundhati Bhattacharya


– Role/Title: Chairperson and CEO, Salesforce South Asia; Former Chairperson, State Bank of India[S16][S17]


– Area of Expertise: Digital transformation, financial inclusion, AI ethics, technology for inclusive growth


Nakul Jain


– Role/Title: CEO and Managing Director, Wadhwani AI Global[S18][S19]


– Area of Expertise: AI solutions for underserved communities, impact-focused AI deployment, multi-stakeholder partnerships


Speaker 1


– Role/Title: Representative of Tata Consultancy Services (TCS) (inferred from discussion of TCS endorsement)


– Area of Expertise: Responsible AI, AI evaluation platforms, greening AI, industry-academia collaborations


Audience Member 1


– Role/Title: Founder, Corral Inc[S4]


– Area of Expertise: AI/ML technologies (inferred from question on LLM vs. traditional ML)


Audience Member 2


– Role/Title: Audience participant (part of a German group; specific affiliation not specified)[S1][S2][S3]


– Area of Expertise: (not specified)


Audience Member 3


– Role/Title: Undergraduate student, Economics, University of Delhi (as stated in transcript)


– Area of Expertise: Economics, interest in AI’s impact on business models


Additional speakers:


(None – all speakers identified in the provided list appear in the transcript.)


Full session report: Comprehensive analysis and detailed insights

The session opened with a stark warning from the UN Development Programme that, while artificial intelligence (AI) can be a powerful catalyst for sustainable development, its current trajectory is “not equitable” and risks widening an “AI equity gap” that could exacerbate existing inequalities if responsible-use commitments are not put in place [1-5]. This framing set the tone for a discussion centred on how to harness AI’s benefits without deepening social and economic divides.


The moderator, Robert Opp, introduced the panel and linked the conversation to the Hamburg Sustainability Conference, which is hosted annually and produced the Hamburg Declaration on AI for Sustainable Development – a set of principles and measurable commitments endorsed by governments, private firms and civil-society actors [1-5]. The declaration was presented as a concrete mechanism to move from high-level rhetoric to actionable steps.


Bärbel Kofler, Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, stressed that AI’s future depends on “multistakeholder engagement” involving governments, civil society and the private sector [38-40]. She identified a “power gap” rather than an “innovation gap”, noting that only about 17 % of venture-capital flows go to the Global South despite it housing over 90 % of the world’s population, and that data-centre capacity in the South is a mere 0.1 % of global capacity [55-58]. To close this gap, Kofler called for legal frameworks, coordinated global policies, investment in energy-efficient infrastructure, open-source tools and large-scale vocational and university training programmes that bring small- and medium-sized enterprises into AI development [61-63][75-80].


Arundhati Bhattacharya, Chairperson and CEO of Salesforce South Asia, argued that technology must be “democratised” to have impact [92-93]. Salesforce’s “1 % profit, 1 % product, 1 % time” pledge directs resources to the non-profit sector, while its “Trailblazer” programme has up-skilled 3.9 million users in India [96-104][100-104]. She illustrated the transformative power of digital tools through India’s financial-inclusion journey: biometric IDs from UIDAI and the Unified Payments Interface (UPI) enabled direct subsidy transfers, turning dormant bank accounts into cash-flow assets that reduced borrowing costs for low-income borrowers [108-118][121-126][124-129]. Bhattacharya stressed that policymakers must create ethical infrastructure, protect data privacy and ensure that technology is offered “in an ethical manner” [124-129]; she also emphasized that large corporates should self-regulate to avoid heavy-handed external regulation, a stance that contrasts with Kofler’s government-led approach [280-283].


Nakul Jain, CEO of Wadhwani AI Global, positioned his organisation as a “convener of technology for social good”, bridging startups, governments and NGOs to ensure AI solutions move from labs to the field [173-175]. He argued that building the technology is “the easiest part” and that success hinges on institutional mechanisms that embed solutions from day one within existing government programmes [146-152][158-166]. Jain cited two pilots: an oral-reading-fluency tool for Gujarat schools co-developed with the state government and technical partners, and a tuberculosis-diagnosis project that involved the Indian Council of Medical Research to define evaluation criteria from the outset [153-168][170-176]. He further noted that in low-resource settings traditional machine-learning models often outperform large language models (LLMs) because of bandwidth and device constraints [350-354].


Dr. Sachin Loda, Chief Scientist & Head of Research at Tata Consultancy Services, expanded the discussion to data infrastructure and compute equity. He highlighted the need for extensive sensing networks, such as low-cost air-pollution sensors, to generate high-quality, high-velocity data for AI analytics [192-196][197-199]. He noted India’s responsible-AI task force, AI Centres of Excellence (e.g., the collaboration with IIT Kanpur), and initiatives to “green” AI through resource-aware model design and a “quantum valley” partnership with IBM and the Andhra Pradesh government to explore quantum-hardware solutions [200-207][210-218][270-274].


The panel then returned to concrete progress under the Hamburg Declaration. Kofler reported that Germany’s ministry exceeded its training target, delivering 190 000 AI-related trainings against a commitment of 160 000 [236-241]. The ministry also released 12 AI building blocks for climate action (now 15) [242-246], 30 AI diagnostics (now 55) [247-250], and 55 open datasets [251-254], with pilots supporting satellite-data services for Kenyan farmers, cervical-cancer detection in Cambodia, and multilingual datasets for India [242-246][247-250][251-254].


Areas of consensus emerged across the discussion. All speakers affirmed that responsible AI governance is essential to prevent widening equity gaps [1-5][49-58][124-129][149-164][200-203]; that multistakeholder partnerships-linking government, private firms, academia and civil society-are the backbone of inclusive AI ecosystems [38-40][146-152][280-288][250-257]; and that large-scale capacity-building programmes are critical, as evidenced by Germany’s training numbers, Salesforce’s Trailblazer community and the teacher-training components of Wadhwani’s pilots [236-241][96-104][165-166][75-80]. Participants also agreed on the importance of open-source data and tools to democratise AI [80-81][192-199][95-99].


The panel highlighted several nuanced disagreements. Kofler emphasised a government-led approach-legal frameworks, open-source investment and state-funded training-to close the power gap [55-58][61-63][75-80]. Bhattacharya advocated for corporate self-regulation, arguing that large firms must proactively adopt ethical standards to avoid heavy-handed external regulation [280-283]. Jain positioned his organisation as a neutral convener, proposing a global repository of AI solutions and a marketplace of shared playbooks, governance frameworks and talent pools to facilitate cross-border deployment [292-298]; he also called for regional AI-assurance hubs to standardise evaluation criteria across countries [298-301].


During the Q&A, an audience member asked how to bring together fragmented, competing initiatives. Kofler and Jain responded that coordinated frameworks, shared repositories and “hand-holding” mechanisms are needed to avoid duplication and to align disparate efforts around a common purpose [300-306]. This echoed the earlier call for a global marketplace and highlighted the urgency of consolidating initiatives.


Looking forward, the panel identified further opportunities. Jain reiterated the need for a global repository and marketplace, as well as regional evaluation hubs, to enable startups to scale tools from India to Ethiopia [292-298]. Bhattacharya highlighted the need for expanded skilling missions through industry bodies such as NASCOM and AICTE [280-288]. Dr. Loda stressed the importance of building affordable sensing infrastructure, repurposing legacy hardware and developing quantum-valley capabilities to narrow the compute gap between the Global North and South [210-218][213-218]. All agreed that “technology is the easiest part” and that lasting impact depends on embedding AI solutions within existing institutional frameworks and providing ongoing support for end-users [146-152][158-166].


In closing, Opp reminded the audience that the Hamburg Declaration is a living instrument that invites further endorsement via a QR-code link [376-380], and he summarised the overarching message: AI can accelerate sustainable development, but only through coordinated, multistakeholder action, robust governance, and measurable commitments can the risk of deepening inequities be avoided [376-380]. The discussion outlines a pragmatic roadmap for responsible AI that balances optimism with concrete, accountable actions.


Session transcript: Complete transcript of the session
Robert Opp

From the perspective of the UN Development Program, certainly we see a concern with what is happening in the development and adoption of AI in that there’s no question and we are very convinced that AI can have such a powerful impact on sustainable development in a positive way. It can help us really close some of those gaps that we see in persistent development challenges. However, we know that the way that it is evolving now is not equitable. And if we do not have a kind of commitment to responsible use of AI, we fear that the AI equity gap could actually get worse or inequality could get worse. And more so, it also can be harmful in some cases if we’re not applying AI responsibly.

So we want to get into then the point about… How do we actually address some of those challenges? How do we get some of the… measures in place or the kinds of commitments, principles that we need to have as an overall community, especially in the international development community, but beyond in terms of private sector and others as well. In terms of what are our commitments to making sure that we are deploying and using AI in a responsible way and building AI ecosystems. So this does tie in with the process that has been a feature of previous AI summits as well as the Hamburg Sustainability Conference that is hosted every year in the city of Hamburg and sponsored by the government of Germany with other partners like UNDP in the city of Hamburg.

And as part of that process, we have introduced a declaration on AI for sustainable development, the Hamburg Declaration. And so we’ll talk about that a little bit in the context of this conversation after exploring some of the kind of opening thoughts around multistakeholder partnerships in general. So, I’m going to start off and introduce our panelists and then we’ll go through a couple of rounds of questions. Be thinking about your questions as well because I think that we’ll have time to go to you as well for a Q&A and so happy to get your thoughts and get some interaction with the panel as part of this session. So, with that, I have a very distinguished panel to introduce to you and it’s a real pleasure to have them here today.

I would like to introduce, sitting on my left, Dr. Bärbel Kofler, who is the Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development. She’s a long-standing champion of human rights and development, a member of the Bundestag since 2004, and she plays a key role in shaping Germany’s global development policy. Thank you. Yes, exactly. We are also honored to have Ms. Arundhati Bhattacharya, who’s the chairperson and CEO of Salesforce South Asia, a transformative leader with over four decades of experience. She made history as the first woman chairperson of the State Bank of India, where she led one of the country’s most significant digital transformation journeys. She oversees a huge part of Salesforce business in this region and beyond, shaping how technology partnerships drive innovation, skills and inclusive growth.

We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadhwani AI Global. Nakul is a mission-driven technology leader. He works to advance AI solutions for underserved communities across the Global South. And his team has done 40 deployments of AI reaching more than 150 million people in use cases such as healthcare, agriculture, and education. And we’re also joined by Dr. Sachin Loda, who’s the chief scientist and head of research at Tata Consultancy Services. He’s a leader in cybersecurity and privacy research. He heads Tata Consultancy Services’ work on trustworthy AI, quantum resilience, cloud de-risking, and privacy by design. And that is all about translating cutting-edge research into real-world, award-winning innovation.

Please join me in welcoming our panelists. Okay, so let us start. Dr. Kofler, I’d like to start with you, please. You know, you represent the government perspective here on this panel. What do you think are the difficulties with the distinct roles of government, but also some of the other players like… private sector, civil society and international organizations, the roles that they play in building AI ecosystems that are both innovative and inclusive.

Bärbel Kofler

I used this one in the meantime. Thank you for the question and thank you for having me also as a representative of the government. But I’m very happy to be on a panel with scientists, with academia and with private sector because at the end of the day, artificial intelligence and how we shape really the future with that is depending on multistakeholder engagement. There has to be an engagement from all parts of society and also including civil society because it’s a broad outreach in our societies we have to undertake. So that’s why I think it’s very important to discuss it. You were pointing out challenges and advantages of artificial intelligence. And I think there is the role.

I think there is the role of governments coming in, because, yes, there are tremendous advantages of artificial intelligence. In my sphere of politics, we discuss about how we can use that to detect diseases better through AI, cancer in various areas. We discuss about climate change and how we can make prediction applicable for everybody on the ground. We discuss about administration and the advantages in administration AI could offer. But at the end of the day, all those hopes will come into reality if we shape that in a framework, in a legal framework, in a framework we coordinate with partners around the globe and within society, which makes all those advantages accessible for everybody, for all parts of society, for any citizen to address its government through AI conversation in its own language, for example, to smallholder farmers if they get the chance to make use of it, to doctors in remote areas who can really then detect with the new technology diseases or whatever.

So we have to close the gap. And I would say it’s not an innovation gap, it’s a power gap. Because innovative people are existing around the globe. Ideas are created around the globe in all spheres of society. So I’ve been here on a panel or at a session, this brilliant young Indian startup, yeah, how would I call it, people, I don’t know, I forgot the English word, sorry. These brilliant people who are creating startups. And that is possible all over the globe, but the environment has to be there. And if you look how venture capital, for example, is distributed, and we know that, I think, it’s 17 % of venture capital only is in those parts of the world who are representing more than 90 % of the people.

So we feel a power gap at the end of the day. If you look about where you have data center resources, it’s even worse. Global North, it’s almost all the data centers available capacity and only 0.1 % is somewhere in the Global South. So there is a big gap and we have to overcome that gap. We have to close that gap. And I think that’s something governments have to do also and where they utilize to create an environment. If it’s coming to energy consumption, if it’s coming to an enabling environment for research, if it’s coming to skill training for everybody. I learned in one of the sessions that we should change our mindset and everybody should get a job.

And I think that’s something that we should do. But to do so, we have to invest in the mindset and in the skills of all users. We have to work on vocational training and university training to make research, education, and the needs of even small and medium-sized enterprises heard and linked together to bring small and medium-sized enterprises into the position to participate also from that new technology.

Not only the big players. So all those things need framework and need governance. And we have to make sure that the outcome, the research, the results, the data sets are available for everybody. That’s why we are also investing in open source. It’s something we very much are aligned with the Indian ideas because if we don’t do so, the first thing, the advantages are

Robert Opp

Okay, now it’s working. So framework and governance, important factors to have in the multi-stakeholder partnerships. Arundhati, maybe let me turn to you with a very similar question, but taking this from the private sector perspective. How do you think the responsibility should be distributed between the different stakeholders? And what does industry or private sector companies, tech companies in this case, uniquely bring to the table? And where is partnership essential?

Arundhati Bhattacharya

Thank you and good morning, every one of you. So I have been very fortunate in the fact that I have worked with two organizations. The one that I worked with initially was the State Bank of India, which is the largest bank in India and also the bank that really and truly spearheaded the financial inclusion program of the current government, which is the PM Jan Dhan Yojana. And while doing that, I realized that any improvement in technology, if it is not really democratized, then it doesn’t really have an impact. If you want it to have an impact, you need to democratize technology. That’s the first thing. The second thing is currently I work at Salesforce. And Salesforce, again, it was set up with the intention that business is a platform for change.

We have what we call a one-by-one-by-one mission, which is we contribute 1 % of our profits or equity, 1 % of our products and 1 % of our time to the community, to the non-profit community. Of course, in India, it is 2 % of the profits because that’s what the law demands. And while doing that, we ensure that the non-profit sector not only has access to our products but also knows how to use them and use them to the best of their abilities. Along with that, we also do a lot of work in skilling various people. So we call people who basically are trained in the Salesforce technology as trailblazers. And India has 3.9 million of them, the second largest after the US.

And this is a community that has been literally nurtured by us. So it’s not only a question of ensuring that we are getting our products out to the community, to the non-profit sector, to the various enterprises where we obviously market our products to them and then help them implement it in their organizations. But we are doing a lot of work on the skilling front as well because we feel that is something that’s very, very essential. Without that, India will not really be able to take the advantage of all that technology brings to it. But one thing is very clear in our heads. And that is that a populous nation like India can never really have the standard of living that it deserves unless technology is a part of the play.

We tried this financial inclusion initiative for 15 long years before we actually were successful in 2014. And the reason for that was technology. Because by that time, the Unique Identification Authority, which gave us our biometric marker to enable us to get the unique identification number, helped us do the KYC, and second, the spread of the mobile networks enabled us to actually approach the 600,000 villages which is India. Before this we never had that connectivity. Now, it was because of this that these people then got accounts. 97 % of these accounts were without any balances, and a lot of people asked us, was this merely a ploy, was this merely, you know, some kind of action without any meaning? But very soon we found that those accounts started getting money, because the government could then target the subsidies they were giving to below-poverty-line people directly to the citizens, without having to go through any middlemen. In fact, some politician at some point of time had said that only 13 paise out of every one rupee of subsidy actually goes to the person who needs it. Here, when the money started flowing directly into the account, the whole of the rupee was going to the person who needed the subsidy.

Now, over and above that, we then had UPI, the Unified Payments Interface. And what did that do? It enabled even small customers and small shopkeepers to start taking money in digital form. And because they were taking money digitally, these accounts of theirs, which had been opened, started showing a cash flow. The moment an account shows a cash flow, bankers become interested in lending you money, because they know there is a cash flow to back it up and enable you to repay. So a person who was taking a loan of, say, 2,000 rupees at 10% a month suddenly becomes eligible for that 2,000 rupees at 7% a year.
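As an editorial aside (not part of the speech), the arithmetic behind that rate comparison can be sketched as follows, using the figures the speaker quotes:

```python
# Editorial sketch of the speaker's comparison: a 2,000-rupee loan at an
# informal lender's 10% per month versus a bank's 7% per year.
principal = 2000.0

# Informal credit: 10% of the principal every month (simple interest).
informal_yearly_interest = principal * 10 / 100 * 12   # rupees per year

# Formal bank credit, available once the account shows a cash flow: 7% a year.
bank_yearly_interest = principal * 7 / 100             # rupees per year

ratio = informal_yearly_interest / bank_yearly_interest
print(informal_yearly_interest, bank_yearly_interest, round(ratio, 1))
# 2400.0 140.0 17.1
```

In other words, the same borrower's yearly interest burden drops from 2,400 rupees to 140 rupees, roughly a seventeen-fold difference.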

Imagine the difference that can make in the life of that person. And this is all on account of technology. AI is not going to be any different. AI is going to solve a lot of problems which otherwise, in a populous nation like ours, cannot be solved. And in order to solve them, we need to understand that people are eager to take up the technology; they will take it because it improves their lives. But for them to take it, it is up to the policy makers to make those interventions to ensure that they are not being taken for a ride, that whatever is being offered to them is offered in an ethical manner, and that we are creating the right kind of infrastructure to enable them to actually access it.

That the right to privacy of their data is properly maintained. So I think the policy makers, the people who are conducting all of this, have a far bigger responsibility. Because adoption is not a problem in India. Adoption will happen as soon as people realize that it is helping them in their regular lives; there is not going to be any pessimism on adoption. If you go to the floor where the expo is, you will actually see how interested people are in finding out how it is going to improve their lives. But it is the people who make the policies, who make the infrastructure available, who make the initiatives available, who need to take the responsibility. And companies like ours need to take the responsibility for how we skill people: enable them to understand what is good for them and what is not, make them understand that there is both good and bad and that they need to choose the better, so they are not taken for a ride. So all of us have a role to play, and I think we need to be aware of those roles and play them well. Thank you. Thank you so much.

Robert Opp

So actually, those two answers complement each other so nicely because, as you were saying, Dr. Kofler, there is the governance and the framework, and this is key to the Indian experience as well. Government sets down the framework, the rails, but then the private sector has such an important role in scaling, in innovating, in helping people be successful and interact with that. So I’m going to turn now to Nakul. Your Wadhwani AI Global has done a lot of work in the space of multi-stakeholder partnerships and the rollout of AI for specific use cases. From your experience, where have multi-stakeholder partnerships already demonstrated tangible impact in advancing responsible AI for development?

And what are the conditions that helped ensure these collaborations translated into sustained impact rather than remaining isolated efforts?

Nakul Jain

Thank you for that question, Rob, and thank you for having me here. I hope this is working. All right, okay. Thank you, everyone, for being here. My answer will be from my experience of deploying some of these solutions in the field: what has worked for us and what has not. What we have realized is that building the technology, at least for an application organization like ours, is the easiest part of the entire spectrum. It is everything around it that becomes tedious to get done, which essentially means there is a need for a multi-stakeholder ecosystem that can bring in expertise to take care of everything that sits around the technology.

The ecosystem that works is the one that ensures from day one that we are able to do everything we need to do. What will be the institutional mechanism for getting something done? How will the solution, the use case, be institutionalized in the ecosystem? How will it be embedded within the framework we are speaking about, within the department or ministry we are talking about, within the problem area and among the people who will be using it? That cannot be an afterthought; it has to come in from day one. And I’ll give you a few examples of solutions we were able to deploy at scale and what helped them work.

So we built a solution in education around oral reading fluency, essentially to assess students on their reading abilities. The technology part, like I said, was not very difficult to do. But this is not something that Wadhwani AI Global could have done alone. It required collaboration with government, ownership from government, and, as Arundhati also mentioned, it required policy makers to own the problem. We, as an organization, came in as a facilitator, but the Government of Gujarat led the entire initiative: it made sure data was available and helped us find the right partners who could then be leveraged to annotate some of that data. The other thing is that, rather than trying to create this solution in silos, we also started thinking about where it would get embedded.

Tomorrow, if this has to be used, with tons and tons of applications already being pitched to government for education and pitched to teachers, how do you ensure that it actually gets adopted? One way to do that was to understand how government currently runs its programs. So we started working with the government and its technical partner on this together. Rather than thinking of this as a solution we created and an app we hand over, we started figuring out how whatever we develop can plug into the existing system in classrooms, wherever they were, and then how to ensure there is a monitoring mechanism established so that the government is fully capable not just of following up on adoption, but also of figuring out whether there is a genuine impact from the technology.

All of this was only possible because we had this collaboration between the government, the technical partner the government was working with, us as the AI partner, and the partners working with the government at the school level to support teachers programmatically, to help them build capacity, to help them understand it, and also to counsel them, because there is a constant fear of AI replacing jobs. So there is handholding required at the field level, and as a tech organization we could not have done that; you need that partner as well.

Moving to a slightly different example in healthcare: we have done a lot of work in tuberculosis. Some things remain common, namely the government partnership, data collection through them, and the programmatic partnership. But the additional support we got was from ICMR: how can we start thinking about evaluation from day one? Health being a much more sensitive area, directly impacting lives, when our models drive decisions, how do you ensure that evaluation and the success criteria are not an afterthought? So you don’t just work with the government; you also work with the agency that will eventually evaluate it, so that from the first day you are optimizing for the right parameters and results. These are some of the examples, Robert, I can think of where technology, government, and ecosystem partners came together and delivered something.

Robert Opp

Thank you, Nakul. Just a quick follow-up on that: how would you describe the space that Wadhwani AI Global occupies? You’re not government, you’re not exactly private sector; you’re kind of an integrator, right?

Nakul Jain

Yeah, absolutely. I think what we would like to call ourselves is a convener of technology, especially for social good. Our role essentially is to ensure there is an impact from leveraging artificial intelligence, and working out who all need to participate is something we do together with partners. We work with governments, whether that requires capacity building, advisory support, or actual product development; we help government with everything. What we have also realized is that while government is well-intentioned, it has many priorities, and artificial intelligence, even though it has now become a buzzword, still needs a lot of hand-holding to make sure that some of these solutions don’t just end up living in labs and in rooms like this, but actually go to the field.

So our role becomes even more important: to make sure that everyone who needs to participate comes along, participates, and ensures that there is an actual impact in the field. Thank you.

Robert Opp

And the reason I asked that is because, to go over to Sachin: there is the role of private sector innovation, and there is the government role. International organizations like UNDP also bring a kind of multilateral and global perspective on some of these things. But companies like Tata Consultancy Services also serve kind of in the middle there as well. And I’d love your perspective on that same question: where are you seeing multi-stakeholder partnerships resulting in tangible results?


Speaker 1

To contextualize: it’s not just true for health, it’s true for any domain you look at. And yes, data is fragmented and siloed; that’s true even within an enterprise, and of course true at a bigger scale. I personally believe that bringing this data together is important to create an open data ecosystem, but I believe the problem lies somewhere else: in the sensing infrastructure you need to put together to have high-quality, high-volume, high-velocity data. Take the example of air pollution monitoring. You could probably do a much better job if you had a sufficient number of sensors deployed across the region of interest, and they would probably need to be large in number.

You will have to think about how you make them cheaper, faster, better. And having done that, it will also depend on how you analyze, process, and derive insights; that’s where AI will come in. So I think government can play a very big role in catalyzing this, and putting together a great sensing infrastructure could be thought of as part of the digital public infrastructure. On top of that, private and public entities could innovate, and academics and the inventor community could innovate. So what are we doing here? The Indian government has actually been very proactive on the overall responsible AI mission.

I was part of the task force on responsible AI that was set up by the principal scientific advisor to the Indian government, Professor K. VijayRaghavan. This happened about eight years back, and we tried to study the implications of AI and the responsible facet of AI in the Indian context then. Following that, a lot of mission projects were launched, and I have seen that there are now different AI CoEs supported by the government. We are part of the AI CoE for sustainability in collaboration with IIT Kanpur, and it’s going live. In fact, some of you might have attended a session by the IIT Kanpur team yesterday on this very topic, where we are looking at a lot of interesting problems in the overall sustainability domain.

So that’s just data. (Moderator: If you could just mention your other two points super quickly.) Yeah, yeah. So I’ll just mention compute, which is, again, a very important facet. There is a big gap between what is available in the Global North versus the Global South. I have just two ideas to share. One, we should think about how to repurpose legacy hardware; a lot of high-tech companies have innovated along those lines, and it’s not necessary to have the latest hardware, you could do a lot of innovation around it. Two, of course, we should explore the new hardware that is becoming available. So TCS, IBM, and the Andhra Pradesh government have come together to create a quantum valley in India, which could open ways for that.

So I think these are some quick remarks I thought I’d make.

Robert Opp

Great. I mean, the depth is incredible, right? You’re talking about very concrete collaborations that are making this a possibility. Okay, I think we’ll go straight into a quicker second round, and then we’re going to go for some comments from you. I want to bridge this now: we’ve been talking about multi-stakeholder partnerships and how different stakeholders have different roles to play, but what’s clear is that we also need collective action. So I wanted to think about how we move toward collective action in these spaces for the responsible use of AI, something that gives it a global framing.

And as I mentioned at the beginning, one of the things we have been doing, as part of the Hamburg Sustainability Conference and the Hamburg Declaration on responsible AI for sustainable development, is really looking at this space of collective action. So a couple of years ago, we did the first Hamburg Declaration. We have about 50 organizations that have endorsed the declaration, and it has a number of principles related to the responsible use of AI. And Dr. Kofler, I’m going to turn back to you to ask a question especially about that process: since the adoption of the Hamburg AI Declaration, what tangible progress have we seen, and what concrete actions do you think are now required to move from high-level principles to sustained and scaled impact?

Bärbel Kofler

Well, thank you for mentioning concrete action, because that’s actually what it is really all about. We came up with that idea at the Hamburg Sustainability Conference, not because we wanted to create another system, another paper, or an invitation to another conference. We really wanted to come up with commitments every stakeholder has to undertake when they sign that memorandum. My ministry also signed it, and we have concrete, tangible, measurable commitments to fulfill. I would love to give you a few examples; for the sake of time, I’m trying to be very short and very brief.

For example, one of the commitments was training people. We committed to train 160,000 people within one year. We have fulfilled that already, and that makes me very proud, because we have already trained 190,000. That is the first step we committed to. We committed to open up 12 AI building blocks, especially for using AI for climate action; we are now at 15. We also committed to 30 AI diagnostics, and we achieved 55 data sets. So that’s what it’s all about. We are not signing something we don’t want to do and then spending another year applauding ourselves for having signed something. We really want to come up with concrete steps, and that’s what my ministry has done so far.

There are examples all over the world where we are working with partners. We are working, for example, in Kenya, on analyzing satellite data and making it available for farmers. We are working with Cambodia, where it’s about medicine to detect cervical cancer, for example. And we are working with the Indian government. Your government is doing a fantastic project including languages of the subcontinent, analyzing, as we were just talking about, the data sets for many languages which have to be included in the overall AI framework. So we are supporting Bhashini with nine data sets; we supported the collection. That’s what it’s all about at the end of the day, and that will be my last sentence.

I invite everybody in the room, and maybe you can spread the thought, to join the Hamburg Declaration, because we started it and it’s continuing. Government should be part of it, the private sector should be part, civil society should be part, academia should be part, and they should come up with concrete, measurable, tangible commitments which can then be fulfilled.

Robert Opp

I’m glad you mentioned that, because the QR code you see on the screen here gives you more information. If you would like to endorse the Hamburg Declaration, just go to the website there. We’re getting short on time for audience interaction, but Sachin, really quickly: TCS endorsed the Hamburg Declaration last year. What progress have you seen since then?

Speaker 1

Yeah, so I’ll just quickly point to some activities we have been doing since then. As I said, we are part of responsible AI deliberations in India and elsewhere, and we have launched a big program around it, where Carnegie Mellon is now our academic partner; we are also talking to some Indian academics. As you rightly said, these are very hard problems, and they require significant collaboration across sectors. We are also building our own technology to evaluate, calibrate, and help AI engineers build more responsible AI; it’s called the Trusty Platform, and we can share more details offline. We are also very keen on the greening of AI, and that requires a lot of resource-aware AI work. This is again very significant, and that’s something we are working on.

Robert Opp

Fantastic. And you just mentioned greening AI; that’s one of the key principles of the declaration. Arundhati and Nakul, I’ll turn back to you now for some quick thoughts before we go to Q&A. As we look ahead, we’ve talked about the Hamburg Declaration as one platform, but where do you see the strongest opportunities for new types of partnerships going forward? Things that maybe don’t exist yet but should? What do you think is the next wave of this?

Arundhati Bhattacharya

Well, you know, as an organization, Salesforce has had an Office of Ethical and Humane Use of Technology since 2014. I think most large corporates have to have a self-regulatory feature like this. It is important because unless you self-regulate, you will have regulators coming down with a very heavy hand at some point or other, and that is not something you want if you want to keep innovating. So that’s something all of us should look at. Having said that, the other way we can take part with the stakeholders is to continue to co-innovate with many of our customers, those who are interested, and to take forward our participation in many of the forums that are there.

For instance, the National Skilling Mission, the skilling mission undertaken by NASSCOM, which is the IT industry body. We also do internships in partnership with AICTE, the All India Council for Technical Education. There are very many apex bodies willing to give us space for innovation, for the private sector to come in and be part of the initiative. The advantage of going through them is that they have far better reach to the colleges and communities, which enables us to bring our products directly to them. So if we look at all of these taken together, that really makes the story much better and much stronger.

Robert Opp

Thank you very much. Nakul, same question to you.

Nakul Jain

So in the interest of time, I’ll try to be quick. Two things we feel are missing in the ecosystem. One is a global repository of solutions, which we are trying to work on, and this is not just a DPG or DPI platform; it can include the private sector too. If I am a startup in India and I have built a good tool, how do I sell it and deploy it in Ethiopia? It is very difficult right now. So is there an opportunity for us to create a marketplace of sorts that has shared solutions, shared playbooks, shared governance frameworks, and a talent pool, of the people who worked on these solutions, that could be leveraged?

So I think that’s one thing that is currently missing. The other is around AI assurance. Is there an opportunity for us to create regional evaluation hubs for a given region? I see a lot of organizations struggle within a country to get solutions evaluated. Now imagine a situation where we are trying to take a solution from one country to another, but there are no clear guidelines on how the evaluation yardsticks would differ from one country to the next. So that could be another area of collaboration among the various organizations who can set this up.

Robert Opp

Great ideas from you both on very concrete things. Okay, so we have time for a couple of questions. All right, we’ve got a lot of hands. We’re going to take a couple of questions at the same time, and then we’ll go back for one more round of answers from the panelists. So I think we’ll start here, and then I saw another hand here. We’ll try to do three quick ones. Okay: one, two, three.

Audience Member 1

My question is to Mr. Robert and Nakul. What I want to ask is: in comparison to LLMs, your ChatGPTs and Geminis, what is the value being added versus traditional ML, I mean your classical improvements over functions, et cetera?

Robert Opp

Okay. Then there was a question over here. There are two questions there. Yes, gentleman in the blue, and then beside him.

Audience Member 2

My question is to Mr. Wadhwani. You talked about bringing people together. There are fragmented initiatives, there are fragmented enabling services, but they are competing. So how do we bring them together for a shared purpose, and to create a bigger impact?

Audience Member 3

Okay. Hi, sir. Hi, ma’am. I’m a current undergraduate student of economics at the University of Delhi. My question is to you, Ms. Bhattacharya. First of all, I would like to understand, from your perspective, having served in both the private and public sectors, how you see all this debate that software as a service will be dead as a result of all this, and that the context window, the context you have, will actually be the most beneficial. One might say that valuations will be doubled by all the companies in this space. In that context, how do you create value for small enterprises?

TCS, for example, has a valuation of around $100 billion. Cursor, which is a very new company, has a valuation of $30 billion. So an employee count of 300 is creating that much value, while an employee count of 600,000 is creating the other. In that context, will the democratization of software and tech actually happen, or is it just big tech that benefits from it? There will obviously be benefits for people here and there, but in the larger context?
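As an editorial aside (not part of the question), the per-employee arithmetic behind the comparison, using the questioner’s figures as quoted and unverified:

```python
# Editorial sketch: valuation per employee implied by the questioner's
# (unverified) figures for TCS and Cursor.
tcs_valuation = 100e9        # ~$100 billion
tcs_employees = 600_000      # ~600,000 employees
cursor_valuation = 30e9      # ~$30 billion
cursor_employees = 300       # ~300 employees

tcs_per_employee = tcs_valuation / tcs_employees           # ~$166,667
cursor_per_employee = cursor_valuation / cursor_employees  # $100,000,000

# On these figures, Cursor's implied valuation per employee is about
# 600 times that of TCS.
print(round(cursor_per_employee / tcs_per_employee))
# 600
```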

Robert Opp

Okay, so some very technical questions here. Arundhati, do you want to start with that one first about how to create value for small companies?

Arundhati Bhattacharya

Yeah, so you know, over here, I don’t think we are talking merely about the value something creates on a particular day. And by the way, those valuations keep sliding up and down. Nobody really knows who the winner of the race is till the race is over, and the race is far from over. So to that extent, I would ask you not to be too pressured or too influenced by things like "SaaS is dead" or "the entire valuation will come to only these few companies". At the end of the day, if there are no users, what valuation will this company have? You need users. Now, who creates users?

Users are created by many, many processes and methods, and just having an LLM is not going to give you all of that. When a company actually works, there are workflows, governance rules, auditability requirements, observability; there is everything in there. So to think that one particular development by some particular company justifies that kind of statement is, I think, at this point a little too ahead of its time. Things will shake themselves out. Having said that, it’s also true that whether it is TCS, Salesforce, State Bank of India, or any other company, the ways in which we do our work are going to change.

The ways in which people take in technology, and how that influences their lives, will change. Companies that are amenable to the change, companies that take advantage of the change, are the ones that are going to sustain and grow in value. But my sincere request to you, because you’re a youngster, is this: don’t try to look at market cap while trying to create your company. Do the right things, and the market cap will follow. It’s not market cap that makes you want to work; it’s your work and your satisfaction, and being able to build something that really and truly helps people improve their standard of living, improve the way they live in this world. That is what should drive you, not the market cap numbers.

Robert Opp

And you wanted to add quickly to that?

Speaker 1

Yeah, I just want to add to that. Think of an LLM as something that is world-wide, and that’s a great advancement of AI. But you need something that is industry-wise, company-wise, context-wise, task-wise, right? We are far from that, and there are lots of challenges. Just before this, Arundhati and I were discussing the kind of data silos that exist even within an enterprise. So we are looking at AI, and agentic AI in particular, as something that is going to bring more connectedness within the enterprise. There is a significant amount of work to be done. But imagine this combined intelligence, where you have different agents coordinating and working together; it will probably help us get more.

So the sum of the parts being more than the whole is a very likely scenario. There are challenges to getting it done, and the nature of work may change, but there is plenty of work coming at us. LLMs have just solved part of the problem.

Robert Opp

Nakul, there were a couple of questions for you: how we bring people together for a shared purpose, and the specific question about LLMs versus ML, is that right? Do you want to take those? Sorry, we’ve got just a few minutes before we have to close the session.

Nakul Jain

So, LLMs versus traditional ML: in our experience, in a lot of places, specifically because we work in the impact sector, a lot of our work is for ground-level users, and resources are a big challenge. You’re talking about situations where the mobile phones are very basic, internet is a challenge, and adoption is an issue. In those scenarios, we have often found that traditional ML models have worked much better to serve the purpose, and large language models have not, and in a lot of cases are not even deployable. So there are genuine technical issues, because of infrastructure and low-resource settings, which have not allowed LLMs to reach where we would want them to reach.

Second, in such cases, small language models, or small AI, are what we have been moving towards, to serve a very specific purpose rather than trying to give general intelligence to some of these users in the field. So that’s a very quick answer; I could have gone on. To answer your question, sir, I don’t think we are trying to eliminate competition. The way we should think about this is that there are different ways organizations can collaborate. Some could collaborate based on geographical synergies, some based on expertise synergies, and some based on what part of the life cycle of the entire AI deployment they can collaborate on.

And absolutely, there will be consortia created. They will still continue to compete with each other, and that’s a good thing, to ensure there is also quality. But the idea is to foster collaboration where those synergies are possible; at least we start with that.

Robert Opp

Thank you. Sorry, sir, we’re going to have to close the panel because we have literally 10 seconds left on the counter. I just want to say thank you very much to our panelists. We have gone from the highest-level government frameworks to questions about the competitive landscape and how small companies can prosper, but the through line to all of this is that it doesn’t happen in silos. It cannot happen with only one stakeholder of society. This has to be multi-stakeholder, and we have to commit to collective action. So I again refer you to the QR code, which has more information on how to endorse the Hamburg Declaration if you’re interested. Please join me in thanking our panelists for their excellent insights.

And thanks to all of you; enjoy the rest of the summit. Thank you.

Factual Notes: claims verified against the Diplo knowledge base (6)
Confirmed (high confidence)

“Robert Opp acted as the moderator of the panel and is identified as a representative from the UN Development Programme.”

The knowledge base lists Robert Opp as the moderator and UNDP representative for the discussion [S15] and records his opening remarks on multistakeholder involvement [S96].

Confirmed (high confidence)

“Bärbel Kofler, Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, emphasized multistakeholder engagement for AI.”

Her role and focus on responsible AI are confirmed by entries describing her as Parliamentary State Secretary and a stakeholder in AI governance [S13] and [S14].

Additional Context (medium confidence)

“Only about 17% of venture-capital flows go to the Global South, highlighting a financing gap for AI development.”

The knowledge base notes a financing gap with less than 1% of venture-capital funding directed to Africa, providing related context but a different figure [S102].

Additional Context (medium)

“The UN Development Programme warned that AI’s current trajectory is “not equitable” and could widen an “AI equity gap”.”

General statements in the knowledge base acknowledge that AI development carries significant risks, supporting the concern about inequitable outcomes [S90].

Confirmed (high)

“Arundhati Bhattacharya, Chairperson and CEO of Salesforce South Asia, advocated for the democratization of technology and inclusive AI.”

Her involvement in building inclusive AI societies is recorded in the knowledge base, confirming her leadership role and focus on democratization [S105].

Additional Context (medium)

“Calls for large‑scale vocational and university training programmes to up‑skill SMEs for AI development.”

The knowledge base discusses skilling and education in AI, highlighting opportunities and challenges in India, which adds nuance to the reported training initiatives [S95].

External Sources (105)
S1
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S2
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S3
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S4
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S5
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S6
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S7
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Audience member 3
S8
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 3- Student -Audience member 6- Role/title not mentioned
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S10
High-Level Session 4: From Summit of the Future to WSIS+ 20 — – Robert Opp: Representative from UNDP Robert Opp’s comment broadened the discussion on environmental sustainability: “…
S11
WS #278 Digital Solidarity &amp; Rights-Based Capacity Building — Robert Opp: Okay. Thank you. Well, it’s a pleasure to be here. As Jennifer said, I’m Robert Opp. I come from the Unite…
S12
Day 0 Event #189 Toward the Hamburg Declaration on Responsible AI for the SDG — – CLAIRE: No role/title mentioned ROBERT OPP: Okay. Hello, everyone. This is a strange way of doing a workshop with e…
S13
Responsible AI for Shared Prosperity — -Barbel Kofler- Parliamentary State Secretary to the Federal Minister for Economic Cooperation and Development of German…
S14
GermanAsian AI Partnerships Driving Talent Innovation the Future — -Dr. Bärbel Kofler- Title: Parliamentary State Secretary to the Federal Ministry of Economic Cooperation and Development…
S15
Multistakeholder Partnerships for Thriving AI Ecosystems — -Bärbel Kofler- Parliamentary State Secretary at Germany’s Federal Ministry for Economic Cooperation and Development, me…
S16
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S17
S18
https://dig.watch/event/india-ai-impact-summit-2026/multistakeholder-partnerships-for-thriving-ai-ecosystems — We’re also joined by Nakul Jain, who’s the CEO and managing director of Wadwani AI Global. Nakul is a mission -driven te…
S19
Multistakeholder Partnerships for Thriving AI Ecosystems — – Arundhati Bhattacharya- Nakul Jain – Nakul Jain- Arundhati Bhattacharya
S20
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S21
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S22
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S23
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Aubra Anthony, Armando Guio-Español, Ojoma Ochai, Robert Opp, Anshul Sonak Yu Ping Chan: Good afternoon, everyone, and …
S24
War crimes and gross human rights violations: e-evidence | IGF 2023 WS #535 — Additionally, the need for a clear and applicable legal framework that incorporates open source intelligence and data fr…
S25
Opening and Sustaining Government Data | IGF 2023 Networking Session #86 — By implementing these recommendations, governments can utilize open data to improve decision-making, foster innovation, …
S26
Thinking Big on Digital Inclusion — In conclusion, while mobile penetration is high in Sub-Saharan Africa, access to advanced technology remains limited. Pr…
S27
Responsible AI in India Leadership Ethics &amp; Global Impact part1_2 — And last, enterprises. Like many of yours in this room, that are willing and excited to go first that really look at tra…
S28
Advancing Scientific AI with Safety Ethics and Responsibility — And also create more awareness about the main fundamental thing is that they will be expected to document whatever testi…
S29
Keynote-Ankur Vora — Policymakers must build inclusive governance, safeguards, and infrastructure to ensure everyone benefits
S30
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Awesome. Thanks so much, Maru. And as one of the PAI folks, thanks for being here, everyone. It’s great to see you all. …
S31
WS #123 Responsible AI in Security Governance Risks and Innovation — Michael Karimian: Thank you Yasmin, it’s a pleasure to join you all and thank you Yasmin not just for facilitating today…
S32
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups. Stakeh…
S33
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from gover…
S35
Artificial Intelligence &amp; Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S36
WSIS Action Line C7: E-Agriculture — Henry van Burgsteden: Thank you very much. Yes, I’m here. Dear colleagues, partners and friends, on behalf of Vincent Ma…
S37
A bottom-up approach: IG processes and multistakeholderism | IGF 2023 Open Forum #23 — The analysis emphasises the significance of multi-stakeholder engagement in policy processes, specifically in the contex…
S38
Open Forum #30 High Level Review of AI Governance Including the Discussion — Several concrete commitments emerged from the discussion:
S39
Creating Eco-friendly Policy System for Emerging Technology — In an era distinguished by rapid technological growth, Mwikali underscores the necessity for intergenerational dialogue …
S40
Open Forum #66 the Ecosystem for Digital Cooperation in Development — All speakers emphasised the importance of collaboration between government, private sector, civil society, and academia….
S41
Donor roundtable: Enabling impact at scale in supporting inclusive and sustainable digital economies — Additionally, the analysis emphasizes the importance of public-private partnerships between donors and the private secto…
S42
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The discussion acknowledged environmental and social challenges, including impacts from increased electricity generation…
S43
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S44
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Sustainable development | Infrastructure | Development The moderator emphasized the paradoxical nature of AI technology…
S45
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — For closing the context gap, Patel proposed three interconnected solutions. First, organisations must connect proprietar…
S46
Powering AI Global Leaders Session AI Impact Summit India — The rapid acceleration of AI technology expands the gap between those who can effectively use it and those who cannot. C…
S47
What policy levers can bridge the AI divide? — ## Forward-Looking Perspectives ## Infrastructure as Foundation ## Key Challenges and Opportunities ## Regulatory App…
S48
Multistakeholder Partnerships for Thriving AI Ecosystems — So we feel a power gap at the end of the day. If you look about where you have data center resources, it’s even worse. G…
S49
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S50
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — Initiatives by UNIDO and potential changes to financial structures reflect the international community’s acknowledgment …
S51
Keynotes — Both speakers, despite representing different institutional perspectives (government vs. human rights oversight), agree …
S52
Setting the Rules_ Global AI Standards for Growth and Governance — Very high level of consensus with no significant disagreements identified. This strong alignment across industry, govern…
S53
WS #219 Generative AI Llms in Content Moderation Rights Risks — Given the technical limitations of LLMs with low-resource languages, these contexts require a greater reliance on human …
S54
How Small AI Solutions Are Creating Big Social Change — But what very few people actually know is that the actual performance of what we do at the moment is not 99.999% So mo…
S55
Open Forum #3 Cyberdefense and AI in Developing Economies — Philipp Grabensee: you know, follows up on the session you had in Riyadh and I think we all agreed that the bottleneck i…
S56
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Ammari highlighted META’s open-source approach to large language models, explaining, “META has adopted an open source me…
S57
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Another concern addressed was the inherent biases and limitations of large language models trained on skewed web data. T…
S58
The rise of large language models and the question of ownership — What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate va…
S59
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S60
Artificial Intelligence &amp; Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S61
Laying the foundations for AI governance — – The potential for science-based approaches to provide common ground Artemis Seaford: So the greatest obstacle, in my …
S62
AI Meets Agriculture Building Food Security and Climate Resilien — The collaborative approach involving multiple stakeholders allows solutions to be deployed with confidence across differ…
S64
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S65
The Foundation of AI Democratizing Compute Data Infrastructure — Availability of top-performing open models is necessary but insufficient condition Open models, talent development, and…
S66
Open Forum #17 AI Regulation Insights From Parliaments — AI governance requires ongoing education for all stakeholders – politicians, policymakers, and the general public. This …
S67
10 Digital Governance and Diplomacy Trends for 2022 — Open standards, data, and software have shaped the growth of the internet for decades from open internet standards (TCP/…
S68
Host Country Open Stage — Nordhaug argues that digital public goods provide governments and organizations with greater control and sovereignty com…
S69
Waves of infrastructure Open Systems Open Source Open Cloud — While both support India’s technology development, there’s an unexpected disagreement on approach – Renu emphasizes open…
S70
WS #152 a Competition Rights Approach to Digital Markets — Government offices should use open source by default and transmit this to public procurement requirements Governments c…
S71
WS #123 Responsible AI in Security Governance Risks and Innovation — Michael Karimian: Thank you Yasmin, it’s a pleasure to join you all and thank you Yasmin not just for facilitating today…
S72
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups. Stakeh…
S73
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from gover…
S74
Open Forum #30 High Level Review of AI Governance Including the Discussion — AI governance can learn from Internet governance frameworks and mechanisms, but requires more extensive global partnersh…
S75
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Multi-stakeholder cooperation and inclusive governance frameworks are essential
S76
WSIS Action Line C7: E-Agriculture — Henry van Burgsteden: Thank you very much. Yes, I’m here. Dear colleagues, partners and friends, on behalf of Vincent Ma…
S77
Multistakeholder Partnerships for Thriving AI Ecosystems — This comment challenges the tech-centric view that dominates many AI discussions by revealing that technology developmen…
S78
Open Forum #33 Building an International AI Cooperation Ecosystem — – **Multi-stakeholder Approach and Inclusive Development**: Drawing parallels to internet governance, speakers stressed …
S80
Day 0 Event #189 Toward the Hamburg Declaration on Responsible AI for the SDG — Implementation and Accountability Opp emphasizes the need for concrete, implementable commitments in the Hamburg Declar…
S81
GermanAsian AI Partnerships Driving Talent Innovation the Future — Dr. Kofler referenced the Hamburg Sustainability Conference as an example of concrete commitments being made, emphasisin…
S82
Prosperity Through Data Infrastructure — The private sector’s role in producing innovations and democratizing technology is emphasized. The private industry help…
S83
UNSC meeting: Artificial intelligence, peace and security — Jack Clark:Thank you very much. I come here today to offer a brief overview of why AI has become a subject of concern fo…
S84
[Webinar summary] What is the role of the private sector towards a peaceful cyberspace? — Regarding the private sector as monolithic is a mistake from a policy-making and governance perspective, Dion pointed ou…
S85
Enhancing the digital infrastructure for all | IGF 2023 Open Forum #135 — Private sectors have the technological solutions
S86
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Sustainable development | Infrastructure | Development The moderator emphasized the paradoxical nature of AI technology…
S87
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — In conclusion, AI presents a unique opportunity for human progress and the achievement of the SDGs. However, careful con…
S88
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Artificial intelligence (AI) is reshaping the corporate governance framework and business processes, revolutionizing soc…
S89
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Costa Rica has chosen to lead by example. Together with the OECD, we’re leading the development of the OECD AI Policy To…
S90
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S91
Main Session | Policy Network on Artificial Intelligence — Brando Benifei: Yes. So obviously there have been around four important resolutions this year regarding AI. One was pr…
S92
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the Security Council, the discussions centered around the concept of accidental risks associa…
S93
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S94
AI for Social Good Using Technology to Create Real-World Impact — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for AI…
S95
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S96
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Robert Opp: Okay. Hello, okay. No, thanks for that. I think all of these interventions so far have drawn attention to som…
S97
WS #103 Aligning strategies, protecting critical infrastructure — – The need to move from high-level discussions to concrete, actionable measures
S98
Protection of Subsea Communication Cables — The discussion produced several concrete commitments and initiatives. The ITU Advisory Body for Submarine Cable Resilien…
S99
The Declaration for the Future of the Internet: Principles to Action — A fervent supporter of converting principles into tangible actions, Donohoe stresses the critical necessity of transform…
S100
Closing Session  — And I think the fact that the advisory body consists of governments and industry experts show that these are true. Truly…
S101
Agents of inclusion: Community networks &amp; media meet-up | IGF 2023 — She implies there is a gap in the use of technology in South America.
S102
Open Forum #40 Building a Child Rights Respecting Inclusive Digital Future — Speakers identified significant financing gaps, noting that less than 1% of venture capital funding goes to Africa, and …
S103
HIGH LEVEL LEADERS SESSION I — The argument made was for the establishment of a global framework or mechanism to address this gap. The sentiment regard…
S104
Leaders TalkX: Building inclusive and knowledge-driven digital societies — Many communities lack reliable internet connectivity and stable electricity, which are foundational elements for develop…
S105
Building Inclusive Societies with AI — Arundhati Bhattacharya show a video which will give you context of what the informal sector is, what are some of the in…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Robert Opp
1 argument · 138 words per minute · 1956 words · 845 seconds
Argument 1
AI equity gap and need for responsible use (Robert Opp)
EXPLANATION
Robert warns that while AI can drive sustainable development, its current trajectory is inequitable and could widen existing gaps. He stresses the necessity of a commitment to responsible AI to prevent worsening inequality and potential harms.
EVIDENCE
He notes that AI has powerful potential for development but its evolution is not equitable, warning that without responsible use the AI equity gap could worsen and cause inequality and harm [1-5]. He later emphasizes the importance of framework and governance in multi-stakeholder partnerships [83-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Robert Opp’s warning about an AI equity gap and the need for responsible AI is reflected in UNDP remarks and multistakeholder partnership discussions that stress power gaps, governance frameworks and responsible use of AI [S10][S12][S15].
MAJOR DISCUSSION POINT
AI equity gap
AGREED WITH
Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain, Speaker 1
Bärbel Kofler
4 arguments · 160 words per minute · 1225 words · 458 seconds
Argument 1
Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
EXPLANATION
Kofler highlights a global power gap where innovation exists worldwide but resources such as venture capital and data centers are concentrated in the Global North. She calls for legal frameworks, open‑source resources, and government action to close this gap.
EVIDENCE
She describes the power gap, noting that only 17 % of venture capital goes to regions where 90 % of people live, and that 0.1 % of data-center capacity is in the Global South [55-58]. She stresses the need for legal frameworks and open-source investment to make AI benefits accessible to all [47-53][80-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session notes quote Kofler’s statement “it’s not an innovation gap, it’s a power gap” and call for legal frameworks, open-source investment and governance to close the gap [S15][S13][S14].
MAJOR DISCUSSION POINT
Power gap and open‑source
AGREED WITH
Robert Opp, Arundhati Bhattacharya, Nakul Jain, Speaker 1
DISAGREED WITH
Arundhati Bhattacharya, Nakul Jain
Argument 2
Government must create enabling environment, training, and data access (Bärbel Kofler)
EXPLANATION
Kofler argues that governments need to provide an enabling environment that includes energy, research infrastructure, and skill development, as well as open data, to ensure AI benefits are widely shared.
EVIDENCE
She mentions the need for energy consumption support, research environments, and skill training for all users, emphasizing vocational and university training and making datasets publicly available [61-63][75-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources stress that governments need to provide energy, research infrastructure, skill development and open data to enable AI benefits, aligning with Kofler’s points on an enabling environment [S26][S25][S15].
MAJOR DISCUSSION POINT
Enabling environment and capacity building
Argument 3
Hamburg Declaration commitments: training, AI building blocks, datasets, country projects (Bärbel Kofler)
EXPLANATION
Kofler outlines concrete actions taken under the Hamburg Declaration, including exceeding training targets, delivering AI building blocks, diagnostics, datasets, and partnering on projects in Kenya, Cambodia, and India.
EVIDENCE
She reports that the commitment to train 160 000 people was exceeded with 190 000 trained, that 12 AI building blocks were expanded to 15, 30 diagnostics to 15, and 55 datasets were delivered, alongside projects in Kenya (satellite data for farmers), Cambodia (cervix cancer detection), and India (language datasets) [238-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Hamburg Declaration is highlighted as a collective action platform that set training targets and delivered AI building blocks and datasets across Kenya, Cambodia and India [S15][S13].
MAJOR DISCUSSION POINT
Hamburg Declaration tangible outcomes
Argument 4
Trained 190 000 people, delivered 12 AI building blocks, 30 diagnostics, 55 datasets, projects in Kenya, Cambodia, India (Bärbel Kofler)
EXPLANATION
Kofler reiterates the specific quantitative achievements of the Hamburg Declaration, emphasizing the scale of training and the breadth of AI tools and datasets produced for diverse regions.
EVIDENCE
She cites the numbers: 190 000 people trained, 12 AI building blocks (now 15), 30 diagnostics (now 15), and 55 datasets, plus collaborations in Kenya, Cambodia, and India for satellite data, cancer detection, and language resources [238-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same Hamburg Declaration summary reports the quantitative achievements Kofler cites – 190 000 trainees, expanded AI building blocks, diagnostics and datasets for the listed countries [S15].
MAJOR DISCUSSION POINT
Quantified progress of Hamburg Declaration
Arundhati Bhattacharya
4 arguments · 158 words per minute · 1779 words · 673 seconds
Argument 1
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya)
EXPLANATION
Arundhati stresses that large corporations must adopt self‑regulatory measures and ethical standards to avoid heavy-handed external regulation, ensuring technology benefits users responsibly.
EVIDENCE
She describes Salesforce’s self-regulatory approach, noting its “humane and ethical use of technology” office and the need for self-regulation to prevent strict regulator intervention [280-283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions note the importance of corporate self-regulation through offices of humane and ethical technology use to avoid heavy-handed regulation [S15][S27][S28].
MAJOR DISCUSSION POINT
Corporate self‑regulation
DISAGREED WITH
Bärbel Kofler, Nakul Jain
Argument 2
Policy makers must ensure ethical infrastructure, privacy and inclusive policies (Arundhati Bhattacharya)
EXPLANATION
Arundhati argues that policymakers must create ethical frameworks, protect data privacy, and build inclusive infrastructure so that AI adoption improves lives without exploitation.
EVIDENCE
She states that policy makers must ensure ethical infrastructure, privacy protection, and that offerings are not exploitative, emphasizing the role of government in providing right infrastructure and safeguarding privacy [124-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy-maker responsibilities for inclusive governance, privacy safeguards and ethical AI infrastructure are emphasized in the session on building inclusive governance [S29].
MAJOR DISCUSSION POINT
Policy responsibility for ethical AI
AGREED WITH
Robert Opp, Bärbel Kofler, Nakul Jain, Speaker 1
Argument 3
Salesforce’s democratization agenda, skilling millions, 1 % profit/product/time pledge (Arundhati Bhattacharya)
EXPLANATION
Arundhati outlines Salesforce’s commitment to democratize technology through a 1 % pledge of profit, product, and employee time, and through large‑scale skilling programmes that have created millions of certified “Trailblazers”.
EVIDENCE
She details the 1 % pledge (profit, product, time) and notes that Salesforce has trained 3.9 million Trailblazers in India, the second largest cohort after the US, and that the company actively skills people and provides products to the nonprofit sector [96-104][101-104].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Salesforce’s 1-1-1 model (1 % profit, product, employee time) and the training of 3.9 million “Trailblazers” in India are documented in the discussion of corporate democratization efforts [S29].
MAJOR DISCUSSION POINT
Democratization through corporate pledges and skilling
DISAGREED WITH
Bärbel Kofler, Nakul Jain
Argument 4
Self‑regulation, co‑innovation, and skilling missions through industry bodies (Arundhati Bhattacharya)
EXPLANATION
She highlights the importance of industry‑led self‑regulation, co‑innovation with customers, and participation in national skilling missions and academic partnerships to broaden AI impact.
EVIDENCE
She mentions Salesforce’s self-regulatory office, co-innovation with customers, participation in the National Skilling Mission, internships with AICTE, and collaborations with apex bodies to reach colleges and communities [280-288].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session highlights industry-led self-regulation, co-innovation with customers and participation in national skilling missions as key to broadening AI impact [S27][S28].
MAJOR DISCUSSION POINT
Industry‑driven co‑innovation and skilling
Nakul Jain
7 arguments · 170 words per minute · 1535 words · 539 seconds
Argument 1
AI assurance and regional evaluation hubs (Nakul Jain)
EXPLANATION
Nakul proposes establishing regional AI assurance hubs that would provide standardized evaluation frameworks, enabling solutions to be assessed and transferred across countries.
EVIDENCE
He suggests creating regional evaluation hubs to set common yardsticks for AI solutions, noting the difficulty of evaluating across borders and the need for such collaboration [298-301].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A dedicated segment on “Ensuring Safe AI – Monitoring Agents to Bridge the Global Assurance Gap” proposes regional AI assurance hubs to standardise evaluation across borders [S30].
MAJOR DISCUSSION POINT
Regional AI evaluation infrastructure
Argument 2
Institutional mechanisms to embed AI solutions in ministries (Nakul Jain)
EXPLANATION
He argues that embedding AI applications must be planned from day one, with institutional mechanisms that integrate solutions into existing government programmes and ministries.
EVIDENCE
He describes the need for institutional mechanisms to embed AI from day one, citing examples of education reading-fluency tools and health TB projects that required government ownership, data availability, and integration with existing systems [149-164].
MAJOR DISCUSSION POINT
Embedding AI in government structures
AGREED WITH
Robert Opp, Bärbel Kofler, Arundhati Bhattacharya, Speaker 1
Argument 3
Education and health pilots that required government, technical partners and NGOs (Nakul Jain)
EXPLANATION
Nakul shares concrete pilots in education and healthcare that succeeded only through multi‑stakeholder collaboration among government, technical partners, and NGOs.
EVIDENCE
He details an oral-reading-fluency education pilot in Gujarat that involved government data, technical partners, and teacher capacity-building, and a tuberculosis health product that involved ICMR for evaluation, illustrating the need for cross-sector collaboration [152-168].
MAJOR DISCUSSION POINT
Multi‑stakeholder pilots in education and health
Argument 4
Wadwani AI as a convener, scaling solutions and hand‑holding for impact (Nakul Jain)
EXPLANATION
Nakul describes Wadwani AI Global as a convening entity that brings together governments, private sector, and NGOs to ensure AI solutions move from labs to field impact, providing hand‑holding and capacity‑building.
EVIDENCE
He states that Wadwani AI sees itself as a convener that works with governments, provides advisory, capacity-building, and product development, and ensures solutions are not confined to labs but reach the field [173-175].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nakul Jain’s role at Wadwani AI Global as a convener that supports governments, private sector and NGOs to move AI solutions from labs to field is described in the summit overview [S18].
MAJOR DISCUSSION POINT
Convener role for impact‑driven AI
DISAGREED WITH
Bärbel Kofler, Arundhati Bhattacharya
Argument 5
LLMs often unsuitable for low‑resource settings; traditional ML works better (Nakul Jain)
EXPLANATION
Nakul observes that large language models are often impractical in low‑resource environments due to infrastructure constraints, whereas traditional machine‑learning models are more deployable and effective.
EVIDENCE
He explains that in low-resource settings with basic mobile phones and limited internet, traditional ML models have performed better, while LLMs face deployment challenges, leading to a shift toward small, task-specific models [363-365].
MAJOR DISCUSSION POINT
Suitability of ML vs LLM in low‑resource contexts
DISAGREED WITH
Robert Opp, Arundhati Bhattacharya
Argument 6
Creation of a global marketplace for AI solutions and shared playbooks (Nakul Jain)
EXPLANATION
He calls for a global repository or marketplace where AI solutions, playbooks, and governance frameworks can be shared, facilitating cross‑border deployment and scaling.
EVIDENCE
He points out the lack of a global repository, proposing a marketplace that includes shared solutions, playbooks, governance frameworks, and talent pools to help startups deploy tools internationally [293-298].
MAJOR DISCUSSION POINT
Global AI solution marketplace
DISAGREED WITH
Bärbel Kofler, Arundhati Bhattacharya
Argument 7
Regional AI assurance hubs to standardise evaluation across countries (Nakul Jain)
EXPLANATION
He reiterates the need for regional hubs that would provide consistent evaluation criteria, enabling AI solutions to be assessed and transferred between countries.
EVIDENCE
He again mentions the opportunity to create regional evaluation hubs that would set common evaluation yardsticks for AI solutions across borders [298-301].
MAJOR DISCUSSION POINT
Standardised regional AI evaluation
Speaker 1
5 arguments · 139 words per minute · 853 words · 367 seconds
Argument 1
Greening AI and trustworthy AI platforms (Speaker 1)
EXPLANATION
Speaker 1 emphasizes the need to make AI environmentally sustainable and introduces a trustworthy platform that helps engineers build responsible AI.
EVIDENCE
He mentions the development of the “Trusty Platform” to evaluate and calibrate AI, and notes work on greening AI through resource-aware AI initiatives [270-274].
MAJOR DISCUSSION POINT
Sustainable and trustworthy AI
Argument 2
National AI task‑force, AI Centres of Excellence, and compute strategies (Speaker 1)
EXPLANATION
He describes India’s national AI task‑force, the establishment of AI Centres of Excellence, and strategies to address compute gaps between the Global North and South.
EVIDENCE
He recounts participation in a task‑force led by the Principal Scientific Adviser, the creation of AI Centres of Excellence (AICoEs) with IIT Kanpur, and proposals to repurpose legacy hardware and explore emerging hardware, such as the proposed quantum valley, to bridge compute disparities [200-219].
MAJOR DISCUSSION POINT
National AI infrastructure and compute
Argument 3
Cross‑sector collaborations for AI diagnostics, climate data, and language resources (Speaker 1)
EXPLANATION
Speaker 1 outlines collaborations across countries and sectors delivering AI diagnostics, climate‑related satellite data for farmers, and multilingual language datasets.
EVIDENCE
He cites projects in Kenya (satellite data for farmers), Cambodia (cervical cancer diagnostics), and India (language datasets supporting AI frameworks) [250-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The multistakeholder partnership summary lists cross-country projects delivering AI diagnostics, satellite climate data for farmers and multilingual language datasets in Kenya, Cambodia and India [S15].
MAJOR DISCUSSION POINT
Cross‑sector AI applications
Argument 4
Need for industry‑specific, task‑specific AI and resolution of data silos (Speaker 1)
EXPLANATION
He argues that generic LLMs are insufficient; instead, industry‑specific, task‑focused AI solutions are needed, and data silos must be addressed to enable high‑quality data collection.
EVIDENCE
He discusses fragmented data silos, the need for extensive sensor infrastructure for air-pollution monitoring, and the importance of task-specific AI and agentic AI to connect enterprise systems [192-199][353-358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion points stress fragmented data silos and the necessity for industry-specific, task-focused AI solutions to unlock high-quality data, echoing the speaker’s argument [S15][S26].
MAJOR DISCUSSION POINT
Task‑specific AI and data integration
Argument 5
TCS’s Trusty Platform, open‑data ecosystem and greening AI initiatives (Speaker 1)
EXPLANATION
Speaker 1 highlights TCS’s contributions, including the Trusty Platform for responsible AI, building an open‑data ecosystem, and initiatives to reduce AI’s carbon footprint.
EVIDENCE
He mentions the Trusty Platform for evaluating AI, the focus on greening AI through resource-aware approaches, and TCS’s endorsement of the Hamburg Declaration [265-274].
MAJOR DISCUSSION POINT
TCS responsible AI tools
Audience Member 1
1 argument · 128 words per minute · 43 words · 20 seconds
Argument 1
Question on comparative value of LLMs versus traditional ML (Audience Member 1)
EXPLANATION
The audience member asks for clarification on the added value of large language models compared with traditional machine‑learning approaches.
EVIDENCE
The question is phrased: “what is the value being added in comparison to LLMs versus traditional ML?” [311-313].
MAJOR DISCUSSION POINT
LLM vs. traditional ML value
Audience Member 2
1 argument · 139 words per minute · 43 words · 18 seconds
Argument 1
Need to unify fragmented initiatives for greater impact (Audience Member 2)
EXPLANATION
The audience member asks how competing, fragmented initiatives can be coordinated to achieve a shared purpose and larger impact.
EVIDENCE
The question states: “There are fragmented initiatives… they are competing. So how do we bring them together for a shared purpose?” [317-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session on multistakeholder partnerships calls for collective action and coordination of fragmented AI initiatives under the Hamburg Declaration framework [S15].
MAJOR DISCUSSION POINT
Coordination of fragmented initiatives
Audience Member 3
1 argument · 183 words per minute · 211 words · 69 seconds
Argument 1
Concerns about SaaS valuation, democratisation and value creation for small enterprises (Audience Member 3)
EXPLANATION
The audience member seeks insight on how SaaS business models will evolve, whether democratization will benefit small enterprises, and how valuation trends affect them.
EVIDENCE
The question asks about the future of SaaS, valuation impacts, and how to create value for small enterprises in the context of technology democratization [326-333].
MAJOR DISCUSSION POINT
SaaS valuation and democratization
Agreements
Agreement Points
All speakers stress the need for responsible AI governance and legal/policy frameworks to prevent widening equity or power gaps.
Speakers: Robert Opp, Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain, Speaker 1
AI equity gap and need for responsible use (Robert Opp) · Power gap, need for legal framework and open‑source resources (Bärbel Kofler) · Policy makers must ensure ethical infrastructure, privacy and inclusive policies (Arundhati Bhattacharya) · Institutional mechanisms to embed AI solutions in ministries (Nakul Jain) · National AI task‑force, AI Centres of Excellence and compute strategies (Speaker 1)
The panel repeatedly highlighted that without clear governance, legal frameworks and coordinated policy action AI risks deepening existing inequities; governments, industry and academia must jointly create enforceable commitments and standards [1-5][49-58][124-129][149-164][200-203].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the fit-for-purpose AI governance discussions at the IGF and aligns with policy-lever recommendations for bridging the AI divide presented in recent multistakeholder forums [S59][S47].
Multi‑stakeholder collaboration is essential for deploying AI responsibly and at scale.
Speakers: Robert Opp, Bärbel Kofler, Nakul Jain, Arundhati Bhattacharya, Speaker 1
Framework and governance, important factors to have in the multi‑stakeholder partnerships (Robert Opp) · Multistakeholder engagement is required from all parts of society (Bärbel Kofler) · Need for a multi‑stakeholder ecosystem that brings expertise around the technology (Nakul Jain) · Co‑innovation with customers and participation in industry‑led skilling missions (Arundhati Bhattacharya) · Cross‑sector collaborations for AI diagnostics, climate data and language resources (Speaker 1)
All participants agreed that AI solutions only succeed when governments, private firms, civil society and academia work together from the outset, sharing data, expertise and implementation pathways [83-84][38-40][147-152][280-288][250-257].
POLICY CONTEXT (KNOWLEDGE BASE)
Multistakeholder partnerships are repeatedly highlighted, for example in the UN-led AI ecosystem session on power gaps [S48] and the Digital Public Infrastructure report emphasizing stakeholder involvement in policy formulation [S63][S66].
Capacity building and large‑scale training are critical to democratize AI benefits.
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain, Speaker 1
Trained 190,000 people, exceeding the Hamburg Declaration target (Bärbel Kofler) · Salesforce’s 1 % pledge and 3.9 million “Trailblazers” skilling programme (Arundhati Bhattacharya) · Teacher capacity‑building as part of education pilots (Nakul Jain) · Need for vocational and university training, and skill development for all users (Speaker 1)
The panelists cited concrete numbers and programmes that illustrate a shared commitment to up-skilling millions of users, teachers and professionals to ensure AI is usable and inclusive [238-241][96-104][101-104][165-166][75-80].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity constraints were identified as a bottleneck in AI deployment for developing economies [S55], while the U.S. “worker-first” AI agenda and broader talent-development strategies stress large-scale training as essential for democratization [S64][S65].
Open‑source data, open‑data ecosystems and shared resources are necessary to close the AI divide.
Speakers: Bärbel Kofler, Speaker 1, Arundhati Bhattacharya
Investing in open‑source and making datasets publicly available (Bärbel Kofler) · Building an open‑data ecosystem and addressing fragmented data silos (Speaker 1) · Democratizing technology by providing product access to non‑profits and NGOs (Arundhati Bhattacharya)
All three highlighted that open-source tools, open datasets and free product access are key levers to make AI usable across regions and sectors [80-81][247-257][192-199][95-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source approaches are advocated by META’s open LLM release [S56], by digital public-goods advocates promoting sovereignty over proprietary tools [S68], and by policy recommendations to default to open source in public procurement [S70].
Similar Viewpoints
Both warned that AI’s current trajectory risks widening existing inequities and called for concrete governance mechanisms to prevent a widening AI equity/power gap [1-5][49-58].
Speakers: Robert Opp, Bärbel Kofler
AI equity gap and power gap · Need for responsible AI governance
Both emphasized that governments should set the legal and policy foundations while the private sector adopts self‑regulation to ensure ethical AI deployment [124-129][280-283].
Speakers: Arundhati Bhattacharya, Bärbel Kofler
Policy makers must ensure ethical infrastructure, privacy and inclusive policies · Government must create enabling environment, legal frameworks and open‑source resources
Both advocated for dedicated evaluation mechanisms—regional hubs or platforms—that provide standardized, trustworthy assessment of AI solutions before scaling [298-301][270-274].
Speakers: Nakul Jain, Speaker 1
AI assurance and regional evaluation hubs · Trusty Platform for responsible AI and evaluation
Unexpected Consensus
Alignment of government and private‑sector leaders on self‑regulation combined with formal, measurable commitments.
Speakers: Bärbel Kofler, Arundhati Bhattacharya
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya) · Hamburg Declaration commitments with concrete, measurable duties (Bärbel Kofler)
While governments typically push for external regulation, Bärbel highlighted that the Hamburg Declaration requires concrete, self-imposed duties, and Arundhati argued that large corporations must self-regulate to avoid heavy-handed external rules; both converged on a hybrid model of self-regulation backed by enforceable commitments [280-283][235-241].
POLICY CONTEXT (KNOWLEDGE BASE)
Both government and human-rights representatives endorsed private-sector self-regulation within broader regulatory frameworks, and global AI standards bodies report high consensus on collaborative self-regulation [S51][S52].
Creation of a global marketplace/repository for AI solutions and playbooks.
Speakers: Nakul Jain, Speaker 1
Creation of a global repository of solutions, shared playbooks, governance frameworks (Nakul Jain) · Building an open‑data ecosystem and open‑source resources (Speaker 1)
Both identified the lack of a centralized platform for sharing AI tools, methodologies and data as a barrier; Nakul proposed a marketplace, while Speaker 1 called for an open-data ecosystem, indicating a shared vision for a global, reusable AI resource hub [293-298][192-199].
Overall Assessment

The panel displayed strong consensus on four pillars: (1) the necessity of robust, multi‑stakeholder governance frameworks to prevent AI‑driven inequities; (2) the centrality of collaborative partnerships across government, industry, academia and civil society; (3) the urgency of large‑scale training and capacity‑building programmes; and (4) the importance of open‑source data and shared resources to democratize AI. These convergences suggest a solid foundation for collective action and signal that future policy and programme design can build on shared commitments such as the Hamburg Declaration.

High consensus – most speakers reiterated the same themes with concrete examples, indicating a unified stance that can drive coordinated implementation of responsible AI for sustainable development.

Differences
Different Viewpoints
Who should bear the primary responsibility for governing and deploying AI for development
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Power gap, need for legal framework and open‑source resources (Bärbel Kofler) · Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya) · Wadhwani AI as a convener, scaling solutions and hand‑holding for impact (Nakul Jain)
Kofler argues that governments must lead by creating legal frameworks, open-source resources and training programmes to close the power gap [55-58][61-63][75-80]. Arundhati stresses that large corporations must self-regulate, adopt ethical standards and work with policy makers to ensure responsible AI deployment, highlighting Salesforce’s 1 % pledge and skilling initiatives [280-283][96-104][101-104]. Nakul positions his organisation as a neutral convener that brings together governments, private firms and NGOs, proposing marketplaces and evaluation hubs rather than a single lead actor [173-175][298-301]. All agree on the need for multi-stakeholder action but disagree on which actor should drive the process.
POLICY CONTEXT (KNOWLEDGE BASE)
IGF and UN-IDO sessions stress shared responsibility among governments, private sector, and civil society, emphasizing science-based, multi-stakeholder governance models [S59][S61][S63].
Preferred method for closing the AI equity / power gap
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Power gap, need for legal framework and open‑source resources (Bärbel Kofler) · Salesforce’s democratization agenda, skilling millions, 1 % profit/product/time pledge (Arundhati Bhattacharya) · Creation of a global marketplace for AI solutions and shared playbooks (Nakul Jain)
Kofler points to structural imbalances – only 17 % of venture capital and 0.1 % of data-centre capacity are in the Global South – and calls for legal frameworks, open-source investment and government-led training to close the gap [55-58][80-81]. Arundhati proposes corporate-driven democratization through large-scale skilling (3.9 million Trailblazers) and a 1 % pledge of profit, product and employee time to make technology accessible [96-104][101-104]. Nakul suggests building a global repository/marketplace of AI solutions, playbooks and talent pools to enable cross-border deployment, thereby addressing the gap through shared resources rather than direct investment [293-298].
POLICY CONTEXT (KNOWLEDGE BASE)
Speakers at AI Impact Summits and multistakeholder forums identified proactive policy levers, infrastructure investment, and capacity-building as key methods to narrow the equity and power gap [S46][S47][S48].
Technical suitability of large language models (LLMs) versus traditional machine‑learning in low‑resource contexts
Speakers: Nakul Jain, Robert Opp, Arundhati Bhattacharya
LLMs often unsuitable for low‑resource settings; traditional ML works better (Nakul Jain) · AI can have a powerful impact on sustainable development in a positive way (Robert Opp) · AI is going to solve a lot of problems which otherwise in a populous nation like ours cannot be solved (Arundhati Bhattacharya)
Nakul reports that in environments with basic mobile phones and limited internet, traditional ML models are more deployable, while LLMs face infrastructure constraints and often cannot be used, prompting a shift to smaller, task-specific models [363-365]. By contrast, Opp and Arundhati speak of AI’s broad transformative potential without distinguishing model types, implying that LLMs are part of the solution for large-scale challenges [1-5][121-123]. This creates a disagreement on the practicality of LLMs for development-focused AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses highlight LLM limitations for low-resource languages and the need for human moderation, as documented in content-moderation risk studies and performance critiques of current AI systems [S53][S54][S57].
Open‑source versus proprietary/partner‑driven AI solutions
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Power gap, need for legal framework and open‑source resources (Bärbel Kofler) · Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya) · Creation of a global marketplace for AI solutions and shared playbooks (Nakul Jain)
Kofler emphasizes investment in open-source AI to make data sets and tools freely available [80-81]. Arundhati describes Salesforce’s model of providing proprietary products to the nonprofit sector while ensuring access and training, relying on corporate licences rather than open-source distribution [98-104]. Nakul proposes a shared marketplace that could include both open and proprietary components, aiming to aggregate solutions for cross-border use [293-298]. The speakers differ on whether openness or controlled proprietary sharing is the optimal path to widespread AI adoption.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate is reflected in META’s open-source LLM strategy [S56], government advocacy for open digital public goods [S68], and U.S. policy promoting open-source by default in public procurement [S70].
Unexpected Differences
Contrasting views on the practicality of large language models in development contexts
Speakers: Nakul Jain, Robert Opp, Arundhati Bhattacharya
LLMs often unsuitable for low‑resource settings; traditional ML works better (Nakul Jain) · AI can have a powerful impact on sustainable development in a positive way (Robert Opp) · AI is going to solve a lot of problems which otherwise in a populous nation like ours cannot be solved (Arundhati Bhattacharya)
While Opp and Bhattacharya express broad optimism about AI’s transformative potential, Nakul points out concrete technical limitations of LLMs in low‑resource environments, favouring traditional ML models. This technical clash was not anticipated given the generally hopeful tone of the other speakers.
POLICY CONTEXT (KNOWLEDGE BASE)
Some experts point to bias, data-skew, and performance gaps of LLMs in low-resource settings, while others argue for human-centric approaches until technology matures [S53][S54][S57].
Open‑source emphasis by a government minister versus a corporate leader’s reliance on proprietary platforms
Speakers: Bärbel Kofler, Arundhati Bhattacharya
Power gap, need for legal framework and open‑source resources (Bärbel Kofler) · Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya)
Kofler explicitly promotes open-source investment as a means to democratise AI [80-81], whereas Arundhati’s strategy centres on providing proprietary Salesforce products and a 1 % pledge to the nonprofit sector, suggesting a different philosophy for achieving accessibility. The divergence between open-source advocacy and proprietary-driven democratisation was not foreseen.
Overall Assessment

The panel shows substantial consensus that AI can drive sustainable development and that multi‑stakeholder collaboration is essential. However, there are clear disagreements on who should lead the governance effort, the preferred mechanisms for closing the AI equity gap, the technical suitability of LLMs in low‑resource settings, and whether open‑source or proprietary approaches best achieve democratisation.

Moderate – while all participants share the overarching goal of responsible, inclusive AI, they diverge on strategic pathways and technical choices. These differences could affect policy design, funding allocations and partnership models, potentially slowing coordinated action unless reconciled.

Partial Agreements
All three agree that multi‑stakeholder collaboration is essential for responsible AI and that capacity building is required, but they diverge on the mechanism: Kofler favours government‑led legal frameworks and open‑source; Arundhati favours corporate self‑regulation, skilling and product access; Nakul favours a neutral convener role with marketplaces and evaluation hubs [47-53][61-63][75-80][280-283][96-104][173-175][298-301].
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Power gap, need for legal framework and open‑source resources (Bärbel Kofler) · Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya) · Wadhwani AI as a convener, scaling solutions and hand‑holding for impact (Nakul Jain)
All agree that large‑scale training and skilling are crucial for AI adoption. Kofler cites a government commitment to train 160,000 people, a target exceeded with 190,000 trained [238-241]; Arundhati highlights Salesforce’s 3.9 million Trailblazers programme [101-104]; Nakul stresses the need for a global repository to help startups scale solutions internationally, which would also require extensive skill development [293-298]. The disagreement lies in who should fund and deliver the training.
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Government must create enabling environment, training, and data access (Bärbel Kofler) · Salesforce’s democratization agenda, skilling millions, 1 % profit/product/time pledge (Arundhati Bhattacharya) · Creation of a global marketplace for AI solutions and shared playbooks (Nakul Jain)
All concur that ethical AI deployment requires oversight, but differ on the oversight model: Kofler calls for government‑mandated legal frameworks and open‑source transparency; Arundhati promotes corporate self‑regulation to pre‑empt heavy regulation; Nakul proposes independent regional AI assurance hubs to standardise evaluation across borders [280-283][298-301][80-81].
Speakers: Bärbel Kofler, Arundhati Bhattacharya, Nakul Jain
Self‑regulation and ethical deployment of technology (Arundhati Bhattacharya) · AI assurance and regional evaluation hubs (Nakul Jain) · Power gap, need for legal framework and open‑source resources (Bärbel Kofler)
Takeaways
Key takeaways
AI can accelerate sustainable development, but without responsible governance it risks widening equity and power gaps.
A multi‑stakeholder approach—government, private sector, civil society, academia, and international organisations—is essential for inclusive AI ecosystems.
Governments must provide legal frameworks, data access, training, and infrastructure (e.g., energy, compute, sensing) to enable equitable AI deployment.
Private‑sector firms can democratize technology through skilling programmes, open‑source tools, and self‑regulation, but need supportive policy and partnership.
AI assurance, regional evaluation hubs, and trustworthy‑AI platforms are needed to certify impact and mitigate risks.
Greening AI and responsible‑by‑design practices are becoming integral parts of AI strategies.
Traditional ML often outperforms large language models (LLMs) in low‑resource settings; task‑specific, lightweight models remain important.
The Hamburg Declaration provides concrete, measurable commitments (training, AI building blocks, diagnostics, datasets) and serves as a model for collective action.
Resolutions and action items
Germany’s Federal Ministry for Economic Cooperation and Development (BMZ) fulfilled and exceeded its Hamburg Declaration commitment to train 160,000 people, achieving 190,000 trained.
BMZ delivered 12 AI building blocks for climate action, 30 AI diagnostics, and 55 open datasets, with pilot projects in Kenya, Cambodia, and India.
Salesforce continues its 1 % profit/product/time pledge, expands the Trailblazer community (3.9 M members in India), and partners with national skilling missions and AICTE for AI education.
Wadhwani AI Global will act as a convener, creating a global repository/marketplace for AI solutions, shared playbooks, and talent pools.
Tata Consultancy Services (TCS) endorsed the Hamburg Declaration, launched the Trusty Platform for responsible‑AI evaluation, and is collaborating with Carnegie Mellon on AI‑trust research.
Proposal to repurpose legacy hardware and develop a “quantum valley” in India to address compute disparities between the Global North and South.
Call for the establishment of regional AI assurance/evaluation hubs to standardise impact assessment across countries.
Invitation for additional organisations to endorse the Hamburg Declaration via the QR‑code link.
Unresolved issues
How to operationalise a global marketplace for AI solutions and shared playbooks, including mechanisms for licensing, deployment, and revenue sharing.
Specific design and governance of regional AI assurance hubs – standards, funding, and coordination remain undefined.
Ways to effectively unify fragmented initiatives and enabling services without stifling healthy competition.
Detailed guidance on when to use LLMs versus traditional ML in low‑resource environments; scalability and deployment pathways are still open questions.
Long‑term financing and sustainability models for open‑source AI building blocks and data resources.
Responses to audience queries about SaaS valuation trends and the impact of AI on small enterprises were only partially addressed.
Suggested compromises
Private‑sector self‑regulation combined with government‑led frameworks to avoid heavy‑handed regulation while ensuring ethical standards.
Embedding AI solutions within existing government programmes (e.g., education, health) rather than creating parallel pilots.
Balancing competition and collaboration: encouraging consortia for shared infrastructure while allowing market competition for innovation.
Using lightweight, task‑specific ML models where LLMs are impractical, thereby matching technology to resource constraints.
Leveraging both open‑source AI components and proprietary innovations to broaden access while sustaining commercial incentives.
Thought Provoking Comments
It’s not an innovation gap, it’s a power gap. The Global South holds over 90 % of the world’s population but receives only 17 % of venture capital and hosts just 0.1 % of data‑centre capacity.
Highlights structural inequities that underlie AI adoption, reframing the problem from lack of technology to concentration of power and resources.
Shifted the conversation from abstract benefits of AI to concrete systemic barriers, prompting later speakers to discuss concrete measures (e.g., training, open‑source, infrastructure) to address the power imbalance.
Speaker: Bärbel Kofler
The financial‑inclusion programme succeeded only after technology (UIDAI, mobile networks, UPI) enabled direct subsidies, turning empty bank accounts into cash‑flow assets that banks would lend against.
Provides a vivid, data‑driven case study showing how technology can transform poverty alleviation, linking AI potential to real‑world outcomes and underscoring the role of policy and infrastructure.
Reinforced the need for government‑backed frameworks and sparked the discussion on responsible deployment, leading to deeper talk about policy makers’ responsibilities and the importance of skilling.
Speaker: Arundhati Bhattacharya
Technology is the easiest part; the real challenge is building a multi‑stakeholder ecosystem from day one that institutionalises the solution, embeds it in existing government processes, and provides monitoring and hand‑holding for users.
Moves the focus from building AI models to the surrounding institutional and operational context, emphasizing that impact depends on governance, not just tech.
Created a turning point toward concrete examples of successful partnerships (education reading‑fluency tool, TB health‑care), and set the stage for later suggestions about marketplaces and evaluation hubs.
Speaker: Nakul Jain
We have already trained 190,000 people (exceeding our 160,000 target), opened 12 AI building blocks, delivered 30 AI diagnostics and 55 datasets, and are working on satellite‑data services for farmers in Kenya and cervical‑cancer detection in Cambodia.
Demonstrates measurable progress from the Hamburg Declaration, turning a high‑level policy document into tangible outcomes.
Provided concrete evidence that collective commitments can be operationalised, encouraging other panelists to reference their own implementation efforts and reinforcing the call for more measurable pledges.
Speaker: Bärbel Kofler
We should create a global repository/marketplace of AI solutions and regional evaluation hubs so that a startup in India can sell a tool in Ethiopia and have clear, harmonised evaluation criteria.
Identifies a practical infrastructure gap that hampers scaling of responsible AI across borders, proposing a concrete mechanism to accelerate diffusion.
Introduced a new topic—cross‑border solution marketplaces—that broadened the discussion from national pilots to a global ecosystem, prompting other speakers to think about standardisation and shared governance.
Speaker: Nakul Jain
Large corporations need a self‑regulatory function for humane and ethical use of technology; otherwise regulators will impose heavy‑handed rules that could stifle innovation.
Frames corporate responsibility as a proactive safeguard against future regulation, linking ethical practice to business sustainability.
Shifted the tone toward pre‑emptive governance, influencing the later dialogue about private‑sector roles, skilling missions, and the balance between competition and collaboration.
Speaker: Arundhati Bhattacharya
In low‑resource settings, traditional ML models often outperform large language models because of bandwidth, device, and power constraints; we are moving toward small, purpose‑built AI rather than general‑purpose LLMs.
Challenges the prevailing hype around LLMs by grounding the conversation in the realities of deployment in the Global South.
Redirected the technical discussion to suitability of AI approaches for development contexts, reinforcing the earlier point about power gaps and influencing the audience’s perception of what AI solutions are feasible.
Speaker: Nakul Jain
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the dialogue from abstract optimism about AI to a nuanced examination of systemic inequities, concrete implementation challenges, and actionable solutions. Bärbel Kofler’s framing of the ‘power gap’ set the problem definition; Arundhati’s financial‑inclusion story illustrated the transformative potential when policy and technology align; Nakul Jain’s emphasis on ecosystem design and his marketplace proposal provided a roadmap for scaling impact; and the Hamburg Declaration updates supplied proof that collective commitments can translate into measurable outcomes. Together, these comments reshaped the conversation, prompting participants to focus on governance, infrastructure, and cross‑sector collaboration rather than technology alone, and they anchored the panel’s recommendations in real‑world examples and actionable next steps.

Follow-up Questions
How can a global repository or marketplace of AI solutions, playbooks, and governance frameworks be created to enable startups to sell and deploy tools across countries (e.g., from India to Ethiopia)?
A shared platform would reduce barriers to scaling AI for development and facilitate cross‑border collaboration, addressing the current difficulty of international deployment.
Speaker: Nakul Jain
What would be the design and governance model for regional AI assurance/evaluation hubs that can provide consistent evaluation criteria for AI solutions across different countries?
Standardized evaluation would help ensure that AI tools meet local regulatory and ethical standards, enabling smoother adoption in multiple regions.
Speaker: Nakul Jain
What are the technical and economic requirements for building affordable, high‑quality, high‑volume, high‑velocity sensing infrastructure (e.g., air‑pollution sensors) to feed AI systems?
Data quality is a bottleneck for AI; research is needed to develop cost‑effective sensor networks that can generate reliable data at scale.
Speaker: Sachin Loda
How can legacy hardware be repurposed effectively for AI workloads in the Global South to narrow the compute gap?
Leveraging existing hardware could provide a pragmatic path to increase AI compute capacity without the expense of the latest equipment.
Speaker: Sachin Loda
What opportunities does emerging quantum hardware (e.g., the proposed ‘quantum valley’ in India) present for sustainable AI applications?
Exploring quantum‑enabled AI could unlock new efficiencies and reduce energy consumption, aligning with greening‑AI goals.
Speaker: Sachin Loda
What metrics and impact assessments are needed to evaluate the effectiveness of the AI building blocks, AI diagnostics, and data sets released under the Hamburg Declaration?
Quantifying outcomes will determine whether the declared commitments translate into real‑world sustainable development benefits.
Speaker: Bärbel Kofler
How can open‑source AI resources and open data ecosystems be expanded to close the power gap between the Global North and South?
Open resources can democratize access to AI technology, mitigating inequities in venture capital and data‑center distribution.
Speaker: Bärbel Kofler
What governance frameworks are required to ensure equitable AI deployment in the Global South, addressing the identified AI equity and power gaps?
Effective policies are needed to prevent AI from widening existing socioeconomic disparities.
Speaker: Bärbel Kofler
What is the comparative performance and suitability of traditional machine‑learning models versus large language models (LLMs) in low‑resource, low‑connectivity environments?
Understanding which approaches work best in constrained settings will guide technology choices for impact‑focused deployments.
Speaker: Nakul Jain
How can fragmented AI initiatives and enabling services be coordinated into a shared purpose to avoid competition and increase collective impact?
Coordination mechanisms are needed to align diverse projects, maximize resource use, and achieve larger development outcomes.
Speaker: Audience Member 2 (directed to Arundhati Bhattacharya)
What strategies are needed to mitigate potential job displacement (e.g., teachers) caused by AI adoption and to provide appropriate hand‑holding and capacity‑building at the field level?
Ensuring that AI augments rather than replaces workers is crucial for sustainable adoption and social acceptance.
Speaker: Nakul Jain
What role should self‑regulation play for large tech firms versus external regulatory oversight to ensure ethical AI deployment?
Balancing internal governance with public regulation can prevent heavy‑handed interventions while maintaining responsible innovation.
Speaker: Arundhati Bhattacharya
How effective are large‑scale skilling programs (e.g., Salesforce’s Trailblazer community) in creating AI‑ready workforces and driving technology adoption in developing economies?
Evaluating skilling outcomes will inform how best to build human capital for AI‑driven development.
Speaker: Arundhati Bhattacharya
What methodologies can be employed to measure and reduce the environmental footprint of AI systems (greening AI), including resource‑aware model design and energy consumption tracking?
Quantifying AI’s carbon impact is essential for aligning AI development with sustainability goals.
Speaker: Sachin Loda (TCS)
What concrete indicators can track progress of collective action on responsible AI after the Hamburg Declaration, and how can they be operationalized across stakeholder groups?
Defining clear, measurable indicators will enable monitoring of commitments and guide future scaling efforts.
Speaker: Robert Opp (question to Bärbel Kofler)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.